id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
2,946,366 | https://en.wikipedia.org/wiki/Saturated%20measure | In mathematics, a measure is said to be saturated if every locally measurable set is also measurable. A set A, not necessarily measurable, is said to be locally measurable if for every measurable set B of finite measure, A ∩ B is measurable. σ-finite measures and measures arising as the restriction of outer measures are saturated.
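Restating the definition above in standard measure-theoretic notation (a minimal sketch; the symbols X, Σ and μ for the measure space are labels introduced here, not part of the original text):

```latex
% Let (X, \Sigma, \mu) be a measure space.
% A subset A of X is locally measurable iff A \cap B \in \Sigma
% for every B \in \Sigma with \mu(B) < \infty.
% The measure \mu is saturated iff every locally measurable set is measurable:
\[
  \bigl( \forall B \in \Sigma,\ \mu(B) < \infty \;\Rightarrow\; A \cap B \in \Sigma \bigr)
  \;\Longrightarrow\; A \in \Sigma
  \qquad \text{for every } A \subseteq X .
\]
```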
References
Measures (measure theory) | Saturated measure | [
"Physics",
"Mathematics"
] | 71 | [
"Mathematical analysis",
"Physical quantities",
"Mathematical analysis stubs",
"Measures (measure theory)",
"Quantity",
"Size"
] |
2,946,521 | https://en.wikipedia.org/wiki/Excludability | In economics, excludability is the degree to which a good, service or resource can be limited to only paying customers, or conversely, the degree to which a supplier, producer or other managing body (e.g. a government) can prevent consumption of a good. A good, service or resource is broadly assigned two fundamental characteristics: a degree of excludability and a degree of rivalry.
Excludability was originally proposed in 1954 by American economist Paul Samuelson, who formalised the concept now known as public goods, i.e. goods that are both non-rivalrous and non-excludable. Samuelson additionally highlighted the market failure of the free-rider problem that can occur with non-excludable goods. Samuelson's theory of good classification was further expanded upon by Richard Musgrave in 1959 and by Garrett Hardin in 1968, who highlighted another key market inefficiency of non-excludable goods: the tragedy of the commons.
Excludability is not an inherent characteristic of a good. The concept was therefore further developed by Elinor Ostrom in 1990 into a continuous characteristic, as opposed to the discrete characteristic proposed by Samuelson (who presented excludability as either present or absent). Ostrom's theory proposed that excludability can be placed on a scale ranging from fully excludable (a good that could theoretically fully exclude non-paying consumers) to fully non-excludable (a good that cannot exclude non-paying consumers at all). This scale gives producers and providers more in-depth information that can be used to generate more efficient price equations (for public goods in particular), maximizing benefits and positive externalities for all consumers of the good.
Definition matrix
Examples
Excludable
The key characteristic of an excludable good is that its producer, supplier or managing body has been able to restrict consumption to paying consumers and exclude non-paying consumers. If a good has a price attached to it, whether a one-time payment as in the case of clothing or cars, an ongoing payment such as a subscription fee for a magazine, or a per-use fee as in the case of public transport, it can be considered excludable to some extent.
A common example is a movie in a cinema. Paying customers are given a ticket that would entitle them to a single showing of the movie, and this is checked and ensured by ushers, security and other employees of the cinema. This means that a viewing of the movie is excludable and non-paying consumers are unable to experience the movie.
Semi-Excludable
Between fully excludable and fully non-excludable lies the continuous scale of excludability that Ostrom developed. Within this scale are goods that attempt to be excludable but cannot effectively or efficiently enforce that excludability. One example concerns many forms of information such as music, movies, e-books and computer software. All of these goods have some price or payment involved in their consumption, but are also susceptible to piracy and copyright infringement. This can result in many non-paying consumers being able to experience and benefit from the goods of a single purchase or payment.
Non-Excludable
A good, service or resource that is unable to prevent or exclude non-paying consumers from experiencing or using it can be considered non-excludable. An architecturally pleasing building, such as Tower Bridge, creates an aesthetic non-excludable good, which can be enjoyed by anyone who happens to look at it. It is difficult to prevent people from gaining this benefit. A lighthouse acts as a navigation aid to ships at sea in a manner that is non-excludable since any ship out at sea can benefit from it.
Implications and inefficiency
Public goods will generally be underproduced and undersupplied in the absence of government subsidies, relative to a socially optimal level. This is because potential producers will not be able to realize a profit (since the good can be obtained for free) sufficient to justify the costs of production. In this way the provision of non-excludable goods is a classic example of a positive externality which leads to inefficiency. In extreme cases this can result in the good not being produced at all, or it being necessary for the government to organize its production and distribution.
A classic example of the inefficiency caused by non-excludability is the tragedy of the commons (which Hardin, the author, later corrected to the 'tragedy of the unmanaged commons' because it is based on the notion of an entirely rule-less resource) where a shared, non-excludable, resource becomes subject to over-use and over-consumption, which destroys the resource in the process.
Economic theory
Brito and Oakland (1980) study the private, profit-maximizing provision of excludable public goods in a formal economic model. They take into account that the agents have private information about their valuations of the public good. Yet, Brito and Oakland only consider posted-price mechanisms, i.e. there are ad-hoc constraints on the class of contracts. Also taking distribution costs and congestion effects into account, Schmitz (1997) studies a related problem, but he allows for general mechanisms. Moreover, he also characterizes the second-best allocation rule, which is welfare-maximizing under the constraint of nonnegative profits. Using the incomplete contracts theory, Francesconi and Muthoo (2011) explore whether public or private ownership is more desirable when non-contractible investments have to be made in order to provide a (partly) excludable public good.
See also
Rivalry
Free rider problem
Tragedy of the Commons
References
Further reading
Excludability, in: Joseph E. Stiglitz: Knowledge as a Global Public Good, World Bank. Last accessed 29 May 2007. Copy at the Internet Archive
Goods (economics) | Excludability | [
"Physics"
] | 1,263 | [
"Materials",
"Goods (economics)",
"Matter"
] |
2,947,017 | https://en.wikipedia.org/wiki/Architectural%20Experience%20Program | Formerly called the Intern Development Program (IDP), the Architectural Experience Program (AXP) is designed to ensure that candidates pursuing licensure in the architecture profession gain the knowledge and skills required for the independent practice of architecture. The program is developed, maintained, and administered by the National Council of Architectural Registration Boards (NCARB) and is required by most U.S. architectural registration boards to satisfy experience requirements for licensure.
History
In 1976, NCARB introduced the Intern Development Program (IDP) after working with the American Institute of Architects (AIA) throughout the 1970s to develop a more structured program for candidates to ensure they were gaining the knowledge and skills necessary to practice independently. Administered by NCARB, jurisdictions gradually began adopting the program to satisfy their experience requirement.
Mississippi became the first state to require the IDP in 1978. All 54 U.S. jurisdictions accept the IDP toward the fulfillment of their experience requirement.
The first major change to the program came in 1996 when it became required to record actual training units earned rather than the percentage of time spent in each training area. The program has been monitored annually by NCARB’s Internship Committee, which has recommended other minor changes over the years based on interpretations of the current practice of architecture.
In May 2009, NCARB announced the rollout of IDP 2.0, the most significant update to the program since its inception in the 1970s. IDP 2.0 more closely aligns the program's requirements with the current practice of architecture and ensures the comprehensive training that is essential for competent practice.
IDP 2.0 was developed in response to the 2007 Practice Analysis of Architecture. In this study, almost 10,000 practicing architects completed an extensive electronic survey to identify the tasks, knowledge, and skills that recently licensed architects, practicing independently, need in order to protect the health, safety, and welfare of the public.
The updates were rolled out in phases with the first phase occurring in July 2009 and the final in April 2012. In July 2015, the IDP was streamlined to reduce experience hours required from 5,600 to 3,740.
In order to address the findings of the 2012 Practice Analysis, NCARB began an in-depth review and overhaul of the experience program to ensure that the requirements continued to adhere to current architectural practice. In addition, NCARB decided to rename the IDP the Architectural Experience Program (AXP) as part of an effort to sunset the term “intern.”
The introduction of the new name and the overhaul were both launched on June 29, 2016. In the AXP, the previous 17 experience categories were realigned into six broad areas that reflect the current practice of architecture.
Participants
An individual seeking architectural licensure is referred to as a "licensure candidate." All U.S. states and Canadian provinces prohibit the use of the title "architect" by any person not licensed to practice architecture. Most states and provinces also prohibit any derivation of the word architect.
A supervisor is someone who reviews and directs the work of others and ensures that work is done within acceptable levels of quality. An AXP supervisor is the individual who supervises a candidate on a daily basis. The AXP supervisor is required to certify that the information submitted on an experience report is true and correct.
A mentor is a loyal adviser, teacher, or coach. An AXP mentor must be a registered architect who makes a long-term commitment to a candidate’s professional growth. If possible, the mentor should not work in the same office so that the candidate can gain useful insight into the daily work experience.
Eligibility
The first step to beginning the AXP is to establish an NCARB Record. Candidates are eligible to start earning credit for the AXP once they have graduated from high school. In order to gain experience, they must work under the direct supervision of an AXP supervisor in one of the NCARB-approved work settings.
All experience must be reported electronically to NCARB at least every eight months through their NCARB Record, and experience may be submitted more often. Half credit will be given for experience reported that is up to five years old.
Experience Areas
Licensure candidates must acquire 3,740 experience hours across six experience areas to complete the AXP. These areas were effective June 2016.
Practice Management
Required hours: 160
Practice Management is where licensure candidates gain experience running an architecture firm—including the ins and outs of managing a business, marketing firms, securing projects, working with clients, and sustaining a positive and professional work environment.
Project Management
Required hours: 360
In Project Management, licensure candidates learn how to deliver projects that meet contractual requirements, so they’ll be prepared to budget, coordinate, oversee, and execute a project.
Programming & Analysis
Required hours: 260
Programming & Analysis is the first phase of a project, often referred to as pre-design. Licensure candidates will experience tasks related to researching and evaluating client requirements, building code and zoning regulations, and site data to develop recommendations on the feasibility of a project.
Project Planning & Design
Required hours: 1,080
Project Planning & Design covers the schematic design phase of a project. Licensure candidates will learn to lay out the building design, review building codes and regulations, coordinate schematics with consultants, and communicate design concepts with clients.
Project Development & Documentation
Required hours: 1,520
In Project Development & Documentation, licensure candidates will gain experience with projects after the schematic design has been approved—focusing on construction documents and coordinating with regulatory authorities to gain the necessary approvals for construction.
Construction & Evaluation
Required hours: 360
In Construction & Evaluation, licensure candidates will get involved with the construction administration and post-construction phases of a project—this includes being out on the job site; meeting with contractors, clients, and building officials; and completing punch lists, leading to the completion of the project.
Total hours: 3,740
Resources
AXP Guidelines: Produced by NCARB, the document is essential reading for participants in the AXP. It includes steps to completing the program, reporting procedures, training requirements, and core competencies that candidates should understand before becoming licensed. The document is updated about twice a year.
Architect Licensing Advisor: An individual who provides information and guidance for those working toward licensure. Licensing advisors are usually located at:
NAAB-accredited architectural degree programs
AIA chapters
AIAS chapters
Firms
State registration boards
You can find your local architect licensing advisor through the NCARB website.
See also
Architect
Intern architect
National Council of Architectural Registration Boards
American Institute of Architecture Students
Intern Architect Program
Architect Registration Examination
References
External links
AXP Guidelines - NCARB
Start the AXP - NCARB
Reporting Requirement - NCARB
Your Supervisor's Role - NCARB
Find Your Architect Licensing Advisor - NCARB
Earn Your Architecture License - NCARB
Architectural education
Internships | Architectural Experience Program | [
"Engineering"
] | 1,407 | [
"Architectural education",
"Architecture"
] |
2,947,499 | https://en.wikipedia.org/wiki/American%20Institute%20of%20Constructors | The American Institute of Constructors (AIC) is a not-for-profit 501(c)(6) non-governmental professional association founded in 1971. Individuals involved in the AIC are typically found in the construction management industry.
The AIC offers three different levels of certification: Associate Constructor (AC), Certified Professional Constructor (CPC), and Fellow (FC). The AIC also offers a number of educational programs, including online courses, webinars, and in-person seminars.
References
External links
Construction organizations
Construction management | American Institute of Constructors | [
"Engineering"
] | 116 | [
"Construction",
"Construction management",
"Construction organizations"
] |
2,947,580 | https://en.wikipedia.org/wiki/Rodent%20cocktail | Rodent cocktail is an anesthetic mixture used for rodents in research. The injectable, clear liquid is a mixture of ketamine, xylazine, and acepromazine. The ratio used depends on the species of rodent. This mixture is often preferred by researchers because of its low mortality in rodents, its relatively quick recovery time (one hour after injection), and low cost.
References
General anesthetics
Laboratory rodents
Veterinary drugs | Rodent cocktail | [
"Biology"
] | 94 | [
"Molecular genetics",
"Laboratory rodents"
] |
2,948,178 | https://en.wikipedia.org/wiki/Biomedical%20text%20mining | Biomedical text mining (including biomedical natural language processing or BioNLP) refers to the methods and study of how text mining may be applied to texts and literature of the biomedical domain. As a field of research, biomedical text mining incorporates ideas from natural language processing, bioinformatics, medical informatics and computational linguistics. The strategies in this field have been applied to the biomedical literature available through services such as PubMed.
In recent years, the scientific literature has shifted to electronic publishing but the volume of information available can be overwhelming. This revolution of publishing has caused a high demand for text mining techniques. Text mining offers information retrieval (IR) and entity recognition (ER). IR allows the retrieval of relevant papers according to the topic of interest, e.g. through PubMed. ER is practiced when certain biological terms are recognized (e.g. proteins or genes) for further processing.
Considerations
Applying text mining approaches to biomedical text requires specific considerations common to the domain.
Availability of annotated text data
Large annotated corpora used in the development and training of general purpose text mining methods (e.g., sets of movie dialogue, product reviews, or Wikipedia article text) are not specific for biomedical language. While they may provide evidence of general text properties such as parts of speech, they rarely contain concepts of interest to biologists or clinicians. Development of new methods to identify features specific to biomedical documents therefore requires assembly of specialized corpora. Resources designed to aid in building new biomedical text mining methods have been developed through the Informatics for Integrating Biology and the Bedside (i2b2) challenges and biomedical informatics researchers. Text mining researchers frequently combine these corpora with the controlled vocabularies and ontologies available through the National Library of Medicine's Unified Medical Language System (UMLS) and Medical Subject Headings (MeSH).
Machine learning-based methods often require very large data sets as training data to build useful models. Manual annotation of large text corpora is not realistically possible. Training data may therefore be products of weak supervision or purely statistical methods.
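As an illustration of the weak-supervision idea mentioned above (not a description of any particular published system), the following sketch derives noisy training labels from simple keyword rules and resolves disagreements by majority vote; the rules, label names and sentences are invented for this example.

```python
from collections import Counter

# Each labeling rule votes for a label or abstains by returning None.
def rule_gene_terms(text):
    return "genetic" if any(w in text.lower() for w in ("gene", "mutation", "allele")) else None

def rule_clinical_terms(text):
    return "clinical" if any(w in text.lower() for w in ("patient", "trial", "dose")) else None

def rule_sequencing(text):
    return "genetic" if "sequencing" in text.lower() else None

RULES = [rule_gene_terms, rule_clinical_terms, rule_sequencing]

def weak_label(text):
    """Majority vote over the non-abstaining rules; None if every rule abstains."""
    votes = [v for v in (rule(text) for rule in RULES) if v is not None]
    return Counter(votes).most_common(1)[0][0] if votes else None

sentences = [
    "Whole-genome sequencing revealed a novel BRCA1 mutation.",
    "Patients in the treatment arm received a reduced dose.",
]
print([weak_label(s) for s in sentences])  # ['genetic', 'clinical']
```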
Data structure variation
Like other text documents, biomedical documents contain unstructured data. Research publications follow different formats, contain different types of information, and are interspersed with figures, tables, and other non-text content. Both unstructured text and semi-structured document elements, such as tables, may contain important information that should be text mined. Clinical documents may vary in structure and language between departments and locations. Other types of biomedical text, such as drug labels, may follow general structural guidelines but lack further details.
Uncertainty
Biomedical literature contains statements about observations that may not be statements of fact. This text may express uncertainty or skepticism about claims. Without specific adaptations, text mining approaches designed to identify claims within text may mis-characterize these "hedged" statements as facts.
Supporting clinical needs
Biomedical text mining applications developed for clinical use should ideally reflect the needs and demands of clinicians. This is a concern in environments where clinical decision support is expected to be informative and accurate. A comprehensive overview of the development and uptake of NLP methods applied to free-text clinical notes related to chronic diseases has been presented in the literature.
Interoperability with clinical systems
New text mining systems must work with existing standards, electronic medical records, and databases. Methods for interfacing with clinical systems such as LOINC have been developed but require extensive organizational effort to implement and maintain.
Patient privacy
Text mining systems operating with private medical data must respect its security and ensure it is rendered anonymous where appropriate.
Processes
Specific subtasks are of particular concern when processing biomedical text.
Named entity recognition
Developments in biomedical text mining have incorporated identification of biological entities with named entity recognition, or NER. Names and identifiers for biomolecules such as proteins and genes, chemical compounds and drugs, and disease names have all been used as entities. Most entity recognition methods are supported by pre-defined linguistic features or vocabularies, though methods incorporating deep learning and word embeddings have also been successful at biomedical NER.
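As a minimal illustration of the vocabulary-supported approach described above (not a specific published system), the following sketch tags gene and disease mentions by matching a small hand-made dictionary; the terms and the example sentence are invented for this purpose.

```python
import re

# Toy vocabularies standing in for curated term lists such as those in UMLS or MeSH.
VOCABULARY = {
    "GENE": ["BRCA1", "TP53", "EGFR"],
    "DISEASE": ["breast cancer", "glioblastoma"],
}

def dictionary_ner(text):
    """Return (entity_type, surface_form, start, end) for every dictionary match."""
    hits = []
    for entity_type, terms in VOCABULARY.items():
        for term in terms:
            # Case-insensitive, word-boundary match of the vocabulary term.
            for m in re.finditer(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
                hits.append((entity_type, m.group(0), m.start(), m.end()))
    return sorted(hits, key=lambda hit: hit[2])

sentence = "Mutations in BRCA1 and TP53 are associated with breast cancer."
for hit in dictionary_ner(sentence):
    print(hit)
```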
Document classification and clustering
Biomedical documents may be classified or clustered based on their contents and topics. In classification, document categories are specified manually, while in clustering, documents form algorithm-dependent, distinct groups. These two tasks are representative of supervised and unsupervised methods, respectively, yet the goal of both is to produce subsets of documents based on their distinguishing features. Methods for biomedical document clustering have relied upon k-means clustering.
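A minimal sketch of the unsupervised route mentioned above, assuming scikit-learn is available; the toy abstracts are invented, and a real pipeline would cluster thousands of documents rather than four.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy "abstracts"; real applications cluster large literature collections.
abstracts = [
    "TP53 mutation frequency in glioblastoma tumor samples",
    "EGFR signalling pathways in lung tumour growth",
    "Randomized trial of statin therapy for cardiovascular risk",
    "Blood pressure outcomes under combined antihypertensive treatment",
]

# Represent each document as a TF-IDF vector, then group documents with k-means.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, abstract in zip(labels, abstracts):
    print(label, abstract)
```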
Relationship discovery
Biomedical documents describe connections between concepts, whether they are interactions between biomolecules, events occurring subsequently over time (i.e., temporal relationships), or causal relationships. Text mining methods may perform relation discovery to identify these connections, often in concert with named entity recognition.
Hedge cue detection
The challenge of identifying uncertain or "hedged" statements has been addressed through hedge cue detection in biomedical literature.
Claim detection
Multiple researchers have developed methods to identify specific scientific claims from literature. In practice, this process involves both isolating phrases and sentences denoting the core arguments made by the authors of a document (a process known as argument mining, employing tools used in fields such as political science) and comparing claims to find potential contradictions between them.
Information extraction
Information extraction, or IE, is the process of automatically identifying structured information from unstructured or partially structured text. IE processes can involve several or all of the above activities, including named entity recognition, relationship discovery, and document classification, with the overall goal of translating text to a more structured form, such as the contents of a template or knowledge base. In the biomedical domain, IE is used to generate links between concepts described in text, such as gene A inhibits gene B and gene C is involved in disease G. Biomedical knowledge bases containing this type of information are generally products of extensive manual curation, so replacement of manual efforts with automated methods remains a compelling area of research.
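As a crude illustration of turning free text into structured relations of the "gene A inhibits gene B" kind described above, the following sketch applies a single hand-written pattern; production IE systems combine named entity recognition with much richer patterns or learned extractors, and the pattern, verbs and sentence here are invented.

```python
import re

# One hand-written pattern covering two relation types; real systems use many
# patterns, syntactic analyses, or machine-learned extractors.
PATTERN = re.compile(r"(?P<subj>\w+)\s+(?P<rel>inhibits|activates)\s+(?P<obj>\w+)")

def extract_relations(text):
    """Return (subject, relation, object) triples suitable for a knowledge base."""
    return [(m.group("subj"), m.group("rel"), m.group("obj"))
            for m in PATTERN.finditer(text)]

text = "Our results show that GeneA inhibits GeneB, while GeneC activates GeneD."
print(extract_relations(text))
# [('GeneA', 'inhibits', 'GeneB'), ('GeneC', 'activates', 'GeneD')]
```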
Information retrieval and question answering
Biomedical text mining supports applications for identifying documents and concepts matching search queries. Search engines such as PubMed search allow users to query literature databases with words or phrases present in document contents, metadata, or indices such as MeSH. Similar approaches may be used for medical literature retrieval. For more fine-grained results, some applications permit users to search with natural language queries and identify specific biomedical relationships.
On 16 March 2020, the National Library of Medicine and others launched the COVID-19 Open Research Dataset (CORD-19) to enable text mining of the current literature on the novel virus. The dataset is hosted by the Semantic Scholar project of the Allen Institute for AI. Other participants include Google, Microsoft Research, the Center for Security and Emerging Technology, and the Chan Zuckerberg Initiative.
Resources
Corpora
The following table lists a selection of biomedical text corpora and their contents. These items include annotated corpora, sources of biomedical research literature, and resources frequently used as vocabulary and/or ontology references, such as MeSH. Items marked "Yes" under "Freely Available" can be downloaded from a publicly accessible location.
Word embeddings
Several groups have developed sets of biomedical vocabulary mapped to vectors of real numbers, known as word vectors or word embeddings. Sources of pre-trained embeddings specific for biomedical vocabulary are listed in the table below. The majority are results of the word2vec model developed by Mikolov et al. or variants of word2vec.
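A small sketch of how such pre-trained vectors are typically consumed, assuming a plain word2vec-style text file with one word followed by its vector per line and no leading count header; the file name and the example words are placeholders rather than references to any specific resource.

```python
import numpy as np

def load_embeddings(path):
    """Read a word2vec-style text file: one 'word v1 v2 ... vN' entry per line."""
    vectors = {}
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "biomedical_vectors.txt" is a placeholder path for any pre-trained embedding set.
embeddings = load_embeddings("biomedical_vectors.txt")
print(cosine(embeddings["aspirin"], embeddings["ibuprofen"]))  # related terms score higher
```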
Applications
Text mining applications in the biomedical field include computational approaches to assist with studies in protein docking, protein interactions, and protein-disease associations. Text mining techniques have several advantages over traditional manual curation for identifying associations. Text mining algorithms can identify and extract information from a vast amount of literature more efficiently than manual curation. This includes integrating data from different sources, including literature, databases, and experimental results. These algorithms have transformed the process of identifying and prioritizing novel genes and gene-disease associations that had previously been overlooked.
These methods provide the foundation for systematic searches of overlooked scientific and biomedical literature that may carry significant associations between research findings. Combining information can lead to new discoveries and hypotheses, especially when datasets are integrated, although the quality of a database is as important as its size. Promising text mining methods such as iProLINK (integrated Protein Literature Information and Knowledge) have been developed to curate data sources that can aid text mining research in areas of bibliography mapping, annotation extraction, protein named entity recognition, and protein ontology development. Curated databases such as UniProt can accelerate access to targeted information not only for genetic sequences, but also for literature and phylogeny.
Gene cluster identification
Methods for determining the association of gene clusters obtained by microarray experiments with the biological context provided by the corresponding literature have been developed.
Protein interactions
Automatic extraction of protein interactions and associations of proteins to functional concepts (e.g. gene ontology terms) has been explored. The search engine PIE was developed to identify and return protein-protein interaction mentions from MEDLINE-indexed articles. The extraction of kinetic parameters from text or the subcellular location of proteins have also been addressed by information extraction and text mining technology.
Gene-disease associations
Computational gene prioritization is an essential step in understanding the genetic basis of diseases, particularly within genetic linkage analysis. Text mining and other computational tools extract relevant information, including gene-disease associations, among others, from numerous data sources, then apply different ranking algorithms to prioritize the genes based on their relevance to the specific disease. Text mining and gene prioritization allow researchers to focus their efforts on the most promising candidates for further research.
Computational tools for gene prioritization continue to be developed and analyzed. One group studied the performance of various text-mining techniques for disease gene prioritization. They investigated different domain vocabularies, text representation schemes, and ranking algorithms in order to find the best approach for identifying disease-causing genes to establish a benchmark.
Gene-trait associations
An agricultural genomics group identified genes related to bovine reproductive traits using text mining, among other approaches.
Applications of phrase mining to disease associations
A text mining study assembled a collection of 709 core extracellular matrix proteins and associated proteins based on two databases: MatrixDB (matrixdb.univ-lyon1.fr) and UniProt. This set of proteins had a manageable size and a rich body of associated information, making it suitable for the application of text mining tools. The researchers conducted phrase-mining analysis to cross-examine individual extracellular matrix proteins across the biomedical literature concerned with six categories of cardiovascular diseases. They used a phrase-mining pipeline, Context-aware Semantic Online Analytical Processing (CaseOLAP), to semantically score all 709 proteins according to their Integrity, Popularity, and Distinctiveness. The text mining study validated existing relationships and informed previously unrecognized biological processes in cardiovascular pathophysiology.
Software tools
Search engines
Search engines designed to retrieve biomedical literature relevant to a user-provided query frequently rely upon text mining approaches. Publicly available tools specific for research literature include PubMed search, Europe PubMed Central search, GeneView, and APSE. Similarly, search engines and indexing systems specific for biomedical data have been developed, including DataMed and OmicsDI.
Some search engines, such as Essie, OncoSearch, PubGene, and GoPubMed were previously public but have since been discontinued, rendered obsolete, or integrated into commercial products.
Medical record analysis systems
Electronic medical records (EMRs) and electronic health records (EHRs) are collected by clinical staff in the course of diagnosis and treatment. Though these records generally include structured components with predictable formats and data types, the remainder of the reports are often free-text and difficult to search, leading to challenges with patient care. Numerous complete systems and tools have been developed to analyse these free-text portions. The MedLEE system was originally developed for analysis of chest radiology reports but later extended to other report topics. The clinical Text Analysis and Knowledge Extraction System, or cTAKES, annotates clinical text using a dictionary of concepts. The CLAMP system offers similar functionality with a user-friendly interface.
Frameworks
Computational frameworks have been developed to rapidly build tools for biomedical text mining tasks. SwellShark is a framework for biomedical NER that requires no human-labeled data but does make use of resources for weak supervision (e.g., UMLS semantic types). The SparkText framework uses Apache Spark data streaming, a NoSQL database, and basic machine learning methods to build predictive models from scientific articles.
APIs
Some biomedical text mining and natural language processing tools are available through application programming interfaces, or APIs. NOBLE Coder performs concept recognition through an API.
Conferences
The following academic conferences and workshops host discussions and presentations in biomedical text mining advances. Most publish proceedings.
Journals
A variety of academic journals publishing manuscripts on biology and medicine include topics in text mining and natural language processing software. Some journals, including the Journal of the American Medical Informatics Association (JAMIA) and the Journal of Biomedical Informatics are popular publications for these topics.
References
Further reading
Biomedical Literature Mining Publications (BLIMP) : A comprehensive and regularly updated index of publications on (bio)medical text mining
External links
Bio-NLP resources, systems and application database collection
The BioNLP mailing list archives
Corpora for biomedical text mining
The BioCreative evaluations of biomedical text mining technologies
Directory of people involved in BioNLP
Data mining
Bioinformatics
Text mining
Clinical data management | Biomedical text mining | [
"Engineering",
"Biology"
] | 2,784 | [
"Bioinformatics",
"Biological engineering"
] |
2,948,381 | https://en.wikipedia.org/wiki/BioCreative | BioCreAtIvE (A critical assessment of text mining methods in molecular biology) consists in a community-wide effort for evaluating information extraction and text mining developments in the biological domain.
It was preceded by the Knowledge Discovery and Data Mining (KDD) Challenge Cup for detection of gene mentions.
Community Challenges
First edition (2004-2005)
Three main tasks were posed at the first BioCreAtIvE challenge: the entity extraction task, the gene name normalization task, and the functional annotation of gene products task. The data sets produced by this contest serve as a Gold Standard training and test set to evaluate and train Bio-NER tools and annotation extraction tools.
Second edition (2006-2007)
The second BioCreAtIvE challenge (2006-2007) also had three tasks: detection of gene mentions, extraction of unique identifiers for genes, and extraction of information related to physical protein-protein interactions. It drew participation from 44 teams from 13 countries.
Third edition (2011-2012)
The third edition of BioCreative included for the first time the InterActive Task (IAT), designed to evaluate the practical usability of text mining tools in real-world biocuration tasks.
Fifth edition (2016)
BioCreative V had 5 different tracks, including an interactive task (IAT) for usability of text mining systems and a track using the BioC format for curating information for BioGRID.
See also
Biocuration
References
External links
BioCreAtIve, 2007-2015
BioCreAtIve 2, 2006-2007
First BioCreAtIvE workshop, 2004
BMC Bioinformatics special issue : BioCreAtIvE
First BioCreAtIvE data download request
Bioinformatics
Information science | BioCreative | [
"Chemistry",
"Engineering",
"Biology"
] | 343 | [
"Biological engineering",
"Bioinformatics stubs",
"Biotechnology stubs",
"Biochemistry stubs",
"Bioinformatics"
] |
2,948,442 | https://en.wikipedia.org/wiki/Spanish%20National%20Bioinformatics%20Institute | The Spanish National Bioinformatics Institute (INB-ISCIII; Spanish: Instituto Nacional de Bioinformática) is an academic service institution tasked with the coordination, integration and development of bioinformatics resources in Spain. Created in 2003, the INB is—since 2015—the main node through which the Carlos III Health Institute is connected to ELIXIR, a European-wide infrastructure of life science data, coordinating the other Spanish institutions partaking in the initiative such as the Spanish National Cancer Research Centre (CNIO), the Centre for Genomic Regulation (CRG), the Universitat Pompeu Fabra, the Institute for Research in Biomedicine (IRB) and the Barcelona's National Supercomputing Center.
It consists of 10 distributed nodes, coordinated by a central node, encompassing the scopes of genomics, proteomics, functional genomics, structural biology, population genomics and genome diversity, health informatics, algorithm development and high-performance computing.
It is the Spanish participant in the common data platform promoted by the European Union to ensure a rapid and coordinated response to the health crisis caused by COVID-19. Their MareNostrum supercomputer has been used for testing the potential efficacy of compounds against SARS-CoV-2.
Alfonso Valencia, former president of the International Society for Computational Biology, is the director.
References
External links
INB
Research institutes in Spain
COVID-19 pandemic in Spain
Bioinformatics organizations | Spanish National Bioinformatics Institute | [
"Biology"
] | 311 | [
"Bioinformatics",
"Bioinformatics organizations"
] |
2,948,513 | https://en.wikipedia.org/wiki/Sodium%20myreth%20sulfate | Sodium myreth sulfate is a mixture of organic compounds with both detergent and surfactant properties. It is found in many personal care products such as soaps, shampoos, and toothpaste. It is an inexpensive and effective foaming agent. Typical of many detergents, sodium myreth sulfate consists of several closely related compounds. Sometimes the number of ethylene glycol ether units (n) is specified in the name as myreth-n sulfate, for example myreth-2 sulfate.
Production
Sodium myreth sulfate is very similar to sodium laureth sulfate; the only difference is two more carbons in the fatty alcohol portion of the hydrophobic tail. It is manufactured by ethoxylation (hence the "eth" in "myreth") of myristyl alcohol. Subsequently, the terminal OH group is converted to the sulfate by treatment with chlorosulfuric acid.
Safety
Like other ethoxylates, sodium myreth sulfate may become contaminated with 1,4-dioxane during production, which is considered to be a Group 2B suspect carcinogen by the IARC.
See also
Ammonium lauryl sulfate
References
External links
Household product database at NIH web site.
Cosmetics chemicals
Ethers
Household chemicals
Organic sodium salts
Anionic surfactants
Sulfate esters | Sodium myreth sulfate | [
"Chemistry"
] | 278 | [
"Functional groups",
"Salts",
"Organic compounds",
"Organic sodium salts",
"Ethers"
] |
2,948,529 | https://en.wikipedia.org/wiki/Antisymmetry | In linguistics, antisymmetry is a syntactic theory presented in Richard S. Kayne's 1994 monograph The Antisymmetry of Syntax. It asserts that grammatical hierarchies in natural language follow a universal order, namely specifier-head-complement branching order. The theory builds on the foundation of the X-bar theory. Kayne hypothesizes that all phrases whose surface order is not specifier-head-complement have undergone syntactic movements that disrupt this underlying order. Others have posited specifier-complement-head as the basic word order.
Antisymmetry as a principle of word order is reliant on X-bar notions such as specifier and complement, and the existence of order-altering mechanisms such as movement. It is disputed by constituency structure theories (as opposed to dependency structure theories).
Asymmetric c-command
C-command is a relation between tree nodes, as defined by Tanya Reinhart. Kayne uses a simple definition of c-command based on the "first node up". However, the definition is complicated by his use of a "segment/category" distinction. Two directly connected nodes that have the same label are "segments" of a single "category". A category "excludes" all categories not "dominated" by all its segments. A "c-commands" B if every category that dominates A also dominates B, and A excludes B. The following tree illustrates these concepts:
AP1 and AP2 are both segments of a single category. AP does not c-command BP because it does not exclude BP. CP does not c-command BP because both segments of AP do not dominate BP (so it is not the case that every category that dominates CP dominates BP). BP c-commands CP and A. A c-commands C. The definitions above may perhaps be thought to allow BP to c-command AP, but a c-command relation is not usually assumed to hold between two such categories, and for the purposes of antisymmetry, the question of whether BP c-commands AP is in fact moot.
(The above is not an exhaustive list of c-command relations in the tree, but covers all of those that are significant in the following exposition.)
Asymmetric c-command is the relation that holds between two categories, A and B, if A c-commands B but B does not c-command A.
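As a rough computational restatement of these definitions, the following sketch checks c-command and asymmetric c-command over a small invented binary tree, using the simple "first node up" definition and ignoring the segment/category refinement discussed above.

```python
# Each node maps to its parent; None marks the root. The tree is an invented
# toy example (XP dominating A and BP; BP dominating B and CP; CP dominating C),
# and the simple "first node up" definition of c-command is used here.
PARENT = {"XP": None, "A": "XP", "BP": "XP", "B": "BP", "CP": "BP", "C": "CP"}

def dominators(node):
    """All ancestors of a node."""
    ancestors = set()
    while PARENT[node] is not None:
        node = PARENT[node]
        ancestors.add(node)
    return ancestors

def c_commands(a, b):
    """A c-commands B: neither dominates the other, and the first node up from A dominates B."""
    if a == b or a in dominators(b) or b in dominators(a):
        return False
    return PARENT[a] in dominators(b)

def asymmetrically_c_commands(a, b):
    return c_commands(a, b) and not c_commands(b, a)

print(asymmetrically_c_commands("A", "C"))   # True: A c-commands C but not vice versa
print(asymmetrically_c_commands("B", "CP"))  # False: B and CP c-command each other
```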
Precedence and asymmetric c-command
Informally, Kayne's theory states that if a nonterminal category A asymmetrically c-commands another nonterminal category B, all the terminal nodes dominated by A must precede all of the terminal nodes dominated by B (this statement is commonly referred to as the "Linear Correspondence Axiom" or LCA). Moreover, this principle must suffice to establish a complete and consistent ordering of all terminal nodes — if it cannot consistently order all of the terminal nodes in a tree, the tree is illicit. Consider the following tree:
(S and S' may either be simplex structures like BP, or complex structures with specifiers and complements like CP.)
In this tree, the set of pairs of nonterminal categories such that the first member of the pair asymmetrically c-commands the second member gives rise to a total ordering of the terminal nodes.
As a result, there is no right adjunction, and hence in practice no rightward movement either. Furthermore, the underlying order must be specifier-head-complement.
Derivation of X-bar theory
The example tree in the first section of this article is in accordance with X-bar theory (with the exception that [Spec,CP] (i.e., the specifier of the CP phrase) is treated as an adjunct). It can be seen that removing any of the structures in the tree (e.g., deleting the C dominating the 'c' terminal, so that the complement of A is [CP c]) will destroy the asymmetric c-command relations necessary for linearly ordering the terminals of the tree.
The universal order
Kayne notes that his theory permits either a universal specifier-head-complement order or a universal complement-head-specifier order, depending on whether asymmetric c-command establishes precedence or subsequence (S-H-C results from precedence). He prefers S-H-C as the universal underlying order since the most widely attested order in linguistic typology is for specifiers to precede heads and complements (though the order of heads and complements themselves is relatively free). He further argues that a movement approach to deriving non-S-H-C orders is appropriate since it derives asymmetries in typology (such as the fact that "verb-second" languages such as German are not mirrored by any known "verb second-from-last" languages).
Derived orders: the case of Japanese wh-questions
Perhaps the biggest challenge for antisymmetry is to explain the wide variety of different surface orders across languages. Any deviation from Spec-Head-Comp order (which implies overall Subject-Verb-Object order, if objects are complements) must be explained by movement. Kayne argues that in some cases the need for extra movements (previously unnecessary because different underlying orders were assumed for different languages) can explain some otherwise mysterious typological generalizations. His explanation for the lack of wh-movement in Japanese is the most striking example of this. From the mid-1980s onwards, the standard analysis of wh-movement involved the wh-phrase moving leftward to a position on the left edge of the clause called [Spec,CP]. Thus, a derivation of the English question What did John buy? would proceed roughly as follows:
[CP {Spec,CP position} John did buy what]
wh-movement →
[CP What did John buy]
The Japanese equivalent of this sentence is as follows (note the lack of wh-movement):
John-wa nani-o kaimashita ka ("John bought what?")
Japanese has an overt "question particle" (ka), which appears at the end of the sentence in questions. It is generally assumed that languages such as English have a "covert" (i.e. phonologically null) equivalent of this particle in the 'C' position of the clause — the position just to the right of [Spec,CP]. This particle is overtly realised in English by the movement of an auxiliary to C (in the case of the example above, by the movement of did to C). Why is it that this particle is on the left edge of the clause in English, but on the right edge in Japanese? Kayne suggests that in Japanese, the whole of the clause (apart from the question particle in C) has moved to the [Spec,CP] position. So, the structure for the Japanese example above is something like the following:
[CP [John-wa nani-o kaimashita] C ka]
Now it is clear why Japanese does not have wh-movement — the [Spec,CP] position is already filled, so no wh-phrase can move to it. The relationship between surface word order and the possibility of wh-movement is seemingly obscure. A possible alternative to the antisymmetric explanation could be based on the difficulty of parsing languages with rightward movement.
Dynamic antisymmetry
Andrea Moro proposed Dynamic antisymmetry, a weak version of antisymmetry, which allows the generation of non-LCA compatible structures (points of symmetry) before the hierarchical structure is linearized at Phonetic Form (PF). The unwanted structures are then rescued by movement: deleting the phonetic content of the moved element neutralizes the linearization problem. Dynamic Antisymmetry aims at unifying a movement and phrase structure, which otherwise are independent properties.
Antisymmetry and ternary branching
Kayne proposed recasting the antisymmetry of natural language as a condition of "Merge", the operation which combines two elements into one. Kayne proposes that merging a head H and its complement C yields an ordered pair ⟨H, C⟩ (rather than the standard symmetric set {H, C}). ⟨H, C⟩ involves immediate temporal precedence (or immediate linear precedence) so that H immediately precedes (i-precedes) C. Kayne proposes furthermore that when a specifier S merges, it forms an ordered pair with the head directly, ⟨S, H⟩, or S i-precedes H. Invoking i-precedence prevents more than two elements from merging with H; only one element can i-precede H (the specifier), and H can i-precede only one element (the complement).
Kayne notes that the pair of ordered pairs ⟨S, H⟩ and ⟨H, C⟩ is not mappable to a tree structure, since H would have two mothers, and that it has the consequence that ⟨S, H⟩ and ⟨H, C⟩ would seem to be constituents. He suggests that this pair of pairs is replaced by the ordered triple ⟨S, H, C⟩, "with an ordered triple replacing the two ordered pairs and then being mappable to a ternary-branching tree" (p. 17). Kayne goes on to say, "This would lead to seeing my [(1981)] arguments for binary branching to have two subcomponents, the first being the claim that syntax is n-ary branching with n having a single value, the second being that that value is 2. Mapping [⟨S, H⟩ and ⟨H, C⟩ to ⟨S, H, C⟩] would retain the first subcomponent and replace 2 by 3 in the second, arguably with no loss in restrictiveness".
Theoretical arguments
Antisymmetry theory rejects the head-directionality parameter as such: it claims that at an underlying level, all languages are head-initial. In fact, it argued that all languages have the underlying order Specifier-Head-Complement. Deviations from this order are accounted for by different syntactic movements applied by languages.
Kayne argues that a theory that allows both directionalities would state that languages are symmetrical, whereas in fact languages are found to be asymmetrical in many respects. Examples of linguistic asymmetries which may be cited in support of the theory (although they do not concern head direction) are:
Hanging topics appear at the start of sentences, as in "Henry – I've known that guy for a long time". They are not found at the end of sentences.
Number agreement is stronger when the noun phrase precedes the verb (Greenberg's Universal 33). Examples of this are found in English sentences such as There's books on the table, where the verb frequently fails to agree with the following plural noun, and in French and Italian compound tenses, where the past participle may agree with a preceding direct object but not with the following one.
Relative clauses that precede the noun (as in Chinese and Japanese) tend to differ from those that follow the noun: they more often lack complementizers (akin to English that) or relative pronouns and are more likely to be non-finite (this can be found, for example, in Quechuan languages.)
Other areas in which asymmetries are found, according to Kayne, include clitics and clitic dislocation, serial verb constructions, coordination, and forward and backward pronominalization.
In arguing for a universal underlying Head-Complement order, Kayne uses the concept of a probe-goal search (based on the Minimalist program). The idea of probes and goals in syntax is that a head acts as a probe and looks for a goal, namely its complement. Kayne proposes that the direction of the probe-goal search must share the direction of language parsing and production. Parsing and production proceed in a left-to-right direction: the beginning of the sentence is heard or spoken first, and the end of the sentence is heard or spoken last. This implies (according to the theory) an ordering whereby probe comes before the goal, i.e. head precedes complement.
Kayne's theory also addresses the position of the specifier of a phrase. He represents the relevant scheme as follows:
S H [c...S...]
The specifier, at first internal to the complement, is moved to the unoccupied position to the left of the head. In terms of merged pairs, this structure can also be represented as:
This process can be mapped onto X-bar syntactic trees as shown in the adjacent diagram.
Antisymmetry then leads to a universal Specifier-Head-Complement order. The varied ordering found in human languages are explained by syntactic movement away from this underlying base order. It has been pointed out, though, that in predominantly head-final languages such as Japanese and Basque, this would involve complex and massive leftward movement, which violates the ideal of grammatical simplicity. An example of the type of movement scheme that would need to be envisaged is provided by Tokizaki:
[CP C [IP ... [VP V [PP P [NP N [Genitive Affix Stem]]]]]]
[CP C [IP ... [VP V [PP P [NP N [Genitive Stem Affix]]]]]]
[CP C [IP ... [VP V [PP P [NP [Genitive Stem Affix] N]]]]]
[CP C [IP ... [VP V [PP [NP [Genitive Stem Affix] N] P]]]]
[CP C [IP ... [VP [PP [NP [Genitive Stem Affix] N] P] V]]]
[CP [IP ... [VP [PP [NP [Genitive Stem Affix] N] P] V]] C]
Here, at each phrasal level in turn, the head of the phrase moves from left to right position relative to its complement. The eventual result reflects the ordering of complex nested phrases found in languages such as Japanese.
An attempt to provide evidence for Kayne's scheme is made by Lin, who considered Standard Chinese sentences with the sentence-final particle le. This particle is taken to convey perfect aspectual meaning, and thus to be the head of an aspect phrase having the verb phrase as its complement. If phrases are always essentially head-initial, then a case like this must entail movement, since the particle comes after the verb phrase. It is proposed that in such cases the complement moves into specifier position, which precedes the head.
As evidence for this, Lin considers wh-adverbials such as zenmeyang ("how?"). Based on prior work by James Huang, it is postulated that (a) adverbials of this type are subject to movement at logical form (LF) level (even though, in Chinese, they do not display wh-movement at surface level); and (b) movement is not possible from within a non-complement (Huang's Condition on Extraction Domain or CED). This would imply that zenmeyang could not appear in a verb phrase with sentence-final le, assuming the above analysis, since that verb phrase has moved into a non-complement (specifier) position, and thus further movement (such as that which zenmeyang is required to undergo at LF level) is not possible. Such a restriction on the occurrence of zenmeyang is indeed found:
Sentence (b), in which zenmeyang co-occurs with sentence-final le, is ungrammatical. Lin cites this and other related findings as evidence that the above analysis is correct, supporting the view that Chinese aspect phrases are deeply head-initial.
Surface true approach
According to the "surface true" viewpoint, analysis of head direction must take place at the level of surface derivations, or even the Phonetic Form, i.e. the order in which sentences are pronounced in natural speech. This rejects the idea of an underlying ordering that is then subject to movement. In a 2008 article, Marc Richards argued that a head parameter must only reside at PF, as it is unmaintainable in its original form as a structural parameter. In this approach the relative positions of head and complement that are found at this surface level, which show variation both between and within languages (see above), must be treated as the "true" orderings.
Existence of true head-final languages
Takita argues against the conclusion of Kayne's Antisymmetry Theory, which states that all languages are head-initial at an underlying level. He claims that a language such as Japanese is truly head-final since the mass movement required to take an underlying head-initial structure to the head-final ones actually found in such languages would violate other constraints. It is implied that such languages are likely following a head-final parameter value, as originally conceived. (For a head-initial/Antisymmetry analysis of Japanese, see Kayne.)
Takita's argument is based on Lin's analysis of Chinese. Since surface head-final structures are derived from underlying head-initial structures by moving the complements, further extraction from within the moved complement violates CED.
One of the examples of movement that Takita looks at is that of VP-fronting in Japanese. Grammatically, the sentence without VP-fronting, (a), and the sentence where the VP moves to the matrix clause, (b), do not significantly differ.
In (b), the fronted VP precedes the matrix subject, confirming that the VP is located in the matrix clause. If Japanese were head-initial, (b) should not be grammatical because it allows for the extraction of an element (VP2) from the moved complement (CP2).
Thus Takita shows that surface head-final structures in Japanese do not block movement, as they do in Chinese. He concludes that, because it does not block movement as shown in previous sections, Japanese is a genuinely head-final language, and not derived from an underlying, head-initial structure. These results imply that Universal Grammar involves binary head-directionality, and is not antisymmetric. Takita briefly applies the same tests to Turkish, another seemingly head-final language, and reports similar results.
References
Sources
Generative syntax
Grammar frameworks
Syntactic relationships
Syntax
Asymmetry | Antisymmetry | [
"Physics"
] | 3,779 | [
"Symmetry",
"Asymmetry"
] |
2,948,561 | https://en.wikipedia.org/wiki/Precordium | In anatomy, the precordium or praecordium is the portion of the body over the heart and lower chest.
Defined anatomically, it is the area of the anterior chest wall over the heart. It is therefore usually on the left side, except in conditions like dextrocardia, where the individual's heart is on the right side. In such a case, the precordium is on the right side as well.
The precordium is naturally a cardiac area of dullness. During examination of the chest, the percussion note will therefore be dull. In fact, this area only gives a resonant percussion note in hyperinflation, emphysema or tension pneumothorax.
Precordial chest pain can be an indication of a variety of illnesses, including costochondritis and viral pericarditis.
See also
Precordial thump
Precordial examination
Commotio cordis
Hyperdynamic precordium
Precordial catch syndrome
References
Anatomy | Precordium | [
"Biology"
] | 207 | [
"Anatomy"
] |
2,948,657 | https://en.wikipedia.org/wiki/Physical%20computing | Physical computing involves interactive systems that can sense and respond to the world around them. While this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes, it is not commonly used to describe them. In a broader sense, physical computing is a creative framework for understanding human beings' relationship to the digital world. In practical use, the term most often describes handmade art, design or DIY hobby projects that use sensors and microcontrollers to translate analog input to a software system, and/or control electro-mechanical devices such as motors, servos, lighting or other hardware.
Physical computing intersects the range of activities often referred to in academia and industry as electrical engineering, mechatronics, robotics, computer science, and especially embedded development.
Examples
Physical computing is used in a wide variety of domains and applications.
Education
The advantage of physicality in education and playfulness has been reflected in diverse informal learning environments. The Exploratorium, a pioneer in inquiry based learning, developed some of the earliest interactive exhibitry involving computers, and continues to include more and more examples of physical computing and tangible interfaces as associated technologies progress.
Art
In the art world, projects that implement physical computing include the work of Scott Snibbe, Daniel Rozin, Rafael Lozano-Hemmer, Jonah Brucker-Cohen, and Camille Utterback.
Product design
Physical computing practices also exist in the product and interaction design sphere, where hand-built embedded systems are sometimes used to rapidly prototype new digital product concepts in a cost-efficient way. Firms such as IDEO and Teague are known to approach product design in this way.
Commercial applications
Commercial implementations range from consumer devices such as the Sony EyeToy or games such as Dance Dance Revolution to more esoteric and pragmatic uses including machine vision utilized in the automation of quality inspection along a factory assembly line. Exergaming, such as Nintendo's Wii Fit, can be considered a form of physical computing. Other implementations of physical computing include voice recognition, which senses and interprets sound waves via microphones or other soundwave sensing devices, and computer vision, which applies algorithms to a rich stream of video data typically sensed by some form of camera. Haptic interfaces are also an example of physical computing, though in this case the computer is generating the physical stimulus as opposed to sensing it. Both motion capture and gesture recognition are fields that rely on computer vision.
Scientific applications
Physical computing can also describe the fabrication and use of custom sensors or collectors for scientific experiments, though the term is rarely used to describe them as such. An example of physical computing modeling is the Illustris project, which attempts to precisely simulate the evolution of the universe from the Big Bang to the present day, 13.8 billion years later.
Methods
Prototyping plays an important role in physical computing. Tools such as Wiring, Arduino, and Fritzing, as well as I-CubeX, help designers and artists quickly prototype their interactive concepts.
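To make the sense–translate–respond pattern behind these prototyping tools concrete, here is a minimal, hedged Python sketch. The `read_adc` and `set_led_brightness` functions are hypothetical stand-ins for whatever ADC and PWM interfaces a given platform (an Arduino behind a serial bridge, Raspberry Pi GPIO, I-CubeX, etc.) actually exposes; only the mapping logic in between is the point.

```python
import time
import random

def read_adc(channel: int) -> int:
    """Hypothetical stand-in for an analog-to-digital read (0-1023).

    On real hardware this would call into the platform's sensor driver;
    here it returns a random value so the sketch runs anywhere.
    """
    return random.randint(0, 1023)

def set_led_brightness(pin: int, duty: float) -> None:
    """Hypothetical stand-in for a PWM output (duty cycle 0.0-1.0)."""
    print(f"pin {pin}: duty cycle {duty:.2f}")

def scale(value, in_low, in_high, out_low, out_high):
    """Linearly map a sensor reading onto an actuator range."""
    span_in = in_high - in_low
    span_out = out_high - out_low
    return out_low + (value - in_low) * span_out / span_in

if __name__ == "__main__":
    # The core physical-computing loop: sense, translate, respond.
    for _ in range(5):
        light = read_adc(channel=0)                              # analog input
        set_led_brightness(9, scale(light, 0, 1023, 0.0, 1.0))   # actuator output
        time.sleep(0.1)
```

On real hardware the stubbed functions would be replaced by the platform's own driver calls; the structure of the loop stays the same.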
Further reading
References
External links
Arduino, a highly popular open source physical computing platform
Raspberry Pi, complete computer with GPIO's to interact with the world, huge community, many tutorials available. Many Linux distros available as well as Windows IoT and OS-less unikernel RTL's such as Ultibo Core.
BeagleBone, a complete Linux computer with GPIO's, but a little less flexible
FoxBoard (and others), yet another Linux computer with GPIO, but with little information
Arieh Robotics Project Junior. A Windows 7-based physical computing PC built using Microsoft Robotics Developer Studio.
BluePD BlueSense, a physical computing platform by Blue Melon. This platform is visually programmable using the popular (open source) Pure Data system.
Daniel Rozin Artist Page, bitforms gallery, features images and video of Daniel Rozin's interactive installations and sculptures.
Dwengo, a PIC microcontroller based computing platform that comes with a Breadboard for easy prototyping.
EmbeddedLab, A research lab situated within the Department of Computer Aided Architecture Design at ETH Zürich.
Fritzing - from prototype to product: software that supports designers and artists in taking the step from physical prototyping to an actual product.
GP3, another popular choice that allows building physical systems with PCs and traditional languages (C, Basic, Java, etc.) or standalone using a point and click development tool.
Physical Computing, Interactive Telecommunications Program, New York University
Physical Computing by Dan O'Sullivan
Physical Computing, Tom Igoe's collection of resources, examples, and lecture notes for the physical computing courses at ITP.
Physical Computing, A path into electronics using an approach of “learning by making”, introducing electronic prototyping in a playful, non-technical way. (Yaniv Steiner, IDII)
Theremino, an open source modular system for interfacing transducers (sensors and actuators) via USB to PC, notebooks, netbooks, tablets and cellphones.
Applications of computer vision
User interfaces
Design
Digital art
Virtual reality
Computer systems | Physical computing | [
"Technology",
"Engineering"
] | 1,045 | [
"User interfaces",
"Computer engineering",
"Robotics engineering",
"Computer systems",
"Computer science",
"Interfaces",
"Design",
"Computers",
"Physical computing"
] |
2,948,757 | https://en.wikipedia.org/wiki/TUNEL%20assay | Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) is a method for detecting DNA fragmentation by labeling the 3′-hydroxyl termini in the double-strand DNA breaks generated during apoptosis.
Method
TUNEL is a method for detecting apoptotic DNA fragmentation, widely used to identify and quantify apoptotic cells, or to detect excessive DNA breakage in individual cells. The assay relies on the use of terminal deoxynucleotidyl transferase (TdT), an enzyme that catalyzes attachment of deoxynucleotides, tagged with a fluorochrome or another marker, to 3'-hydroxyl termini of DNA double strand breaks. It may also label cells having DNA damage by other means than in the course of apoptosis.
History
The fluorochrome-based TUNEL assay applicable to flow cytometry, which combines the detection of DNA strand breaks with the cell's position in the cell cycle, was originally developed by Gorczyca et al. Concurrently, the avidin-peroxidase labeling assay applicable to light (absorption) microscopy was described by Gavrieli et al. Since 1992, TUNEL has become one of the main methods for detecting apoptotic programmed cell death. However, for years there was debate about its accuracy, due to problems in the original assay that caused necrotic cells to be inappropriately labeled as apoptotic. The method has subsequently been improved substantially and, if performed correctly, should only identify cells in the last phase of apoptosis. Newer methods incorporate dUTPs modified with fluorophores or haptens, including biotin or bromine, which can be detected directly in the case of a fluorescently modified nucleotide (i.e., fluorescein-dUTP), or indirectly with streptavidin or antibodies if biotin-dUTP or BrdUTP is used, respectively. The most sensitive of these is the method utilizing incorporation of BrdUTP by TdT followed by immunocytochemical detection of BrdU.
References
External links
Biochemistry detection reactions
Laboratory techniques
Programmed cell death | TUNEL assay | [
"Chemistry",
"Biology"
] | 461 | [
"Biochemistry detection reactions",
"Signal transduction",
"Biochemical reactions",
"Senescence",
"Microbiology techniques",
"nan",
"Programmed cell death"
] |
2,949,341 | https://en.wikipedia.org/wiki/Nemiscau%20Airport | Nemiscau Airport is located southeast of Nemaska, Quebec, Canada, along Route du Nord at km 294. It was built and is operated by Hydro-Québec to serve their large electrical substations of Nemiscau and Albanel. Air Creebec has scheduled flights to and from this airport at the discretion of Hydro-Québec.
The airport has one of the better gravel runways in the region. Runway lights are controlled by a ground radio operator, so the operator must be present for night operations (operators typically go home while it is still daylight). Permission to land private aircraft must be obtained in advance by telephone.
Airlines and destinations
References
External links
Transport Canada - Canadian Aerodromes
James Bay Project
Eeyou Istchee (territory)
Registered aerodromes in Nord-du-Québec | Nemiscau Airport | [
"Engineering"
] | 161 | [
"James Bay Project",
"Macro-engineering"
] |
2,949,396 | https://en.wikipedia.org/wiki/Woodpecker%20finch | The woodpecker finch (Camarhynchus pallidus) is a monomorphic species of bird in the Darwin's finch group of the tanager family Thraupidae, endemic to the Galapagos Islands. The diet of a woodpecker finch revolves mostly around invertebrates, but also encompasses a variety of seeds. Woodpecker finches, like many other species of birds, form breeding pairs and care for young until they have fledged. The most distinctive characteristic of woodpecker finches is their ability to use tools for foraging. This behaviour indicates that they have highly specialized cognitive abilities. Woodpecker finches have also shown the ability to learn new behaviours regarding tool use via social learning. Not all populations of woodpecker finches use tools equally often, as this is influenced by the environment in which they live.
Description
Woodpecker finches range in weight from 23g to 29g and are about 15 cm long. Although their tongues are quite short, they have a relatively long bill compared to other species of Darwin's finches.
Distribution
Woodpecker finches are native to the Galapagos Islands. They are commonly found on the islands of Isabela, Santa Cruz, San Cristobal, Fernandina, Santiago, and Pinzón. They occupy all areas of the islands, from the most arid zones to more humid zones. However, the density of woodpecker finches is greater in the more humid zones than in the drier ones. Woodpecker finches are also found at a variety of altitudes, from sea level to higher inland elevations. They are not a migratory species and when they do fly, they only fly short distances.
Diet
Woodpecker finch diets mainly consist of arthropods found under dead logs and rocks. They eat only larvae, which are often located inside dead logs. Their habit of pecking at fallen logs is similar to a woodpecker's drumming on a tree trunk. Wood-boring beetle larvae are a staple of their diet. They also often feed on moths, caterpillars, and crickets. Another significant part of their diet is meat from the small animals they kill, making woodpecker finches important hunters.
Foraging behaviour
One of the most distinguishable traits of Camarhynchus pallidus is its ability to use a twig, stick, or cactus spine as a tool. This behaviour earned it the nicknames tool-using finch and carpenter finch. The finch manipulates the tool to dislodge invertebrate prey, such as grubs, from crevices in trees. It has been hypothesized that due to the absence of woodpeckers, woodpecker finches filled a similar niche on the Galapagos Islands. Woodpeckers have strong bills for drilling and drumming on trees, as well as long sticky tongues for extracting food. On the isolated Galapagos islands, without competition from South American woodpecker species, the woodpecker finch was able to adapt, and evolve its tool-utilizing capability to compensate for its short tongue. The ability to use tools is a highly specialized cognitive ability as it involves the animal creating and recognizing a relationship between two foreign objects found in its environment.
Woodpecker finches are capable of using a variety of materials to construct the tools they use. They are capable of modifying the tools they find in order to maximize their efficiency. Scientists have observed finches shortening the length of sticks or cactus spines in order to make them more manageable for tool use. The same tool can be used multiple times and on different trees. Woodpecker finches may also try various sticks or spines at one site before finding one that can reach and extract the prey item. There is conflicting evidence of whether or not this behaviour was acquired through social learning, as juveniles have been observed using tools without previous contact with adults. In contrast, juvenile woodpecker finches have also been observed utilizing novel tools made from non-native plant species, such as blackberry bushes. After observing adult woodpecker finches prep barbed twigs and use them to obtain prey from crevices in trees, juvenile finches displayed the same behaviour with the novel tool. These observations contrasted previous studies to show that social learning may occur in wild woodpecker finch populations.
The frequency of tool use by woodpecker finches depends largely on whether they live in a more wet or dry environment. Woodpecker finches that live in more wet environments seldom use tools as prey is much more abundant. In contrast, they employ tool use much more when living in dry areas. During the dry season, woodpecker finches use tools while foraging to acquire up to 50% of their prey. The use of tools has allowed woodpecker finches to be able to obtain prey that they would otherwise be unable to reach with their short tongues. It is thought that this behaviour came to evolve due to the harshness of the dry and unstable environmental conditions of the Galapagos Islands.
Reproduction
There are no morphological differences between the sexes in woodpecker finches, as they are monomorphic. Woodpecker finches mainly use moss, lichens, and grass as building materials for their nests. During the two-week incubation period, when females are sitting on the eggs, males linger nearby, often feeding the females. Female woodpecker finches typically lay around 2–3 eggs. Both males and females participate in the feeding of the chicks from the day they hatch until well after they have become independent. Woodpecker finch chicks fledge around two weeks after hatching.
References
External links
woodpecker finch
Endemic birds of the Galápagos Islands
Tool-using animals
woodpecker finch
woodpecker finch
woodpecker finch
Taxobox binomials not recognized by IUCN | Woodpecker finch | [
"Biology"
] | 1,189 | [
"Ethology",
"Behavior",
"Tool-using animals"
] |
2,949,404 | https://en.wikipedia.org/wiki/Cocamidopropyl%20betaine | Cocamidopropyl betaine (CAPB) is a mixture of closely related organic compounds derived from coconut oil and dimethylaminopropylamine. CAPB is available as a viscous pale yellow solution and it is used as a surfactant in personal care products and animal husbandry. The name reflects that the major part of the molecule, the lauric acid group, is derived from coconut oil. Cocamidopropyl betaine to a significant degree has replaced cocamide DEA.
Production
Despite the name cocamidopropyl betaine, the molecule is not synthesized from betaine. Instead it is produced in a two-step manner, beginning with the reaction of dimethylaminopropylamine (DMAPA) with fatty acids from coconut or palm kernel oil (lauric acid, or its methyl ester, is the main constituent). The primary amine in DMAPA is more reactive than the tertiary amine, leading to its selective addition to form an amide. In the second step chloroacetic acid reacts with the remaining tertiary amine to form a quaternary ammonium center (a quaternization reaction).
CH3(CH2)10COOH + H2NCH2CH2CH2N(CH3)2 → CH3(CH2)10CONHCH2CH2CH2N(CH3)2
CH3(CH2)10CONHCH2CH2CH2N(CH3)2 + ClCH2CO2H + NaOH → CH3(CH2)10CONHCH2CH2CH2N+(CH3)2CH2CO2− + NaCl + H2O
Chemistry
CAPB is a fatty acid amide that contains a long hydrocarbon chain at one end and a polar group at the other. This allows CAPB to act as a surfactant and as a detergent. It is a zwitterion, consisting of both a quaternary ammonium cation and a carboxylate.
Specifications and properties
Cocamidopropyl betaine is used as a foam booster in shampoos. It is a medium-strength surfactant also used in bath products like hand soaps. It is also used in cosmetics as an emulsifying agent and thickener, and to reduce the irritation that purely ionic surfactants would cause. It also serves as an antistatic agent in hair conditioners, which most often does not irritate skin or mucous membranes. However, some studies indicate it is an allergen.
CAPB is also used as a co-surfactant with sodium dodecyl sulfate to promote the formation of gas hydrates. As an additive, CAPB helps to scale up the gas hydrate formation process.
CAPB is obtained as an aqueous solution in concentrations of about 30%.
Typical impurities of leading manufacturers today:
Sodium monochloroacetate < 5 ppm
Amidoamine (AA) < 0.3%
Dimethylaminopropylamine (DMAPA) < 15 ppm
Glycerol < 3%
The impurities AA and DMAPA are the most critical, as they have been shown to be responsible for skin sensitization reactions. These by-products can be avoided by a moderate excess of chloroacetate and exact adjustment of the pH value during the betainization reaction, accompanied by regular analytical control.
Safety
CAPB has been claimed to cause allergic reactions in some users, but a controlled pilot study has found that these cases may represent irritant reactions rather than true allergic reactions. Furthermore, results of human studies have shown that CAPB has a low sensitizing potential if impurities with amidoamine (AA) and dimethylaminopropylamine (DMAPA) are low and tightly controlled. Other studies have concluded that most apparent allergic reactions to CAPB are more likely due to amidoamine. Cocamidopropyl betaine was voted 2004 Allergen of the Year by the American Contact Dermatitis Society.
See also
Cocamidopropyl hydroxysultaine
References
Zwitterionic surfactants
Antiseptics
Cosmetics chemicals
Antistatic agents
Quaternary ammonium compounds
Fatty acid amides | Cocamidopropyl betaine | [
"Chemistry"
] | 888 | [
"Process chemicals",
"Antistatic agents"
] |
2,949,409 | https://en.wikipedia.org/wiki/Mineral-insulated%20copper-clad%20cable | Mineral-insulated copper-clad cable is a variety of electrical cable made from copper conductors inside a copper sheath, insulated by inorganic magnesium oxide powder. The name is often abbreviated to MICC or MI cable, and colloquially known as pyro (because the original manufacturer and vendor for this product in the UK was a company called Pyrotenax). A similar product sheathed with metals other than copper is called mineral-insulated metal-sheathed (MIMS) cable.
Construction
MI cable is made by placing copper rods inside a circular copper tube and filling the spaces with dry magnesium oxide powder. The overall assembly is then pressed between rollers to reduce its diameter (and increase its length). Up to seven conductors are often found in an MI cable, with up to 19 available from some manufacturers.
Since MI cables use no organic material as insulation (except at the ends), they are more resistant to fires than plastic-insulated cables. MI cables are used in critical fire protection applications such as alarm circuits, fire pumps, and smoke control systems. In process industries handling flammable fluids MI cable is used where small fires would otherwise cause damage to control or power cables. MI cable is also highly resistant to ionizing radiation and so finds applications in instrumentation for nuclear reactors and nuclear physics apparatus.
MI cables may be covered with a plastic sheath, coloured for identification purposes. The plastic sheath also provides additional corrosion protection for the copper sheath.
The metal tube shields the conductors from electromagnetic interference. The metal sheath also physically protects the conductors, most importantly from accidental contact with other energised conductors.
History
The first patent for MI cable was issued to the Swiss inventor Arnold Francois Borel in 1896. Initially the insulating mineral was described in the patent application as pulverised glass, silicious stones, or asbestos, in powdered form. Much subsequent development was carried out by the French company Société Alsacienne de Construction Mécanique. Commercial production began in 1932 and much mineral-insulated cable was used on ships such as the Normandie and oil tankers, and in such critical applications as the Louvre museum. In 1937 the British company Pyrotenax, having purchased patent rights to the product from the French company, began production. During the Second World War much of the company's product was used in military equipment. The company floated on the stock exchange in 1954.
Around 1947, the British Cable Makers' Association investigated the option of manufacturing a mineral-insulated cable that would compete with the Pyrotenax product. The manufacturers of the products "Bicalmin" and "Glomin" eventually merged with the Pyrotenax company.
The Pyrotenax company introduced an aluminum sheathed version of its product in 1964. MI cable is now manufactured in several countries. Pyrotenax is now a brand name under nVent (formerly known as Pentair Thermal Management).
Purpose and use
MI cables are used for power and control circuits of critical equipment, such as the following examples:
Nuclear reactors
Exposure to dangerous gasses
Air pressurisation systems for stairwells to enable building egress during a fire
Hospital operating rooms
Fire alarm systems
Emergency power systems
Emergency lighting systems
Temperature measurement devices; RTDs and thermocouples.
Critical process valves in the petrochemical industry
Public buildings such as theatres, cinemas, hotels
Transport hubs (railway stations, airports etc.)
Mains supply cables within residential apartment blocks
Tunnels and mines
Electrical equipment in hazardous areas where flammable gases may be present e.g. oil refineries, petrol stations
Areas where corrosive chemicals may be present e.g. factories
Building plant rooms
Hot areas e.g. power stations, foundries, and close to or even inside industrial furnaces, kilns and ovens
MI cable fulfills the passive fire protection function called circuit integrity, which is intended to maintain the operability of critical electrical circuits during a fire. It is subject to strict listing and approval use and compliance requirements.
Heating cable
A similar-appearing product is mineral-insulated trace heating cable, in which the conductors are made of a high-resistance alloy. A heating cable is used to protect pipes from freezing or to maintain the temperature of process piping and vessels. An MI resistance heating cable may not be repairable if damaged. Most electric stove and oven heating elements are constructed in a similar manner.
Typical specifications
Properties and comparison with other wiring systems
The construction of MI cable makes it mechanically robust and resistant to impact. Copper sheathing is waterproof and resistant to ultraviolet light and many corrosive elements. MI cable is approved for use in areas with hazardous concentrations of flammable substances, being unlikely to initiate an explosion even during circuit fault conditions. MI cable is smokeless, non-toxic, and will not support combustion. The cable meets and exceeds BS 5839-1, remaining fire-rated above 950 °C for over three hours under simultaneous mechanical stress and water spray without failure.
MI cable is primarily used for high-temperature environments or safety-critical signal and power systems; however, it can additionally be used within a tenanted area, carrying electricity supplied and billed to the landlord. For example, for a communal extract system or antenna booster, it provides a supply cable that cannot easily be 'tapped' into to obtain free energy.
The finished cable assembly can be bent to follow the shapes of buildings or bent around obstacles, allowing for a neat appearance when exposed.
Since the inorganic insulation does not degrade with (moderate) heating, the finished cable assembly can be allowed to rise to higher temperatures than plastic-insulated cables; the limits to temperature rise may be only due to possible contact of the sheath with people or structures or the physical melting point of copper. This may also allow a smaller cross-section cable to be used in particular applications.
An additional advantage of MI cable is the ability to use the copper sheath as a neutral or earth conductor in particular situations.
Due to oxidation, the copper cladding darkens with age. However, where MICC cables with a bare copper sheath are installed in damp locations, particularly where lime mortar has been used, the water and lime combine to create an electrolytic action with the bare copper. Similarly, electrolytic action may also be caused by installing bare-sheath MICC cables on new oak. The reaction causes the copper to be eaten away, making a hole in the sheath of the cable and letting in water, causing a breakdown of the insulation and short circuits. The copper sheath material is typically resistant to most chemicals but can be severely damaged by ammonia-bearing compounds and urine. A pinhole in the copper sheathing will allow moisture into the insulation and cause eventual failure of the circuit. A PVC over-jacket or sheaths of other metals may be required where such chemical damage is expected. When MI cable is embedded in concrete, as in floor heating cable, it is susceptible to physical damage by concrete workers working the concrete into the pour. If the coating is damaged, pinholes in the copper jacket may develop, causing premature failure of the system.
While the length of the MI cable is very tough, at some point, each run of cabling terminates at a splice or within electrical equipment. These terminations are vulnerable to fire, moisture, and mechanical impact. MICC is not suitable for use where it will be subject to vibration or flexing, as in connections to heavy or movable machinery. Vibration can cause cracking in the cladding and cores, leading to failure.
During installation MI cable must not be bent repeatedly, as this will cause work hardening and cracks in the cladding and cores. A minimum bend radius must be observed, and the cable must be supported at regular intervals. The magnesium oxide insulation is hygroscopic, so MICC cable must be protected from moisture until it has been terminated. Termination requires stripping back the copper cladding and attaching a compression gland fitting. Individual conductors are insulated with plastic sleeves. A sealing tape, insulating putty, or an epoxy resin is then poured into the compression gland fitting to provide a watertight seal. If a termination is faulty due to workmanship or damage, then the magnesium oxide will absorb moisture and lose its insulating properties. Installation of MI cable takes more time than installation of a PVC-sheathed armoured cable of the same conductor size. Installation of MICC is therefore a costly task.
MI cable is only manufactured with ratings up to 1000 volts.
The magnesium oxide insulation has a high affinity for moisture. Moisture introduced into the cable can cause electrical leakage from the internal conductors to the metal sheath. Moisture absorbed at a cut end of the cable may be driven off by heating the cable. If the MI cable jacket has been damaged, the magnesium oxide will wick moisture into the cable, and it will lose its insulating properties, causing shorts to the copper cladding and thence to earth. It is often necessary to remove a length of the MI cable and splice in a new section to accomplish the repair. Depending on the size and number of conductors, a single termination can be a large undertaking to repair.
Alternatives
Circuit integrity for conventional plastic-insulated cables requires additional measures to obtain a fire-resistance rating or to lower the flammability and smoke contributions to a minimum degree acceptable for certain types of construction. Sprayed-on coatings or flexible wraps cover the plastic insulation to protect it from flame and reduce its flame-spreading ability. However, since these coatings reduce the heat dissipation of the cables, often they must be rated for less current after application of fire-resistant coatings. This is called current capacity derating. It can be tested through the use of IEEE 848 Standard Procedure for the Determination of the Ampacity Derating of Fire-Protected Cables.
See also
Listing and approval use and compliance
Passive fire protection
Circuit integrity
Fireproofing
Cable tray
Copper wire and cable
References
Power cables
Passive fire protection
Electrical wiring | Mineral-insulated copper-clad cable | [
"Physics",
"Engineering"
] | 2,024 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
2,949,555 | https://en.wikipedia.org/wiki/XENON | The XENON dark matter research project, operated at the Italian Gran Sasso National Laboratory, is a deep underground detector facility featuring increasingly ambitious experiments aiming to detect hypothetical dark matter particles. The experiments aim to detect particles in the form of weakly interacting massive particles (WIMPs) by looking for rare nuclear recoil interactions in a liquid xenon target chamber. The current detector consists of a dual phase time projection chamber (TPC).
The experiment detects scintillation and ionization signals produced when external particles interact in the liquid xenon volume, to search for an excess of nuclear recoil events against known backgrounds. The detection of such a signal would provide the first direct experimental evidence for dark matter candidate particles. The collaboration is currently led by Italian professor of physics Elena Aprile from Columbia University.
Detector principle
The XENON experiment operates a dual phase time projection chamber (TPC), which utilizes a liquid xenon target with a gaseous phase on top. Two arrays of photomultiplier tubes (PMTs), one at the top of the detector in the gaseous phase (GXe), and one at the bottom of the liquid layer (LXe), detect scintillation and electroluminescence light produced when charged particles interact in the detector. Electric fields are applied across both the liquid and gaseous phase of the detector. The electric field in the gaseous phase has to be sufficiently large to extract electrons from the liquid phase.
Particle interactions in the liquid target produce scintillation and ionization. The prompt scintillation light produces 178 nm ultraviolet photons. This signal is detected by the PMTs, and is referred to as the S1 signal. The applied electric field prevents recombination of all the electrons produced from a charged particle interaction in the TPC. These electrons are drifted to the top of the liquid phase by the electric field. The ionization is then extracted into the gas phase by the stronger electric field in the gaseous phase. The electric field accelerates the electrons to the point that it creates a proportional scintillation signal that is also collected by the PMTs, and is referred to as the S2 signal. This technique has proved sensitive enough to detect S2 signals generated from single electrons.
The detector allows for a full 3-D position determination of the particle interaction. Electrons in liquid xenon have a uniform drift velocity. This allows the interaction depth of the event to be determined by measuring the time delay between the S1 and S2 signal. The position of the event in the x-y plane can be determined by looking at the number of photons seen by each of the individual PMTs. The full 3-D position allows for the fiducialization of the detector, in which a low-background region is defined in the inner volume of the TPC. This fiducial volume has a greatly reduced rate of background events as compared to regions of the detector at the edge of the TPC, due to the self-shielding properties of liquid xenon. This allows for a much higher sensitivity when searching for very rare events.
Charged particles moving through the detector are expected to either interact with the electrons of the xenon atoms producing electronic recoils, or with the nucleus, producing nuclear recoils. For a given amount of energy deposited by a particle interaction in the detector, the ratio of S2/S1 can be used as a discrimination parameter to distinguish electronic and nuclear recoil events. This ratio is expected to be greater for electronic recoils than for nuclear recoils. In this way backgrounds from electronic recoils can be suppressed by more than 99%, while simultaneously retaining 50% of the nuclear recoil events.
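As an illustration of the reconstruction and discrimination logic described above, the following Python sketch derives an interaction depth from the S1–S2 time delay and applies a simple S2/S1 cut. The drift velocity and cut threshold are illustrative assumptions only, not the calibrated values used by the XENON collaboration, and the real discrimination is energy-dependent rather than a single fixed ratio.

```python
from dataclasses import dataclass

DRIFT_VELOCITY_MM_PER_US = 1.7   # assumed electron drift velocity in liquid xenon (illustrative)
S2_OVER_S1_CUT = 50.0            # assumed discrimination threshold (illustrative, not calibrated)

@dataclass
class Event:
    s1: float             # prompt scintillation signal (photoelectrons)
    s2: float             # delayed electroluminescence signal (photoelectrons)
    drift_time_us: float  # delay between S1 and S2 in microseconds

def interaction_depth_mm(event: Event) -> float:
    """Depth below the liquid surface, from the uniform electron drift velocity."""
    return event.drift_time_us * DRIFT_VELOCITY_MM_PER_US

def is_nuclear_recoil_like(event: Event) -> bool:
    """Nuclear recoils give a smaller S2/S1 ratio than electronic recoils."""
    return (event.s2 / event.s1) < S2_OVER_S1_CUT

if __name__ == "__main__":
    candidate = Event(s1=20.0, s2=800.0, drift_time_us=40.0)
    print(f"depth: {interaction_depth_mm(candidate):.1f} mm")
    print("nuclear-recoil-like" if is_nuclear_recoil_like(candidate) else "electronic-recoil-like")
```

The x-y position, obtained from the PMT hit pattern, would complete the 3-D reconstruction used to define the fiducial volume.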
XENON10
The XENON10 experiment was installed at the underground Gran Sasso laboratory in Italy during March 2006. The underground location of the laboratory provides 3100 m of water-equivalent shielding. The detector was placed within a shield to further reduce the background rate in the TPC. XENON10 was intended as a prototype detector, to prove the efficacy of the XENON design, as well as verify the achievable threshold, background rejection power and sensitivity. The XENON10 detector contained 15 kg of liquid xenon. The sensitive volume of the TPC measures 20 cm in diameter and 15 cm in height.
An analysis of 59 live days of data, taken between October 2006 and February 2007, produced no WIMP signatures. The number of events observed in the WIMP search region is statistically consistent with the expected number of events from electronic recoil backgrounds. This result excluded some of the available parameter space in minimal Supersymmetric models, by placing limits on spin independent WIMP-nucleon cross sections down to below for a WIMP mass.
Due to nearly half of natural xenon having odd spin states (129Xe has an abundance of 26% and spin-1/2; 131Xe has an abundance of 21% and spin-3/2), the XENON detectors can also be used to provide limits on spin dependent WIMP-nucleon cross sections for coupling of the dark matter candidate particle to both neutrons and protons. XENON10 set the world's most stringent restrictions on pure neutron coupling.
XENON100
The second phase detector, XENON100, contains 165 kg of liquid xenon, with 62 kg in the target region and the remaining xenon in an active veto. The TPC of the detector has a diameter of 30 cm and a height of 30 cm. As WIMP interactions are expected to be extremely rare events, a thorough campaign was launched during the construction and commissioning phase of XENON100 to screen all parts of the detector for radioactivity. The screening was performed using high-purity Germanium detectors. In a few cases mass spectrometry was performed on low mass plastic samples. In doing so the design goal of <10−2 events/kg/day/keV was reached, realising the world's lowest background rate dark matter detector.
The detector was installed at the Gran Sasso National Laboratory in 2008 in the same shield as the XENON10 detector, and has conducted several science runs. In each science run, no dark matter signal was observed above the expected background, leading to the most stringent limit on the spin independent WIMP-nucleon cross section in 2012, with a minimum at for a WIMP mass. These results constrain interpretations of signals in other experiments as dark matter interactions, and rule out exotic models such as inelastic dark matter, which would resolve this discrepancy. XENON100 has also provided improved limits on the spin dependent WIMP-nucleon cross section. An axion result was published in 2014, setting a new best axion limit.
At the time, XENON100 was the lowest-background experiment for dark matter searches, with a background of 50 (1 = 10−3 events/kg/day/keV).
XENON1T
Construction of the next phase, XENON1T, started in Hall B of the Gran Sasso National Laboratory in 2014. The detector contains 3.2 tons of ultra radio-pure liquid xenon, and has a fiducial volume of about 2 tons. The detector is housed in a 10 m water tank that serves as a muon veto. The TPC is 1 m in diameter and 1 m in height.
The detector project team, called the XENON Collaboration, is composed of 135 investigators across 22 institutions from Europe, the Middle East, and the United States.
The first results from XENON1T were released by the XENON collaboration on May 18, 2017, based on 34 days of data-taking between November 2016 and January 2017. While no WIMPs or dark matter candidate signals were officially detected, the team did announce a record low reduction in the background radioactivity levels being picked up by XENON1T. The exclusion limits exceeded the previous best limits set by the LUX experiment, with an exclusion of cross sections larger than for WIMP masses of . Because some signals that the detector receives might be due to neutrons, reducing the radioactivity increases the sensitivity to WIMPs.
In September 2018 the XENON1T experiment published its results from 278.8 days of collected data. A new record limit for WIMP-nucleon spin-independent elastic interactions was set, with a minimum of at a WIMP mass of .
In April 2019, based on measurements performed with the XENON1T detector, the XENON Collaboration reported in Nature the first direct observation of two-neutrino double electron capture in xenon-124 nuclei. The measured half-life of this process, which is several orders of magnitude larger than the age of the Universe, demonstrates the capabilities of xenon-based detectors to search for rare events and showcases the broad physics reach of even larger next-generation experiments. This measurement represents a first step in the search for the neutrinoless double electron capture process, the detection of which would provide insight into the nature of the neutrino and allow to determine its absolute mass.
As of 2019, the XENON1T experiment has stopped data-taking to allow for construction of the next phase, XENONnT. The XENON1T detector operated 2016–2018, with the detector operations ending at the end of 2018.
In June 2020, the XENON1T collaboration reported an excess of electron recoils: 285 events, 53 more than the expected 232, with a statistical significance of 3.5σ. Three explanations were considered: the existence of hitherto-hypothetical solar axions, a surprisingly large magnetic moment for neutrinos, and tritium contamination in the detector. Multiple other explanations were later given by other groups, and in 2021 an interpretation of the results not as dark matter particles but as candidate dark energy particles called chameleons was also discussed. In July 2022 a new analysis by XENONnT discarded the excess.
XENONnT
XENONnT is an upgrade of the XENON1T experiment underground at LNGS. Its systems will contain a total xenon mass of more than 8 tonnes. Apart from a larger xenon target in its time projection chamber the upgraded experiment will feature new components to further reduce or tag radiation that otherwise would constitute background to its measurements. It is designed to reach a sensitivity (in a small part of the mass-range probed) where neutrinos become a significant background. As of 2019, the upgrade was on-going and first light was expected in 2020.
The XENONnT detector was under construction in March 2020. Even with the problems posed by COVID-19, the project was able to finish construction and move forwards into commissioning phase by mid 2020. Full detector operations commenced in late 2020. In September 2021, XENONnT was taking science data for its first science run, which was ongoing at the time.
On 28 July 2023 the XENONnT collaboration published the first results of its search for WIMPs, excluding cross sections above at 28 GeV with 90% confidence level; on the same date, the LZ experiment published its first results as well, excluding cross sections above at 36 GeV with 90% confidence level.
References
Further reading
External links
The XENON Experiment
XENON home page at the University of Chicago
XENON home page at Columbia University
XENON home page at the University of Zurich
XENON home page at Rice University
XENON home page at Brown University
Katsuhi Arisaka, XENON at University of California, Los Angeles
Dark matter limit plotter with the latest results from XENON and other experiments
Enlightening the dark, CERN Courier, Sep 27, 2013
Experiments for dark matter search | XENON | [
"Physics"
] | 2,429 | [
"Dark matter",
"Experiments for dark matter search",
"Unsolved problems in physics"
] |
2,949,599 | https://en.wikipedia.org/wiki/Indiana%20Asteroid%20Program | The Indiana Asteroid Program was a photographic astronomical survey of asteroids during 1949–1967, at the U.S. Goethe Link Observatory near Brooklyn, Indiana. The program was initiated by Frank K. Edmondson of Indiana University using a 10-inch f/6.5 Cooke triplet astrographic camera.
Its objectives included recovering asteroids that were far from their predicted positions, making new orbital calculations or revising old ones, deriving magnitudes accurate to about 0.1 mag, and training students.
When the observatory's 36-inch (0.91-meter) reflecting telescope proved unsuitable for searching for asteroids, postdoctoral fellow James Cuffey arranged the permanent loan of a 10-inch (25-centimeter) lens from the University of Cincinnati. Mounted in a shed near the main observatory, the instrument using the borrowed lens was responsible for all of the program's discoveries.
By 1958, the program had produced 3,500 photographic plates showing 12,000 asteroid images and had published about 2,000 accurate positions in the Minor Planet Circular. When the program ended in 1967, it had discovered a total of 119 asteroids. The program's highest numbered discovery, 30718 Records, made in 1955, was not named until November 2007.
The program ended when the lights of the nearby city of Indianapolis became too bright to permit the long exposures required for the photographic plates. The program's nearly 7,000 photographic plates are now archived at Lowell Observatory.
List of discovered minor planets
The Indiana Asteroid Program has discovered 119 asteroids during 1949–1966. The Minor Planet Center officially credits these discoveries to "Indiana University" rather than to the program itself.
References
Astrometry
Astronomical discoveries by institution
Indiana University | Indiana Asteroid Program | [
"Astronomy"
] | 343 | [
"Astrometry",
"Astronomical sub-disciplines"
] |
2,949,636 | https://en.wikipedia.org/wiki/Parotid%20duct | The parotid duct or Stensen duct is a salivary duct. It is the route that saliva takes from the major salivary gland, the parotid gland, into the mouth. It opens into the mouth opposite the second upper molar tooth.
Structure
The parotid duct is formed when several interlobular ducts, the largest ducts inside the parotid gland, join. It emerges from the parotid gland. It runs forward along the lateral side of the masseter muscle for around 7 cm. In this course, the duct is surrounded by the buccal fat pad. It takes a steep turn at the border of the masseter and passes through the buccinator muscle, opening into the vestibule of the mouth, the region of the mouth between the cheek and the gums, at the parotid papilla, which lies across the second maxillary (upper) molar tooth. The exits of the parotid ducts can be felt as small bumps (papillae) on both sides of the mouth, usually positioned next to the maxillary second molar.
The buccinator acts as a valve that prevents air from being forced into the duct, which would cause pneumoparotitis.
Relations
The parotid duct lies close to the buccal branch of the facial nerve (VII). It is also close to the transverse facial artery.
Running along with the duct superiorly is the transverse facial artery, and the upper buccal nerve. The lower buccal nerve runs inferiorly along the duct.
Clinical significance
Blockage, whether caused by salivary duct stones or external compression, may cause pain and swelling of the parotid gland (parotitis).
Koplik's spots which are pathognomonic of measles are found near the opening of the parotid duct.
The parotid duct may be cannulated by inserting a tube through the internal orifice in the mouth. Dye may be injected to allow for imaging of the parotid duct.
History
The parotid duct is named after Nicolas Steno (1638–1686), also known as Niels Stensen, a Danish anatomist (albeit best known as a geologist) credited with its detailed description in 1660. This is where the alternative name "Stensen duct" originates from.
Additional images
See also
Parotid gland
Parotitis
References
Further reading
External links
Diagram at MSU
- Parotid duct injuries
Glands of mouth
Saliva | Parotid duct | [
"Biology"
] | 516 | [
"Saliva",
"Excretion"
] |
2,949,850 | https://en.wikipedia.org/wiki/Logical%20access%20control | In computers, logical access controls are tools and protocols used for identification, authentication, authorization, and accountability in computer information systems. Logical access is often needed for remote access of hardware and is often contrasted with the term "physical access", which refers to interactions (such as a lock and key) with hardware in the physical environment, where equipment is stored and used.
Models
Logical access controls enforce access control measures for systems, programs, processes, and information. The controls can be embedded within operating systems, applications, add-on security packages, or database and telecommunication management systems.
The line between logical access and physical access can be blurred when physical access is controlled by software. For example, entry to a room may be controlled by a chip and PIN card and an electronic lock controlled by software. Only those in possession of an appropriate card, with an appropriate security level and with knowledge of the PIN, are permitted entry to the room, which is gained by swiping the card through a card reader and entering the correct PIN code.
Logical controls, also called logical access controls and technical controls, protect data and the systems, networks, and environments that protect them. In order to authenticate, authorize, or maintain accountability, a variety of methodologies are used, such as password protocols, devices coupled with protocols and software, encryption, firewalls, or other systems that can detect intruders, maintain security, reduce vulnerabilities, and protect the data and systems from threats.
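The following Python sketch illustrates, in a simplified and hedged form, how the identification, authentication, and authorization steps fit together using a salted password hash and a role check. The in-memory user store and the role model are assumptions for illustration only, not a description of any particular product or standard.

```python
import hashlib
import hmac
import os

# Hypothetical user store: salted password hashes plus an authorization level.
# In a real system this would live in a protected database, not in memory.
_users = {}

def register(username: str, password: str, role: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _users[username] = {"salt": salt, "digest": digest, "role": role}

def authenticate(username: str, password: str) -> bool:
    """Identification + authentication: verify the claimed identity."""
    record = _users.get(username)
    if record is None:
        return False
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), record["salt"], 100_000)
    return hmac.compare_digest(attempt, record["digest"])

def authorize(username: str, required_role: str) -> bool:
    """Authorization: check the authenticated user's access level."""
    record = _users.get(username)
    return record is not None and record["role"] == required_role

if __name__ == "__main__":
    register("alice", "correct horse battery staple", role="admin")
    if authenticate("alice", "correct horse battery staple") and authorize("alice", "admin"):
        print("access granted")  # accountability would also log this decision
    else:
        print("access denied")
```

A production system would add account lockout, audit logging for accountability, and a hardened credential store, but the sequence of checks remains the same.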
Businesses, organizations and other entities use a wide spectrum of logical access controls to protect hardware from unauthorized remote access. These can include sophisticated password programs, advanced biometric security features, or any other setups that effectively identify and screen users at any administrative level.
The particular logical access controls used in a given facility and hardware infrastructure partially depend on the nature of the entity that owns and administrates the hardware setup. Government logical access security is often different from business logical access security, where federal agencies may have specific guidelines for controlling logical access. Users may be required to hold security clearances or go through other screening procedures that complement secure password or biometric functions. This is all part of protecting the data kept on a specific hardware setup.
Militaries and governments use logical access biometrics to protect their large and powerful networks and systems, which require very high levels of security. It is essential for the large networks of police forces and militaries, where it is used not only to gain access but also in six main essential applications. Without logical access control security systems, highly confidential information would be at risk of exposure.
There is a wide range of biometric security devices and software available for different levels of security needs. There are very large complex biometric systems for large networks that require absolute airtight security and there are less expensive systems for use in office buildings and smaller institutions.
Notes
References
Andress, Jason. (2011). ″The Basics of Information Security.″
Cory Janssen, Logical Access, Techopedia, retrieved at 3:15 a.m. on August 12, 2014
findBIOMETRICS, Logical Access Control Biometrics, retrieved at 3:25 a.m. on August 12, 2014
External links
RSA Intelligence Driven Security, EMC Corporation
Computer access control | Logical access control | [
"Engineering"
] | 651 | [
"Cybersecurity engineering",
"Computer access control"
] |
2,950,023 | https://en.wikipedia.org/wiki/BlackDog | The BlackDog is a pocket-sized, self-contained computer with a built-in biometric fingerprint reader which was developed in 2005 by Realm Systems, which is plugged into and powered by the USB port of a host computer using its peripheral devices for input and output.
It is a mobile personal server which allows a user to use Linux, one's own applications, and data on any computer with a USB port. The host machine's monitor, keyboard, mouse, and Internet connection are used by the BlackDog for the duration of the session. As the system is self-contained and isolated from the host, requiring no additional installation, it is possible to make use of untrusted computers while still working on a secure system. Various hardware iterations exist; the original developer Realm Systems closed down in 2007, and the product was picked up by its successor Inaura, Inc.
Hardware history
Original Black Dog & Project BlackDog Skills Contest
Identified as the BlackDog, the Project BlackDog, or Original BlackDog, the first hardware version was touted as "unlike any other mobile computing device, BlackDog contains its own processor, memory and storage, and is completely powered by the USB port of a host computer with no external power adapter required."
It was created in conjunction with Realm System's Project BlackDog Skills Contest (announced on Oct 27, 2005) which was supposed to raise interest, and create a developer community surrounding the product. The BlackDog was publicly available for purchase from the Project BlackDog website in September 2005 for those who wished to enter the contest or to experiment with the platform. Production ended in mid January 2006 when the contest closed.
On 7 February 2006, the winners of the contest were announced for the categories: Security (Michael Chenetz), Entertainment (Michael King), Productivity (Terry Bayne) and "Dogpile" (Paul Chandler). On Feb 15, 2006, during the Open Source Business Conference, San Francisco, Terry Bayne was announced the grand prize winner of the contest and received US$50,000 for his creation "Kibble," a tool for building integration solutions between the host PC and the BlackDog device using a SOAP-based RPC mechanism to send arbitrary LUA code to be executed on the host PC from the BlackDog. At this conference, the second iteration of the BlackDog, the K9 was publicly announced.
K9
Identified as the K9 Ultra-Mobile Server, or K9, this version was announced at the Open Source Business Conference in February 2006 with expected availability in the third quarter of 2006. However, company turbulences (see Company History below) prevented the K9 from being sold until early 2009 by Inaura, Inc.
Promotional literature shows the form factor to be the same as the intermediate iD3 prototype: a very thin chrome model resembling an iPod Nano, but all black with a rubberized exterior. Before Realm Systems shut down, there were working prototypes of the K9, the hardware design seemed to be finished, and the software was functional.
In terms of hardware, it differed from the Original BlackDog in these aspects:
128 MB RAM
1GB Flash NAND memory
60-pin Hirose connector replaces MMC slot (intended for a USB connection cable, as well as custom cables to support additional peripherals)
OLED display replaces indicator LED of first version (1.1 inch display, 96x64 resolution, 4-bit grayscale Black and White)
Dimensions, H×W×L:
iD3
The iD3 was a variant of the K9, using the same hardware specifications, intended for corporate use with a matching management router/server identified as the iD1200. It was announced as being part of the iDentity product series and was, for instance, showcased on the Embedded Systems Conference in San Jose, CA (April 3–7, 2006). The final Realm Systems iD3 form factor resembled a small Nokia cellphone.
Software
The software was originally based on Debian; in 2008, the project switched to Olmec Linux.
Debian Linux (pre-2008)
When plugged into a USB port of a Windows XP machine, the BlackDog initially presented itself to the host as a virtual CD-ROM drive. Via an autorun application, the BlackDog then automatically launched Xming, an X Window System server for Windows, and a software NAT router. Once those applications were running, the virtual USB CD-ROM drive disconnected, and the device presented itself as a virtual Ethernet adapter, enabling network access. Without requiring any installation or user interaction, the user could access the contained applications and data from any Windows computer. With further configuration steps, it was also possible to run the BlackDog on Linux and Mac computers.
A short Engadget review stated that "it runs Firefox fine, and should be great for taking your own browser, e-mail, and chat clients for use wherever you are, though that will probably be about all this little 400MHz guy can handle."
The first software version was based on Debian Sid running a 2.6.10 Kernel. It contained some sample default applications such as xterm, XBlast, and XGalaga and allowed installation of the Firefox web browser, an email client and other additional software available through official and community APT repositories hosted by the project.
The company attempted to stimulate the creation of further applications and use cases for the BlackDog by building up a community. The project and discussion infrastructure, termed DogPound, used an installation of the project hosting software (SourceForge). An SDK with a QEMU emulator environment for Windows XP, Linux and Mac OS X was released to facilitate the creation and porting of applications to the BlackDog system.
Although most of the BlackDog software was free software, the device contained some proprietary technology and intellectual property developed by Realm Systems Inc., which was later transferred to Echo Identity Systems and finally ended up belonging to Inaura Corporation.
The official repository for the project disappeared in mid-February 2007 due to Realm Systems Inc. closing and was reactivated by its successor Inaura Inc. as of late-June 2007. (There does not seem to be a repository in Nov. 2011). Until sometime in 2009, Michael King (winner of the Entertainment category of the contest) maintained an independent backup of the official repositories and discussion groups as well as repositories for other developers at the now-defunct Saint Louis, MO based ArchLUG website.
The official website for the project www.projectblackdog.org still appears to be up as of December 2013, but has been defaced by several "quick cash" money lenders that have compromised the site via the WordPress content management system it uses. There does not appear to be any other original content remaining other than the homepage and the advertisements for the money lending sites.
Olmec Linux (2008 onwards)
Starting in late 2007, Olmec Linux, a Debian-derived Linux distribution geared toward small embedded platforms such as the Gumstix, was ported to the BlackDog and K9 devices. When sold as part of the Inaura Inc. product offering, the BlackDog/K9 used the Olmec-based version.
Realm Systems Corporate history
Realm Systems Inc. was founded in 2002 and based in Salt Lake City, Utah, raising $8.5 million in its Round A led by GMG Capital, with Rick White as CEO. It described itself as "[providing] a next generation Mobile Enterprise Platform that simplifies the delivery of applications and services to end-users across the distributed enterprise."
During 2006, Realm Systems focuses on their iD3 line of products and the K9 product launch was put off indefinitely. In January 2007, two then-unidentified groups containing former Realm Systems employees and investors attempted, independently, to license or move the K9 hardware and software to a separate company to continue development and production, due to the dissolution of Realm Systems and continued developer community interest in the concept, as well as rumored successful pilot programs.
One of Realm Systems' backers then posted a public foreclosure notice, and in a court-supervised foreclosure hearing a number of investors bid on the company's assets in a closed bid. As a result, all of Realm's assets, including the iD3 and K9-series hardware, their operating systems, and the enterprise management router code, were bought out by a new firm, Echo Identity Systems, which was registered as a Salt Lake City company on February 1, 2007. This company claimed to be continuing the enterprise product line, and re-used nearly all of the old Realm Systems website layout and graphics. No mention of the K9 product line was made anywhere on the Echo Identity Systems website. The former Realm Systems website redirected iD3 and BlackDog customers to a transitional support website informing about the asset change, Realm Systems closing, and that product support would be done by Echo Identity Systems (though no explanation as to the extent of the support is provided).
It appears that assets were soon bought back from Echo Identity Systems by a group of investors backed by former Realm Systems employees and investors. Based on unconfirmed community reports (September 2007), it appears new developer prototypes of the K9 were seeded to the Project BlackDog contest winners. In November 2007, the new owners emerged as Inaura Inc., with CEO and president Peter Bookman, one of the original co-founders of Realm Systems. The CFO of the new company is Rodney Rasmussen, who had registered both Echo Identity Systems and Inaura as Utah companies. They set up a sparse web site lacking specific product descriptions. Inaura Inc. describes itself as "formerly known as" Realm Systems Inc., and Echo Identity Systems expired as a company in June 2008.
In 2008, the Inaura Inc. website was updated to provide details on the company and product. The K9 device is now being branded as the K9 Ultra Mobile Authentication Key (UMAK) and marketed as "solving the problem of trust within all computing environments". It refers to "the two iterations of UMAKs" as "the K9 and the BlackDog".
The K9 product seems to have been publicly sold since early 2009.
In February 2010, the company registration expired for "failing to file for renewal" and was only re-registered in September 2011, expiring again in January 2013.
References
External links
Project BlackDog Homepage/SDK site
Inaura Company Homepage
PC Plus (UK) review of the BlackDog (incl. picture)
Geek.com review
Linux
Mobile computers
Linux-based devices
Computer storage devices | BlackDog | [
"Technology"
] | 2,167 | [
"Computer storage devices",
"Recording devices"
] |
311,330 | https://en.wikipedia.org/wiki/Aripiprazole | Aripiprazole, sold under the brand names Abilify and Aristada, among others, is an atypical antipsychotic primarily used in the treatment of schizophrenia, bipolar disorder, and irritability associated with autism spectrum disorder; other uses include as an add-on treatment for major depressive disorder and tic disorders. Aripiprazole is taken by mouth or via injection into a muscle. A Cochrane review found low-quality evidence of effectiveness in treating schizophrenia.
Common side effects include restlessness, insomnia, transient weight gain, nausea, vomiting, constipation, dizziness, and mild sedation. Serious side effects may include neuroleptic malignant syndrome, tardive dyskinesia, and anaphylaxis. It is not recommended for older people with dementia-related psychosis due to an increased risk of death. In pregnancy, there is evidence of possible harm to the fetus. It is not recommended in women who are breastfeeding. It has not been very well studied in people less than 18 years old.
Aripiprazole was approved for medical use in the United States in 2002. It is available as a generic medication. In 2022, it was the 106th most commonly prescribed medication in the United States, with more than 6 million prescriptions. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Aripiprazole is primarily used for the treatment of schizophrenia or bipolar disorder.
Schizophrenia
The 2016 National Institute for Health and Care Excellence (NICE) guidance for treating psychosis and schizophrenia in children and young people recommended aripiprazole as a second-line treatment after risperidone for people between 15 and 17 who are having an acute exacerbation or recurrence of psychosis or schizophrenia. A 2014 NICE review of the depot formulation of the drug found that it might have a role in treatment as an alternative to other depot formulations of second-generation antipsychotics for people who have trouble taking medication as directed or who prefer it.
A 2014 Cochrane review comparing aripiprazole and other atypical antipsychotics found that it is difficult to determine differences as data quality is poor. A 2011 Cochrane review comparing aripiprazole with placebo concluded that high dropout rates in clinical trials, and a lack of outcome data regarding general functioning, behavior, mortality, economic outcomes, or cognitive functioning make it difficult to definitively conclude that aripiprazole is useful for the prevention of relapse. A Cochrane review found only low-quality evidence of effectiveness in treating schizophrenia; under the Cochrane methodology, the quality-of-evidence rating is based partly on the quantity of qualifying studies.
A 2013 review placed aripiprazole in the middle range of 15 antipsychotics for effectiveness, approximately as effective as haloperidol and quetiapine and slightly more effective than ziprasidone, chlorpromazine, and asenapine, with better tolerability compared to the other antipsychotic drugs (4th best for reducing weight gain, 5th best for reducing extrapyramidal symptoms, best for reducing prolactin levels, 2nd best for QTc interval prolongation, and 5th best for sedative symptoms). The authors concluded that for acute psychotic episodes, aripiprazole results in benefits in some aspects of the condition.
In 2013 the World Federation of Societies for Biological Psychiatry recommended aripiprazole for the treatment of acute exacerbations of schizophrenia as a Grade 1 recommendation and evidence level A.
The British Association for Psychopharmacology similarly recommends that all persons presenting with psychosis receive treatment with an antipsychotic and that such treatment should continue for at least 1–2 years, as "There is no doubt that antipsychotic discontinuation is strongly associated with relapse during this period". The guideline further notes that "Established schizophrenia requires continued maintenance with doses of antipsychotic medication within the recommended range (Evidence level A)".
The British Association for Psychopharmacology and the World Federation of Societies for Biological Psychiatry suggest that there is little difference in effectiveness between antipsychotics in the prevention of relapse, and recommend that the specific choice of antipsychotic be chosen based on each person's preference and side effect profile. The latter group recommends switching to aripiprazole when excessive weight gain is encountered during treatment with other antipsychotics.
Bipolar disorder
Aripiprazole is effective for the treatment of acute manic episodes of bipolar disorder in adults, children, and adolescents. Used as maintenance therapy, it is useful for the prevention of manic episodes but is not useful for bipolar depression. Thus, it is often used in combination with an additional mood stabilizer; however, co-administration with a mood stabilizer increases the risk of extrapyramidal side effects. In September 2014, aripiprazole had a UK marketing authorization for up to twelve weeks of treatment for moderate to severe manic episodes in bipolar I disorder in young people aged thirteen and older. Aripiprazole in low doses of 2.5 mg can cause mania in those with bipolar disorder.
Depression
Aripiprazole is an effective add-on treatment for major depressive disorder; however, there is a greater rate of side effects such as weight gain and movement disorders. The overall benefit is small to moderate, and its use appears to neither improve quality of life nor functioning. Aripiprazole may interact with some antidepressants, especially selective serotonin reuptake inhibitors (SSRIs) that are metabolized by CYP2D6. There are known interactions with fluoxetine and paroxetine, and there appear to be lesser interactions with sertraline, escitalopram, citalopram and fluvoxamine. CYP2D6 inhibitors increase aripiprazole concentrations to 2–3 times their normal level. When strongly CYP2D6-inhibiting SSRIs (such as fluoxetine or paroxetine) are co-administered, the FDA recommends dose monitoring, although it is not clear whether the SSRI dose should be lowered.
Autism
Short-term data (8 weeks) shows reduced irritability, hyperactivity, inappropriate speech, and stereotypy, but no change in lethargic behaviours. Adverse effects include weight gain, sleepiness, drooling, and tremors. It is suggested that children and adolescents need to be monitored regularly while taking this medication to evaluate if this treatment option is still effective after long-term use and note if side effects are worsening. Further studies are needed to understand if this drug is helpful for children after long-term use.
Tic disorders
Aripiprazole is approved for the treatment of Tourette syndrome and other tic disorders. It is effective, safe, and well-tolerated for this use per systematic reviews and meta-analyses.
Obsessive-compulsive disorder
A 2014 systematic review and meta-analysis concluded that add-on therapy with low-dose aripiprazole is an effective treatment for obsessive-compulsive disorder (OCD) that does not improve with selective serotonin reuptake inhibitors (SSRIs) alone. The conclusion was based on the results of two relatively small, short-term trials, each of which demonstrated improvements in symptoms. However, aripiprazole is cautiously recommended by a 2017 review on antipsychotics for OCD. Aripiprazole is not currently approved for the treatment of OCD and is instead used off-label for this indication. Depending on the dose, aripiprazole can increase impulse control issues in a small percentage of people. The FDA Drug Safety Communication warned about this side effect.
Available forms
Aripiprazole is available in the form of oral tablets, orally disintegrating tablets, oral solutions, oral films, and as injectables for intramuscular administration. It is also available in the form of aripiprazole lauroxil, a lipophilic ester prodrug of aripiprazole for use as a long-acting injectable.
Contraindications
Contraindications to aripiprazole include known hypersensitivity to aripiprazole, among others.
Adverse effects
In the elderly with dementia, there is an increased risk of death.
In children, adolescents, and young adults, there is an increased risk of suicide.
In adults, side effects with greater than 10% incidence include weight gain, mania, headache, akathisia, insomnia, delirium, and gastrointestinal effects like nausea, constipation, and lightheadedness. Side effects in children are similar, and include sleepiness, increased appetite, and stuffy nose. A strong desire to gamble, binge eat, shop, and engage in sexual activity may also occur rarely. These urges can be uncontrollable.
Uncontrolled movement such as restlessness, tremors, and muscle rigidity may occur.
Discontinuation
The British National Formulary recommends a gradual withdrawal when discontinuing antipsychotics to avoid acute withdrawal syndrome or rapid relapse. Symptoms of withdrawal commonly include nausea, vomiting, and loss of appetite. Other symptoms may include restlessness, increased sweating, and trouble sleeping. Less commonly there may be a feeling of the world spinning, numbness, or muscle pains. Symptoms generally resolve after a short period of time.
There is tentative evidence that discontinuation of antipsychotics can result in psychosis as a part of a withdrawal syndrome. It may also result in reoccurrence of the condition that is being treated. Rarely tardive dyskinesia can occur when the medication is stopped.
Overdose
Children or adults who ingested acute overdoses have usually manifested central nervous system depression ranging from mild sedation to coma; serum concentrations of aripiprazole and dehydroaripiprazole in these people were elevated by up to 3–4 fold over normal therapeutic levels; as of 2008, no deaths had been recorded.
Interactions
Aripiprazole is a substrate of CYP2D6 and CYP3A4. Coadministration with medications that inhibit (e.g. paroxetine, fluoxetine) or induce (e.g. carbamazepine) these metabolic enzymes is known to increase and decrease, respectively, plasma levels of aripiprazole.
Precautions should be taken in people with an established diagnosis of diabetes mellitus who are started on atypical antipsychotics along with other medications that affect blood sugar levels and should be monitored regularly for worsening of glucose control. The liquid form (oral solution) of this medication may contain up to 15 grams of sugar per dose.
Antipsychotics like aripiprazole and stimulant medications, such as amphetamine, are traditionally thought to have opposing effects because both drugs affect dopaminergic neurons. However, both stimulants and antipsychotics lead to increases in synaptic dopamine levels. In antipsychotics, this is caused by the inhibition of dopamine autoreceptors as well as the effects of antipsychotics on non-dopaminergic receptors, while in amphetamine this is caused by non-competitive inhibition of dopamine reuptake and agonism of intracellular TAAR1. Therefore aripiprazole may interact with amphetamine to synergistically increase postsynaptic levels of dopamine. This interaction frequently occurs in the setting of comorbid attention deficit hyperactivity disorder (ADHD) (for which stimulants are commonly prescribed) and off-label treatment of aggression with antipsychotics. Aripiprazole has been reported to provide some benefit in improving cognitive functioning in people with ADHD without other psychiatric comorbidities, though the results have been disputed. The combination of antipsychotics like aripiprazole with stimulants should not be considered an absolute contraindication.
Pharmacology
Pharmacodynamics
Aripiprazole's mechanism of action is different from those of the other FDA-approved atypical antipsychotics (e.g., clozapine, olanzapine, quetiapine, ziprasidone, and risperidone). It shows differential engagement at the dopamine receptor (D2). Aripiprazole is a partial agonist at dopamine D2 receptors, a partial agonist at 5-HT1A receptors, and an antagonist or very weak partial agonist at 5-HT2A receptors.
It appears to show predominantly partial agonistic activity on postsynaptic D2 receptors and partial agonist activity on presynaptic D2 receptors, D3, and partially D4 and is a partial activator of serotonin (5-HT1A, 5-HT2A, 5-HT2B, 5-HT6, and 5-HT7). It also shows lower effect on histamine (H1), as well as the serotonin transporter. Aripiprazole acts by modulating neurotransmission overactivity of dopamine, which is thought to mitigate schizophrenia symptoms.
Studies to date confirm aripiprazole as an antagonist at alpha-adrenergic receptors such as α1A, α2A and α2C; the orthostatic hypotension observed with aripiprazole may be explained by its antagonist activity at adrenergic α1A receptors.
As a pharmacologically unique antipsychotic with pronounced functional selectivity, characterization of this dopamine D2 partial agonist (with an intrinsic activity of ~50%) as being similar to a full agonist but at a reduced level of activity presents a misleading oversimplification of its actions; for example, among other effects, aripiprazole has been shown, in vitro, to bind to and/or induce receptor conformations (i.e., facilitate receptor shapes) in such a way as to not only prevent receptor internalization (and, thus, lower receptor density) but even to lower the rate of receptor internalization below that of neurons not in the presence of agonists (including dopamine) or antagonists. It is often the nature of partial agonists, including aripiprazole, to display a stabilizing effect (such as on mood in this case), with agonistic activity when there are low levels of endogenous neurotransmitters (such as dopamine) and antagonistic activity in the presence of high levels of agonists associated with events such as mania, psychosis, and drug use. In addition to aripiprazole's partial agonism and functional selectivity characteristics, its effectiveness may be mediated by its very high dopamine D2 receptor occupancy (approximately 31%, 44%, 75%, 80%, and 95% at daily dosages of 0.5 mg, 2 mg, 10 mg, 30 mg and 40 mg respectively). Aripiprazole has been characterized as possessing predominantly partial agonist activity on postsynaptic D2 receptors and partial agonist activity on presynaptic D2 receptors; however, while this explanation intuitively accounts for the drug's efficacy as an antipsychotic, the degree of agonism is a function of more than a drug's inherent properties. Despite in vitro demonstration of aripiprazole's partial agonism in cells expressing postsynaptic (D2L) receptors, it was noted that "It is unlikely that the differential actions of aripiprazole as an agonist, antagonist, or partial agonist were entirely due to differences in relative D2 receptor expression since aripiprazole was an antagonist in cells with the highest level of expression (4.6 pmol/mg) and a partial agonist in cells with an intermediate level of expression (0.5–1 pmol/mg). Instead, the current data are most parsimoniously explained by the "functional selectivity" hypothesis of Lawler et al. (1999)". Aripiprazole is also a partial agonist of the D3 receptor. In healthy human volunteers, D2 and D3 receptor occupancy levels are high, with average levels ranging from approximately 75% at 2 mg/day to approximately 95% at 40 mg/day. Most atypical antipsychotics bind preferentially to extrastriatal receptors, but aripiprazole appears to be less preferential in this regard, as binding rates are high throughout the brain.
Aripiprazole is also a partial agonist of the postsynaptic serotonin 5-HT1A receptor (intrinsic activity = 68%). A PET scan study of 12 patients receiving doses ranging from 10 to 30 mg found 5-HT1A receptor occupancy to be only 16%, compared to ~90% for D2. It is a very weak partial agonist of the postsynaptic 5-HT2A receptor (intrinsic activity = 12.7%). The drug differs from other atypical antipsychotics in having higher affinity for the D2 receptor than for the 5-HT2A receptor. At the 5-HT2B receptor, aripiprazole has both high binding affinity and potent inverse agonist activity: "Aripiprazole decreased PI hydrolysis from a basal level of 61% down to a low of 30% at 1000 nM, with an EC50 of 11 nM". Unlike other antipsychotics, aripiprazole is a high-efficacy partial agonist of the postsynaptic 5-HT2C receptor (intrinsic activity = 82%); this property may underlie the minimal weight gain seen in the course of therapy, although if used while taking antidepressants it will become a functional antagonist and increase weight gain. At the presynaptic 5-HT7 receptor, aripiprazole is a very weak partial agonist with barely measurable intrinsic activity, and hence is a functional antagonist of this receptor. Aripiprazole also shows lower but likely clinically insignificant affinity for a number of other sites such as the serotonin transporter, while it has negligible affinity for the muscarinic acetylcholine receptors.
The actions of aripiprazole differ markedly across receptor systems: it was sometimes an antagonist (e.g., at 5-HT6), sometimes an inverse agonist (e.g., 5-HT2B), and sometimes a partial agonist (e.g., D2S, D3S, D4S, D2L). Aripiprazole was frequently found to be a partial agonist or full agonist, with an intrinsic activity that could be low (5-HT2A, 5-HT7), intermediate (D2L, 5-HT1A), or high (5-HT2C). This mixture of agonist actions at D2-dopamine receptors is consistent with the hypothesis that aripiprazole has "functionally selective" actions. The "functional-selectivity" hypothesis proposes that a mixture of agonist/partial agonist/antagonist actions is likely. According to this hypothesis, agonists may induce structural changes in receptor conformations that are differentially "sensed" by the local complement of G proteins to induce a variety of functional actions depending upon the precise cellular milieu. The diverse actions of aripiprazole at D2-dopamine receptors are clearly cell-type specific (e.g., agonism, antagonism, partial agonism), and are most parsimoniously explained by the "functional selectivity" hypothesis.
Since 5-HT2C receptors have been implicated in the control of depression, obsessive–compulsive disorder (OCD), and appetite, postsynaptic partial agonism at the 5-HT2C receptor might be associated with therapeutic potential in obsessive-compulsive disorder, obesity, and depression. 5-HT2C agonism has been demonstrated to induce anorexia via enhancement of serotonergic neurotransmission via activation of postsynaptic 5-HT2C receptors; it is conceivable that the 5-HT2C partial agonist actions of aripiprazole may, thus, be partly responsible for the minimal weight gain associated with this compound in clinical trials. In terms of potential action as an antiobsessional agent, it is worthwhile noting that a variety of 5-HT2A/5-HT2C agonists have shown promise as antiobsessional agents, yet many of these compounds are hallucinogenic. Aripiprazole has a favorable pharmacological profile in being a 5-HT2C partial agonist. Based on this profile, one can predict that aripiprazole may have antiobsessional and anorectic actions in humans.
Wood and Reavill's (2007) review of published and unpublished data proposed that, at therapeutically relevant doses, aripiprazole may act essentially as a selective partial agonist of the D2 receptor without significantly affecting the majority of serotonin receptors. A positron emission tomography imaging study found that 10 to 30 mg/day aripiprazole resulted in 85 to 95% occupancy of the D2 receptor in various brain areas (putamen, caudate, ventral striatum) versus 54 to 60% occupancy of the 5-HT2A receptor and only 16% occupancy of the 5-HT1A receptor. It has been suggested that the low occupancy of the 5-HT1A receptor by aripiprazole may have been an erroneous measurement however.
Aripiprazole acts by modulating neurotransmission overactivity on the dopaminergic mesolimbic pathway, which is thought to be a cause of positive schizophrenia symptoms. Due to its partial agonist activity on D2L receptors, aripiprazole may also increase dopaminergic activity to optimal levels in the mesocortical pathways where it is reduced.
Pharmacokinetics
Aripiprazole displays linear kinetics and has an elimination half-life of approximately 75 hours. Steady-state plasma concentrations are achieved in about 14 days. Cmax (maximum plasma concentration) is achieved 3–5 hours after oral dosing. Bioavailability of the oral tablets is about 90% and the drug undergoes extensive hepatic metabolism (dehydrogenation, hydroxylation, and N-dealkylation), principally by the enzymes CYP2D6 and CYP3A4. Its only known active metabolite is dehydro-aripiprazole, which typically accumulates to approximately 40% of the aripiprazole concentration. The parent drug is excreted only in traces, and its metabolites, active or not, are excreted via feces and urine.
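As a rough illustration of why steady state takes about two weeks, the following Python sketch assumes simple first-order (linear) accumulation driven by the 75-hour elimination half-life quoted above; it is a didactic simplification, not a dosing model, and dose size and timing are not modelled at all.

```python
import math

def fraction_of_steady_state(days: float, half_life_h: float = 75.0) -> float:
    """Fraction of the steady-state level reached after `days` of regular dosing,
    assuming simple first-order (linear) accumulation."""
    k = math.log(2) / half_life_h          # elimination rate constant, per hour
    return 1 - math.exp(-k * days * 24)

for d in (3, 7, 14, 21):
    print(f"day {d:2d}: {fraction_of_steady_state(d):.0%} of steady state")
# day 14 comes out near 95%, consistent with steady state in about two weeks
```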
Chemistry
Aripiprazole belongs to the chemical class of drugs called 2,3-dichlorophenylpiperazines and is chemically related to cariprazine, nefazodone, etoperidone, and trazodone. It is unusual in having twelve known crystalline polymorphs.
History
Aripiprazole was discovered in 1988 by scientists at the Japanese firm Otsuka Pharmaceutical and was called OPC-14597. It was first published in 1995. Otsuka initially developed the drug, and partnered with Bristol-Myers Squibb (BMS) in 1999 to complete development, obtain approvals, and market aripiprazole.
It was approved by the US Food and Drug Administration (FDA) for schizophrenia in November 2002, and by the European Medicines Agency in June 2004; for acute manic and mixed episodes associated with bipolar disorder on 1 October 2004; as an adjunct for major depressive disorder on 20 November 2007; and to treat irritability in children with autism on 20 November 2009. Likewise it was approved for use as a treatment for schizophrenia by the Therapeutic Goods Administration (TGA) of Australia in May 2003.
Aripiprazole has been approved by the FDA for the treatment of both acute manic and mixed episodes, in people older than ten years.
In 2006, the FDA required manufacturers to add a black box warning to the label, warning that older people who were given the drug for dementia-related psychosis were at greater risk of death.
In 2007, aripiprazole was approved by the FDA for the treatment of unipolar depression when used adjunctively with an antidepressant medication. That same year, BMS settled a case with the US government in which it paid $515 million; the case covered several drugs but the focus was on BMS's off-label marketing of aripiprazole for children and older people with dementia.
In 2011 Otsuka and Lundbeck signed a collaboration to develop a depot formulation of aripiprazole.
As of 2013, Abilify had annual sales of . In 2013 BMS returned marketing rights to Otsuka, but kept manufacturing the drug. Also in 2013, Otsuka and Lundbeck received US and European marketing approval for an injectable depot formulation of aripiprazole.
Otsuka's US patent on aripiprazole expired on 20 October 2014, but due to a pediatric extension, a generic did not become available until 20 April 2015. Barr Laboratories (now Teva Pharmaceuticals) initiated a patent challenge under the Hatch-Waxman Act in March 2007. On 15 November 2010, this challenge was rejected by the U.S. District Court in New Jersey.
Otsuka's European patent EP0367141, which would have expired on 26 October 2009, was extended by a Supplementary Protection Certificate (SPC) to 26 October 2014. The UK Intellectual Property Office decided on 4 March 2015 that the SPC could not be further extended by six months under Regulation (EC) No 1901/2006. Even if the decision is successfully appealed, protection in Europe will not extend beyond 26 April 2015.
From April 2013 to March 2014, sales of Abilify amounted to almost $6.9 billion.
In April 2015, the FDA announced the first generic versions. In October 2015, aripiprazole lauroxil, a prodrug of aripiprazole that is administered via intramuscular injection once every four to six weeks for the treatment of schizophrenia, was approved by the FDA.
In 2016, BMS settled cases with 42 US states that had charged BMS with off-label marketing to older people with dementia; BMS agreed to pay $19.5 million.
In November 2017, the FDA approved Abilify MyCite, a digital pill containing a sensor intended to record when its consumer takes their medication.
A long-acting injectable version of aripiprazole was approved by the FDA for the treatment of bipolar disorder 1 and schizophrenia in April 2023.
In 2024, the European Commission approved the long-acting injectable formulation of aripiprazole for the maintenance treatment of schizophrenia.
Society and culture
Legal status
Classification
Aripiprazole has been described as the prototypical third-generation antipsychotic, as opposed to first-generation (typical) antipsychotics like haloperidol and second-generation (atypical) antipsychotics like clozapine. It has received this classification due to its partial agonism of dopamine receptors, and is the first of its kind in this regard among antipsychotics, which before aripiprazole acted only as dopamine receptor antagonists. The introduction of aripiprazole has led to a paradigm shift from a dopamine antagonist-based approach to a dopamine agonist-based approach for antipsychotic drug development.
Brand names
Brand names of aripiprazole include Abilify, Aristada (as aripiprazole lauroxil), Arip MT, Explemed, and Arivitae, among numerous others.
Research
Attention deficit hyperactivity disorder
Aripiprazole was under development for the treatment of attention-deficit hyperactivity disorder (ADHD), but development for this indication was discontinued. A 2017 meta review found only preliminary evidence (studies with small sample sizes and methodological problems) for aripiprazole in the treatment of ADHD. A 2013 systematic review of aripiprazole for ADHD similarly reported that there is insufficient evidence of effectiveness to support aripiprazole as a treatment for the condition. Although all 6 non-controlled open-label studies in the review reported effectiveness, two small randomized controlled trials found that aripiprazole did not significantly decrease ADHD symptoms. A high rate of adverse effects with aripiprazole such as weight gain, sedation, and headache was noted. Most research on aripiprazole for ADHD is in children and adolescents. Evidence on aripiprazole specifically for adult ADHD appears to be limited to a single case report.
Substance dependence
Aripiprazole has been studied for the treatment of amphetamine dependence and other substance use disorders, but more research is needed to support aripiprazole for these potential uses. Available evidence of aripiprazole for amphetamine dependence is mixed. Some studies have reported attenuation of the effects of amphetamines by aripiprazole, whereas other studies have reported both enhancement of the effects of amphetamines and increased use of amphetamines by aripiprazole. As such, aripiprazole may not only be ineffective but potentially harmful for treatment of amphetamine dependence, and caution is warranted with regard to its use for such purposes.
Other uses
As of May 2021, aripiprazole is in phase III clinical trials for the treatment of agitation and pervasive child development disorders.
References
Further reading
External links
2,3-Dichlorophenylpiperazines
5-HT2A antagonists
5-HT2B antagonists
5-HT2C agonists
5-HT7 antagonists
Alpha-1 blockers
Alpha-2 blockers
Antidepressants
Atypical antipsychotics
Drugs developed by Bristol Myers Squibb
D2 antagonists
D2-receptor agonists
D3 antagonists
D3 receptor agonists
Ethers
H1 receptor antagonists
Mood stabilizers
Otsuka Pharmaceutical
Serotonin-dopamine activity modulators
Tetrahydroquinolines
Treatment of autism
Wikipedia medicine articles ready to translate | Aripiprazole | [
"Chemistry"
] | 6,501 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
311,366 | https://en.wikipedia.org/wiki/American%20wire%20gauge | American Wire Gauge (AWG) is a logarithmic stepped standardized wire gauge system used since 1857, predominantly in North America, for the diameters of round, solid, nonferrous, electrically conducting wire. Dimensions of the wires are given in ASTM standard B 258. The cross-sectional area of each gauge is an important factor for determining its current-carrying capacity.
Origin
The AWG originated in the number of drawing operations used to produce a given gauge of wire. Very fine wire (for example, 30 gauge) required more passes through the drawing dies than 0 gauge wire did. Manufacturers of wire formerly had proprietary wire gauge systems; the development of standardized wire gauges rationalized selection of wire for a particular purpose.
While the AWG is essentially identical to the Brown & Sharpe (B&S) sheet metal gauge, the B&S gauge was designed for use with sheet metals as its name suggests. These are functionally interchangeable but the use of B&S in relation to wire gauges, rather than sheet metal gauges, is technically improper.
Specifications
Increasing gauge numbers denote logarithmically decreasing wire diameters, which is similar to many other non-metric gauging systems such as British Standard Wire Gauge (SWG). However, AWG is dissimilar to IEC 60228, the metric wire-size standard used in most parts of the world, based directly on the wire cross-section area (in square millimetres, mm2).
The AWG tables are for a single, solid and round conductor. The AWG of a stranded wire is determined by the cross-sectional area of the equivalent solid conductor. Because there are also small gaps between the strands, a stranded wire will always have a slightly larger overall diameter than a solid wire with the same AWG.
Formulae
By definition, 36 AWG is 0.005 inches in diameter, and 0000 AWG is 0.46 inches in diameter. The ratio of these diameters is 1:92, and there are 40 gauge sizes from 36 to 0000, or 39 steps. Because each step changes the cross-sectional area by a constant factor, diameters vary geometrically. Any two successive gauges (e.g., A and B) have diameters whose ratio is $92^{1/39}$ (approximately 1.12293), while for gauges two steps apart (e.g., A, B, and C), the ratio of the first to the last is about $1.12293^2 \approx 1.26098$. Similarly, for gauges $n$ steps apart, the ratio of the first to the last gauge diameter is about $1.12293^n$.
The diameter of an AWG wire is determined according to the following formula:

$$d_n = 0.005\,\text{inch} \times 92^{\frac{36-n}{39}}$$

(where $n$ is the AWG size for gauges from 36 to 0, $n = -1$ for 00, $n = -2$ for 000, and $n = -3$ for 0000. See below for the $m/0$ rule.)

or equivalently, in millimetres:

$$d_n = 0.127\,\text{mm} \times 92^{\frac{36-n}{39}}$$

The gauge can be calculated from the diameter using

$$n = 36 - 39 \log_{92}\!\left(\frac{d_n}{0.005\,\text{inch}}\right)$$

and the cross-section area is

$$A_n = \frac{\pi}{4} d_n^2 = 0.000019635\,\text{inch}^2 \times 92^{\frac{36-n}{19.5}} \approx 0.012668\,\text{mm}^2 \times 92^{\frac{36-n}{19.5}}.$$
The standard ASTM B258-02 (2008), Standard Specification for Standard Nominal Diameters and Cross-Sectional Areas of AWG Sizes of Solid Round Wires Used as Electrical Conductors, defines the ratio between successive sizes to be the 39th root of 92, or approximately 1.1229322. ASTM B258-02 also dictates that wire diameters should be tabulated with no more than 4 significant figures, with a resolution of no more than 0.0001 inches (0.1 mils) for wires thicker than 44 AWG, and 0.00001 inches (0.01 mils) for wires 45 AWG and thinner.
Sizes with multiple zeros are successively thicker than 0 AWG and can be denoted using "number of zeros/0", for example 4/0 AWG for 0000 AWG. For an $m$/0 AWG wire, use $n = -(m - 1) = 1 - m$ in the above formulas. For instance, for 0000 AWG or 4/0 AWG, use $n = -3$.
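The relations above are easy to evaluate programmatically. The following Python sketch (an illustration, not part of the ASTM standard) implements the diameter, gauge, and area formulas and checks them against the two defining points, 36 AWG = 0.005 inch and 0000 AWG = 0.46 inch:

```python
import math

def awg_diameter_in(n: float) -> float:
    """Diameter in inches of AWG size n (use n = -1 for 00, -2 for 000, -3 for 0000)."""
    return 0.005 * 92 ** ((36 - n) / 39)

def awg_from_diameter_in(d: float) -> float:
    """Inverse of awg_diameter_in: the gauge number for a diameter given in inches."""
    return 36 - 39 * math.log(d / 0.005, 92)

def awg_area_sq_in(n: float) -> float:
    """Cross-sectional area in square inches of AWG size n (solid round wire)."""
    return math.pi / 4 * awg_diameter_in(n) ** 2

assert abs(awg_diameter_in(36) - 0.005) < 1e-12   # defining point: 36 AWG
assert abs(awg_diameter_in(-3) - 0.46) < 1e-12    # defining point: 0000 AWG
print(round(awg_diameter_in(10), 4))              # ~0.1019 inch
print(round(awg_from_diameter_in(0.1019), 2))     # ~10.0
```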
Rules of thumb
The sixth power of the ratio between successive gauge diameters, $92^{6/39} \approx 1.12293^6 \approx 2.005$, is very close to 2, which leads to the following rules of thumb:
When the cross-sectional area of a wire is doubled, the AWG will decrease by 3. (E.g. two 14 AWG wires have about the same cross-sectional area as a single 11 AWG wire.) This doubles the conductance.
When the diameter of a solid round wire is doubled, the AWG will decrease by 6. (E.g. 1 mm diameter wire is ~18 AWG, 2 mm diameter wire is ~12 AWG, and 4 mm diameter wire is ~6 AWG). This quadruples the cross-sectional area and conductance.
A decrease of ten gauge numbers (E.g. from 12 AWG to 2 AWG) multiplies the area and weight by approximately 10, and reduces the electrical resistance (and increases the conductance) by a factor of approximately 10.
Convenient coincidences result in the following rules of thumb for resistances:
The resistance of copper wire is approximately 1 Ω per 1000 feet for 10 AWG, 1 Ω per 100 feet for 20 AWG, 1 Ω per 10 feet for 30 AWG, and so on.
Because aluminum wire has a conductivity of approximately 61% of copper, an aluminum wire has nearly the same resistance as a copper wire that is two sizes smaller, which has 62.9% of the area.
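As a quick numerical check of the rules of thumb above, the short Python sketch below compares the areas of gauges three sizes apart and estimates the resistance of 1,000 feet of 10 AWG copper. The copper resistivity used (about 1.68×10⁻⁸ Ω·m) is an assumed typical room-temperature value, not a figure taken from this article:

```python
import math

def awg_area_m2(n: float) -> float:
    """Cross-sectional area in square metres of AWG size n (solid round wire)."""
    d_m = 0.005 * 92 ** ((36 - n) / 39) * 0.0254   # diameter converted to metres
    return math.pi / 4 * d_m ** 2

# Dropping three gauge numbers roughly doubles the area (two 14 AWG ~ one 11 AWG).
print(round(awg_area_m2(11) / awg_area_m2(14), 3))      # ~2.005

# Resistance of 1,000 ft (304.8 m) of 10 AWG copper, assuming rho ~ 1.68e-8 ohm*m.
rho_copper = 1.68e-8
print(round(rho_copper * 304.8 / awg_area_m2(10), 2))   # ~0.97 ohm, i.e. about 1 ohm per 1,000 ft
```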
Tables of AWG wire sizes
The table below shows various data including both the resistance of the various wire gauges and the allowable current (ampacity) based on a copper conductor with plastic insulation. The diameter information in the table applies to solid wires. Stranded wires are calculated by calculating the equivalent cross sectional copper area. Fusing current (melting wire) is estimated based on ambient temperature. The table below assumes DC, or AC frequencies equal to or less than 60 Hz, and does not take skin effect into account. "Turns of wire per unit length" is the reciprocal of the conductor diameter; it is therefore an upper limit for wire wound in the form of a helix (see solenoid), based on uninsulated wire.
In the North American electrical industry, conductors thicker than 4/0AWG are generally identified by the area in thousands of circular mils (kcmil), where 1 kcmil = 0.5067 mm2. The next wire size thicker than 4/0 has a cross section of 250 kcmil. A circular mil is the area of a wire one mil in diameter. One million circular mils is the area of a circle with 1,000 mil (1 inch) diameter. An older abbreviation for one thousand circular mils is MCM.
Stranded wire AWG sizes
AWG can also be used to describe stranded wire. The AWG of a stranded wire represents the sum of the cross-sectional areas of the individual strands; the gaps between strands are not counted. When made with circular strands, these gaps occupy about 25% of the wire area, thus requiring the overall bundle diameter to be about 13% larger than a solid wire of equal gauge.
Stranded wires are specified with three numbers, the overall AWG size, the number of strands, and the AWG size of a strand. The number of strands and the AWG of a strand are separated by a slash. For example, a 22AWG 7/30 stranded wire is a 22AWG wire made from seven strands of 30AWG wire.
As indicated in the Formulas and Rules of Thumb sections above, differences in AWG translate directly into ratios of diameter or area. This property can be employed to easily find the AWG of a stranded bundle by measuring the diameter and count of its strands. (This only applies to bundles with circular strands of identical size.) To find the AWG of 7-strand wire with equal strands, subtract 8.4 from the AWG of a strand. Similarly, for 19-strand subtract 12.7, and for 37 subtract 15.6.
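The subtraction constants quoted above follow from the fact that each AWG step scales the cross-sectional area by 92^(2/39) (about 1.261), so combining k equal strands lowers the gauge by the logarithm of k in that base. The Python sketch below (illustrative only) reproduces the constants and the 7/30 example from the previous paragraphs:

```python
import math

AREA_STEP = 92 ** (2 / 39)   # area ratio between adjacent AWG sizes, ~1.261

def stranded_awg(strand_awg: float, strand_count: int) -> float:
    """Equivalent AWG of a bundle of identical strands (sum of the strand areas)."""
    return strand_awg - math.log(strand_count, AREA_STEP)

print(round(stranded_awg(30, 7), 1))        # ~21.6, i.e. a 7/30 bundle is roughly 22 AWG
print(round(math.log(7, AREA_STEP), 1))     # 8.4  -> the "subtract 8.4" rule for 7 strands
print(round(math.log(19, AREA_STEP), 1))    # 12.7 -> the rule for 19 strands
print(round(math.log(37, AREA_STEP), 1))    # 15.6 -> the rule for 37 strands
```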
Measuring strand diameter is often easier and more accurate than attempting to measure bundle diameter and packing ratio. Such measurement can be done with a wire gauge go-no-go tool or with a caliper or micrometer.
Nomenclature and abbreviations in electrical distribution
Alternative ways are commonly used in the electrical industry to specify wire sizes as AWG.
4 AWG (proper)
#4 (the number sign is used as an abbreviation of "number")
№ 4 (the numero sign is used as an abbreviation for "number")
No. 4 (an approximation of the numero is used as an abbreviation for "number")
No. 4 AWG
4 ga. (abbreviation for "gauge")
000 AWG (proper for thick sizes)
3/0 (common for thick sizes) Pronounced "three-aught"
3/0 AWG
#000
Pronunciation
AWG is colloquially referred to as gauge and the zeros in thick wire sizes are referred to as aught. Wire sized 1 AWG is referred to as "one gauge" or "No. 1" wire; similarly, thinner sizes are pronounced "x gauge" or "No. x" wire, where x is the positive-integer AWG number. Consecutive AWG wire sizes thicker than No. 1 wire are designated by the number of zeros:
No. 0, often written 1/0 and referred to as "one aught" wire
No. 00, often written 2/0 and referred to as "two aught" wire
No. 000, often written 3/0 and referred to as "three aught" wire
and so on.
See also
IEC 60228, international standards for wire sizes
French gauge
Brown & Sharpe
Circular mil, North American Electrical industry standard for wires thicker than 4/0.
Birmingham Wire Gauge
Stubs Iron Wire Gauge
Jewelry wire gauge
Body jewelry sizes, which commonly uses AWG (especially for thinner sizes), even when the material is not metallic.
Electrical wiring
Number 8 wire, a term used in the New Zealand vernacular
References
Wire gauges
Customary units of measurement in the United States
Logarithmic scales of measurement | American wire gauge | [
"Physics",
"Mathematics"
] | 2,051 | [
"Quantity",
"Logarithmic scales of measurement",
"Physical quantities"
] |
311,395 | https://en.wikipedia.org/wiki/Distributed%20Language%20Translation | Distributed Language Translation (, DLT) was a project to develop an interlingual machine translation system for twelve European languages. It ran between 1985 and 1990.
The distinctive feature of DLT was the use of [a version of] Esperanto as an intermediate language (IL) and the idea that translation could be divided into two stages: from L1 into IL and then from IL into L2. The intermediate translation could be transmitted over a network to any number of workstations which would take care of the translation from IL into the desired language. Since the IL format would have been disambiguated at the source, it could itself serve as a source for further translation without human intervention. — Job M. van Zuijlen (one of the DLT researchers)
DLT was undertaken by the Dutch software house BSO (now part of Atos Origin) in Utrecht in cooperation with the now defunct Dutch airplane manufacturer Fokker and the Universal Esperanto Association.
A prototype application of DLT in technical translation (through 'AECMA Simplified English', in collaboration with Dutch aircraft manufacturer Fokker) achieved an accuracy rate of around 95 percent. The check covered not only the specific technical vocabulary but also narrow and broad contexts. For more general texts (e.g. reports from UNESCO meetings), the accuracy of the translation was around 50 to 60 percent. BSO failed to attract investment for a further development phase after 1990, and DLT was abandoned unfinished. However, the value of this research project, which according to external experts was very promising, remains in the form of published articles and a whole series of books, detailed and comprehensive enough to support future developments, much in the spirit of "open source".
See also
Indigenous Dialogues
External links
Examples of DLT
References
Translation
Machine translation
Esperanto organizations
Controlled natural languages
1985 establishments in the Netherlands
Projects established in 1985
1985 software
1990 disestablishments in the Netherlands
Products and services discontinued in 1990 | Distributed Language Translation | [
"Technology"
] | 404 | [
"Machine translation",
"Natural language and computing"
] |
311,400 | https://en.wikipedia.org/wiki/Common%20year | A common year is a calendar year with 365 days, as distinguished from a leap year, which has 366 days. More generally, a common year is one without intercalation. The Gregorian calendar (like the earlier Julian calendar) employs both common years and leap years to keep the calendar aligned with the tropical year, which does not contain an exact number of days.
The common year of 365 days has 52 weeks and one day, hence a common year always begins and ends on the same day of the week (for example, January 1 and December 31 will fall on a Wednesday in 2025) and the year following a common year will start on the subsequent day of the week. In common years, February has exactly four weeks, so March begins on the same day of the week as February. November also begins on this day. For example, February 2025 begins on a Saturday, so March 2025 also begins on a Saturday, as does November.
Each common year has 179 even-numbered days and 186 odd-numbered days.
In the Gregorian calendar, 303 out of every 400 years are common years. By comparison, in the Julian calendar, 300 out of every 400 years are common years, and in the Revised Julian calendar (used by Greece) 682 out of every 900 years are common years.
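The counts quoted above can be reproduced with a few lines of Python. The sketch below applies the standard Gregorian leap-year rule (divisible by 4, except century years not divisible by 400) and confirms that 303 of every 400 Gregorian years are common years:

```python
def is_common_year(year: int) -> bool:
    """True for a Gregorian common year (365 days), False for a leap year."""
    is_leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    return not is_leap

# 303 of every 400 Gregorian years are common years.
print(sum(is_common_year(y) for y in range(2001, 2401)))   # 303

# A 365-day year is 52 weeks plus one day, so the next year starts one weekday later.
print(365 % 7)   # 1
```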
Calendars
Common year starting on Monday
Common year starting on Tuesday
Common year starting on Wednesday
Common year starting on Thursday
Common year starting on Friday
Common year starting on Saturday
Common year starting on Sunday
Calendars
Types of year
Units of time | Common year | [
"Physics",
"Mathematics"
] | 311 | [
"Calendars",
"Physical quantities",
"Time",
"Time stubs",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
311,433 | https://en.wikipedia.org/wiki/Obesity%20hypoventilation%20syndrome | Obesity hypoventilation syndrome (OHS) is a condition in which severely overweight people fail to breathe rapidly or deeply enough, resulting in low oxygen levels and high blood carbon dioxide (CO2) levels. The syndrome is often associated with obstructive sleep apnea (OSA), which causes periods of absent or reduced breathing in sleep, resulting in many partial awakenings during the night and sleepiness during the day. The disease puts strain on the heart, which may lead to heart failure and leg swelling.
Obesity hypoventilation syndrome is defined as the combination of obesity and an increased blood carbon dioxide level during the day that is not attributable to another cause of excessively slow or shallow breathing.
The most effective treatment is weight loss, but this may require bariatric surgery to achieve. Weight loss of 25 to 30% is usually required to resolve the disorder. The other first-line treatment is non-invasive positive airway pressure (PAP), usually in the form of continuous positive airway pressure (CPAP) at night. The disease was known initially in the 1950s, as "Pickwickian syndrome" in reference to a Dickensian character.
Signs and symptoms
Most people with obesity hypoventilation syndrome have concurrent obstructive sleep apnea, a condition characterized by snoring, brief episodes of apnea (cessation of breathing) during the night, interrupted sleep and excessive daytime sleepiness. In OHS, sleepiness may be worsened by elevated blood levels of carbon dioxide, which causes drowsiness ("CO2 narcosis"). Other symptoms present in both conditions are depression, and hypertension (high blood pressure) which is difficult to control with medication. The high carbon dioxide can also cause headaches, which tend to be worsening in the morning.
The low oxygen level leads to physiologic constriction of the pulmonary arteries to correct ventilation-perfusion mismatching, which puts excessive strain on the right side of the heart. When this leads to right sided heart failure, it is known as cor pulmonale. Symptoms of this disorder occur because the heart has difficulty pumping blood from the body through the lungs. Fluid may, therefore, accumulate in the skin of the legs in the form of edema (swelling), and in the abdominal cavity in the form of ascites; decreased exercise tolerance and exertional chest pain may occur. On physical examination, characteristic findings are the presence of a raised jugular venous pressure, a palpable parasternal heave, a heart murmur due to blood leaking through the tricuspid valve, hepatomegaly (an enlarged liver), ascites and leg edema. Cor pulmonale occurs in about a third of all people with OHS.
Mechanism
It is not fully understood why some obese people develop obesity hypoventilation syndrome while others do not. It is likely that it is the result of an interplay of various processes. Firstly, work of breathing is increased as adipose tissue restricts the normal movement of the chest muscles and makes the chest wall less compliant, the diaphragm moves less effectively, respiratory muscles are fatigued more easily, and airflow in and out of the lung is impaired by excessive tissue in the head and neck area. Hence, people with obesity need to expend more energy to breathe effectively. These factors together lead to sleep-disordered breathing and inadequate removal of carbon dioxide from the circulation and hence hypercapnia; given that carbon dioxide in aqueous solution combines with water to form an acid (CO2 + H2O → H2CO3), this causes acidosis (increased acidity of the blood). Under normal circumstances, central chemoreceptors in the brain stem detect the acidity, and respond by increasing the respiratory rate; in OHS, this "ventilatory response" is blunted.
The blunted ventilatory response is attributed to several factors. Obese people tend to have raised levels of the hormone leptin, which is secreted by adipose tissue and under normal circumstances increases ventilation. In OHS, this effect is reduced. Furthermore, episodes of nighttime acidosis (e.g. due to sleep apnea) lead to compensation by the kidneys with retention of the alkali bicarbonate. This normalizes the acidity of the blood. However, bicarbonate stays around in the bloodstream for longer, and further episodes of hypercapnia lead to relatively mild acidosis and reduced ventilatory response in a vicious circle.
Low oxygen levels lead to hypoxic pulmonary vasoconstriction, the tightening of small blood vessels in the lung to create an optimal distribution of blood through the lung. Persistently low oxygen levels causing chronic vasoconstriction leads to increased pressure on the pulmonary artery (pulmonary hypertension), which in turn puts strain on the right ventricle, the part of the heart that pumps blood to the lungs. The right ventricle undergoes remodeling, becomes distended and is less able to remove blood from the veins. When this is the case, raised hydrostatic pressure leads to accumulation of fluid in the skin (edema), and in more severe cases the liver and the abdominal cavity.
The chronically low oxygen levels in the blood also lead to increased release of erythropoietin and the activation of erythropoiesis, the production of red blood cells. This results in polycythemia, abnormally increased numbers of circulating red blood cells and an elevated hematocrit.
Diagnosis
Formal criteria for diagnosis of OHS are:
Body mass index over 30 kg/m2 (a measure of obesity, obtained by taking one's weight in kilograms and dividing it by one's height in meters squared)
Arterial carbon dioxide level over 45 mmHg or 6.0 kPa as determined by arterial blood gas measurement
No alternative explanation for hypoventilation, such as use of narcotics, severe obstructive or interstitial lung disease, severe chest wall disorders such as kyphoscoliosis, severe hypothyroidism (underactive thyroid), neuromuscular disease or congenital central hypoventilation syndrome
If OHS is suspected, various tests are required for its confirmation. The most important initial test is the demonstration of elevated carbon dioxide in the blood. This requires an arterial blood gas determination, which involves taking a blood sample from an artery, usually the radial artery. Given that it would be complicated to perform this test on every patient with sleep-related breathing problems, some suggest that measuring bicarbonate levels in normal (venous) blood would be a reasonable screening test. If this is elevated (27 mmol/L or higher), blood gasses should be measured.
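For illustration only (not clinical guidance), the following Python sketch strings together the screening thresholds mentioned in this section — body mass index, serum bicarbonate, and arterial carbon dioxide — using the cut-off values quoted above; the example weight and height are made up:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def ohs_screen(bmi_value: float, bicarbonate_mmol_l: float) -> str:
    """Illustrative screening logic using the thresholds quoted in the text."""
    if bmi_value <= 30:
        return "BMI not above 30 kg/m2: obesity criterion for OHS not met"
    if bicarbonate_mmol_l >= 27:
        return "Elevated bicarbonate: measure arterial blood gases (PaCO2 above 45 mmHg would support OHS)"
    return "Bicarbonate below 27 mmol/L: daytime hypercapnia is less likely"

print(round(bmi(125, 1.70), 1))          # 43.3 kg/m2
print(ohs_screen(bmi(125, 1.70), 29))    # elevated bicarbonate -> check blood gases
```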
To distinguish various subtypes, polysomnography is required. This usually requires brief admission to a hospital with a specialized sleep medicine department where a number of different measurements are conducted while the subject is asleep; this includes electroencephalography (electronic registration of electrical activity in the brain), electrocardiography (same for electrical activity in the heart), pulse oximetry (measurement of oxygen levels) and often other modalities. Blood tests are also recommended for the identification of hypothyroidism and polycythemia.
To distinguish between OHS and various other lung diseases that can cause similar symptoms, medical imaging of the lungs (such as a chest X-ray or CT/CAT scan), spirometry, electrocardiography and echocardiography may be performed. Echo- and electrocardiography may also show strain on the right side of the heart caused by OHS, and spirometry may show a restrictive pattern related to obesity.
Classification
Obesity hypoventilation syndrome is a form of sleep disordered breathing. Two subtypes are recognized, depending on the nature of disordered breathing detected on further investigations. The first is OHS in the context of obstructive sleep apnea; this is confirmed by the occurrence of 5 or more episodes of apnea, hypopnea or respiratory-related arousals per hour (high apnea-hypopnea index) during sleep. The second is OHS primarily due to "sleep hypoventilation syndrome"; this requires a rise of CO2 levels by 10 mmHg (1.3 kPa) after sleep compared to awake measurements and overnight drops in oxygen levels without simultaneous apnea or hypopnea. Overall, 90% of all people with OHS fall into the first category, and 10% in the second.
Treatment
In people with stable OHS, the most important treatment is weight loss—by diet, through exercise, with medication, or sometimes weight loss surgery (bariatric surgery). This has been shown to improve the symptoms of OHS and resolution of the high carbon dioxide levels. Weight loss may take a long time and is not always successful. If the symptoms are significant, nighttime positive airway pressure (PAP) treatment is tried; this involves the use of a machine to assist with breathing. PAP exists in various forms, and the ideal strategy is uncertain. Some medications have been tried to stimulate breathing or correct underlying abnormalities; their benefit is again uncertain.
While many people with obesity hypoventilation syndrome are cared for on an outpatient basis, some deteriorate suddenly and when admitted to the hospital may show severe abnormalities such as markedly deranged blood acidity (pH<7.25) or depressed level of consciousness due to very high carbon dioxide levels. On occasions, admission to an intensive care unit with intubation and mechanical ventilation is necessary. Otherwise, "bi-level" positive airway pressure (see the next section) is commonly used to stabilize the patient, followed by conventional treatment.
Positive airway pressure
Positive airway pressure, initially in the form of continuous positive airway pressure (CPAP), is a useful treatment for obesity hypoventilation syndrome, particularly when obstructive sleep apnea coexists. CPAP requires the use during sleep of a machine that delivers a continuous positive pressure to the airways and preventing the collapse of soft tissues in the throat during breathing; it is administered through a mask on either the mouth and nose together or if that is not tolerated on the nose only (nasal CPAP). This relieves the features of obstructive sleep apnea and is often sufficient to remove the resultant accumulation of carbon dioxide. The pressure is increased until the obstructive symptoms (snoring and periods of apnea) have disappeared. CPAP alone is effective in more than 50% of people with OHS.
In some occasions, the oxygen levels are persistently too low (oxygen saturations below 90%). In that case, the hypoventilation itself may be improved by switching from CPAP treatment to an alternate device that delivers "bi-level" positive pressure: higher pressure during inspiration (breathing in) and a lower pressure during expiration (breathing out). If this too is ineffective in increasing oxygen levels, the addition of oxygen therapy may be necessary. As a last resort, tracheostomy may be necessary; this involves making a surgical opening in the trachea to bypass obesity-related airway obstruction in the neck. This may be combined with mechanical ventilation with an assisted breathing device through the opening.
Other treatments
People who fail first-line treatments or have very severe, life-threatening disease may sometimes be treated with tracheotomy, which is a reversible procedure. Treatments without proven benefit, and concern for harm, include oxygen alone or respiratory stimulant medications. Medroxyprogesterone acetate, a progestin, and acetazolamide are both associated with an increased risk of thrombosis and are not recommended.
Prognosis
Obesity hypoventilation syndrome is associated with a reduced quality of life, and people with the condition incur increased healthcare costs, largely due to hospital admissions including observation and treatment on intensive care units. OHS often occurs together with several other disabling medical conditions, such as asthma (in 18–24%) and type 2 diabetes (in 30–32%). Its main complication of heart failure affects 21–32% of patients.
Those with abnormalities severe enough to warrant treatment have an increased risk of death reported to be 23% over 18 months and 46% over 50 months. This risk is reduced to less than 10% in those receiving treatment with PAP. Treatment also reduces the need for hospital admissions and reduces healthcare costs.
Epidemiology
The exact prevalence of obesity hypoventilation syndrome is unknown, and it is thought that many people with symptoms of OHS have not been diagnosed. About a third of all people with morbid obesity (a body mass index exceeding 40 kg/m2) have elevated carbon dioxide levels in the blood.
When examining groups of people with obstructive sleep apnea, researchers have found that 10–20% of them meet the criteria for OHS as well. The risk of OHS is much higher in those with more severe obesity, i.e. a body mass index (BMI) of 40 kg/m2 or higher. It is twice as common in men compared to women. The average age at diagnosis is 52. American Black people are more likely to be obese than American whites, and are therefore more likely to develop OHS, but obese Asians are more likely than people of other ethnicities to have OHS at a lower BMI as a result of physical characteristics.
It is anticipated that rates of OHS will rise as the prevalence of obesity rises. This may also explain why OHS is more commonly reported in the United States, where obesity is more common than in other countries.
History
The discovery of obesity hypoventilation syndrome is generally attributed to the authors of a 1956 report of a professional poker player who, after gaining weight, became somnolent and fatigued and prone to fall asleep during the day, as well as developing edema of the legs suggesting heart failure. The authors coined the condition "Pickwickian syndrome" after the character Joe from Dickens' The Posthumous Papers of the Pickwick Club (1837), who was markedly obese and tended to fall asleep uncontrollably during the day. This report, however, was preceded by other descriptions of hypoventilation in obesity. In the 1960s, various further discoveries were made that led to the distinction between obstructive sleep apnea and sleep hypoventilation.
The term "Pickwickian syndrome" has fallen out of favor because it does not distinguish obesity hypoventilation syndrome and sleep apnea as separate disorders (which may coexist).
References
Further reading
Medical conditions related to obesity
Sleep disorders
Respiratory diseases
Syndromes affecting the respiratory system | Obesity hypoventilation syndrome | [
"Biology"
] | 3,094 | [
"Behavior",
"Sleep",
"Sleep disorders"
] |
311,440 | https://en.wikipedia.org/wiki/Mammary%20gland | A mammary gland is an exocrine gland in humans and other mammals that produces milk to feed young offspring. Mammals get their name from the Latin word mamma, "breast". The mammary glands are arranged in organs such as the breasts in primates (for example, humans and chimpanzees), the udder in ruminants (for example, cows, goats, sheep, and deer), and the dugs of other animals (for example, dogs and cats). Lactorrhea, the occasional production of milk by the glands, can occur in any mammal, but in most mammals, lactation, the production of enough milk for nursing, occurs only in phenotypic females who have gestated in recent months or years. It is directed by hormonal guidance from sex steroids. In a few mammalian species, male lactation can occur. With humans, male lactation can occur only under specific circumstances.
Mammals are divided into 3 groups: prototherians, metatherians, and eutherians. In the case of prototherians, both males and females have functional mammary glands, but their mammary glands are without nipples. These mammary glands are modified sebaceous glands. Concerning most metatherians and eutherians, only females have functional mammary glands, with the exception of some bat species. Their mammary glands can be termed as breasts or udders. In the case of breasts, each mammary gland has its own nipple (e.g., human mammary glands). In the case of udders, pairs of mammary glands comprise a single mass, with more than one nipple (or teat) hanging from it. For instance, cows and buffalo udders have two pairs of mammary glands and four teats, whereas sheep and goat udders have one pair of mammary glands with two teats protruding from the udder. Each gland produces milk for a single teat. These mammary glands are evolutionarily derived from sweat glands.
Structure
The basic components of a mature mammary gland are the alveoli (hollow cavities, a few millimeters large), which are lined with milk-secreting cuboidal cells and surrounded by myoepithelial cells. These alveoli join to form groups known as lobules. Each lobule has a lactiferous duct that drains into openings in the nipple. The myoepithelial cells contract under the stimulation of oxytocin, excreting the milk secreted by alveolar units into the lobule lumen toward the nipple. As the infant begins to suck, the oxytocin-mediated "let down reflex" ensues, and the mother's milk is secreted—not sucked—from the gland into the infant's mouth.
All the milk-secreting tissue leading to a single lactiferous duct is collectively called a "simple mammary gland"; in a "complex mammary gland", all the simple mammary glands serve one nipple. Humans normally have two complex mammary glands, one in each breast, and each complex mammary gland consists of 10–20 simple glands. The opening of each simple gland on the surface of the nipple is called a "pore." The presence of more than two nipples is known as polythelia and the presence of more than two complex mammary glands as polymastia.
Maintaining the correct polarized morphology of the lactiferous duct tree requires another essential component – the extracellular matrix (ECM) of the mammary epithelial cells, which, together with adipocytes, fibroblasts, inflammatory cells, and others, constitutes the mammary stroma. The mammary epithelial ECM mainly comprises the myoepithelial basement membrane and the connective tissue. These components not only help to support the basic mammary structure, but also serve as a communicating bridge between the mammary epithelia and their local and global environment throughout the organ's development.
Histology
A mammary gland is a specific type of apocrine gland specialized for manufacture of colostrum (first milk) when giving birth. Mammary glands can be identified as apocrine because they exhibit striking "decapitation" secretion. Many sources assert that mammary glands are modified sweat glands.
Development
Mammary glands develop during different growth cycles. They exist in both sexes during the embryonic stage, forming only a rudimentary duct tree at birth. In this stage, mammary gland development depends on systemic (and maternal) hormones, but is also under the (local) regulation of paracrine communication between neighboring epithelial and mesenchymal cells by parathyroid hormone-related protein (PTHrP). This locally secreted factor gives rise to a series of outside-in and inside-out positive feedback between these two types of cells, so that mammary bud epithelial cells can proliferate and sprout down into the mesenchymal layer until they reach the fat pad to begin the first round of branching. At the same time, the embryonic mesenchymal cells around the epithelial bud receive secreting factors activated by PTHrP, such as BMP4. These mesenchymal cells can transform into a dense, mammary-specific mesenchyme, which later develop into connective tissue with fibrous threads, forming blood vessels and the lymph system. A basement membrane, mainly containing laminin and collagen, formed afterward by differentiated myoepithelial cells, keeps the polarity of this primary duct tree. These components of the extracellular matrix are strong determinants of duct morphogenesis.
Biochemistry
Estrogen and growth hormone (GH) are essential for the ductal component of mammary gland development, and act synergistically to mediate it. Neither estrogen nor GH are capable of inducing ductal development without the other. The role of GH in ductal development has been found to be mostly mediated by its induction of the secretion of insulin-like growth factor 1 (IGF-1), which occurs both systemically (mainly originating from the liver) and locally in the mammary fat pad through activation of the growth hormone receptor (GHR). However, GH itself also acts independently of IGF-1 to stimulate ductal development by upregulating estrogen receptor (ER) expression in mammary gland tissue, which is a downstream effect of mammary gland GHR activation. In any case, unlike IGF-1, GH itself is not essential for mammary gland development, and IGF-1 in conjunction with estrogen can induce normal mammary gland development without the presence of GH. In addition to IGF-1, other paracrine growth factors such as epidermal growth factor (EGF), transforming growth factor beta (TGF-β), amphiregulin, fibroblast growth factor (FGF), and hepatocyte growth factor (HGF) are involved in breast development as mediators downstream to sex hormones and GH/IGF-1.
During embryonic development, IGF-1 levels are low, and gradually increase from birth to puberty. At puberty, the levels of GH and IGF-1 reach their highest levels in life and estrogen begins to be secreted in high amounts in females, which is when ductal development mostly takes place. Under the influence of estrogen, stromal and fat tissue surrounding the ductal system in the mammary glands also grows. After puberty, GH and IGF-1 levels progressively decrease, which limits further development until pregnancy, if it occurs. During pregnancy, progesterone and prolactin are essential for mediating lobuloalveolar development in estrogen-primed mammary gland tissue, which occurs in preparation of lactation and nursing.
Androgens such as testosterone inhibit estrogen-mediated mammary gland development (e.g., by reducing local ER expression) through activation of androgen receptors expressed in mammary gland tissue, and in conjunction with relatively low estrogen levels, are the cause of the lack of developed mammary glands in males.
Timeline
Before birth
Mammary gland development is characterized by the unique process by which the epithelium invades the stroma. The development of the mammary gland occurs mainly after birth. During puberty, tubule formation is coupled with branching morphogenesis which establishes the basic arboreal network of ducts emanating from the nipple.
Developmentally, mammary gland epithelium is constantly produced and maintained by rare epithelial cells, dubbed as mammary progenitors which are ultimately thought to be derived from tissue-resident stem cells.
Embryonic mammary gland development can be divided into a series of specific stages. Initially, the formation of the milk lines that run between the fore and hind limbs bilaterally on each side of the midline occurs around embryonic day 10.5 (E10.5). The second stage occurs at E11.5, when placode formation begins along the mammary milk line. This will eventually give rise to the nipple. Lastly, the third stage occurs at E12.5 and involves the invagination of cells within the placode into the mesenchyme, leading to the formation of a mammary anlage.
Primitive (stem) cells are detectable in the embryo, and their numbers increase steadily during development.
Growth
Postnatally, the mammary ducts elongate into the mammary fat pad. Then, starting around four weeks of age, mammary ductal growth increases significantly with the ducts invading towards the lymph node. Terminal end buds, the highly proliferative structures found at the tips of the invading ducts, expand and increase greatly during this stage. This developmental period is characterized by the emergence of the terminal end buds and lasts until an age of about 7–8 weeks.
By the pubertal stage, the mammary ducts have invaded to the end of the mammary fat pad. At this point, the terminal end buds become less proliferative and decrease in size. Side branches form from the primary ducts and begin to fill the mammary fat pad. Ductal development decreases with the arrival of sexual maturity, and the gland then undergoes estrous cycles (proestrus, estrus, metestrus, and diestrus). As a result of estrous cycling, the mammary gland undergoes dynamic changes in which cells proliferate and then regress in an ordered fashion.
Pregnancy
During pregnancy, the ductal systems undergo rapid proliferation and form alveolar structures within the branches to be used for milk production. After delivery, lactation occurs within the mammary gland; lactation involves the secretion of milk by the luminal cells in the alveoli. Contraction of the myoepithelial cells surrounding the alveoli will cause the milk to be ejected through the ducts and into the nipple for the nursing infant. Upon weaning of the infant, lactation stops and the mammary gland turns in on itself, a process called involution. This process involves the controlled collapse of mammary epithelial cells where cells begin apoptosis in a controlled manner, reverting the mammary gland back to a pubertal state.
Postmenopausal
During postmenopause, due to much lower levels of estrogen, and due to lower levels of GH and IGF-1, which decrease with age, mammary gland tissue atrophies and the mammary glands become smaller.
Physiology
Hormonal control
Lactiferous duct development occurs in females in response to circulating hormones. First development is frequently seen during the pre- and postnatal stages, and later during puberty. Estrogen promotes branching differentiation, whereas in males testosterone inhibits it. A mature duct tree reaching the limit of the fat pad of the mammary gland comes into being by bifurcation of duct terminal end buds (TEBs), secondary branches sprouting from primary ducts, and proper duct lumen formation. These processes are tightly modulated by components of the mammary epithelial ECM interacting with systemic hormones and locally secreted factors. However, for each mechanism the epithelial cells' "niche" can be subtly unique, with different membrane receptor profiles and basement membrane thickness from one branching area to another, so as to regulate cell growth or differentiation sub-locally. Important players include beta-1 integrin, epidermal growth factor receptor (EGFR), laminin-1/5, collagen-IV, matrix metalloproteinases (MMPs), heparan sulfate proteoglycans, and others. Elevated circulating levels of growth hormone and estrogen reach multipotent cap cells on the TEB tips through a thin, leaky layer of basement membrane. These hormones promote specific gene expression; hence cap cells can differentiate into myoepithelial and luminal (duct) epithelial cells, and the increased amount of activated MMPs can degrade the surrounding ECM, helping duct buds to reach further into the fat pad. On the other hand, the basement membrane along mature mammary ducts is thicker, with strong adhesion to epithelial cells via binding to integrin and non-integrin receptors. When side branches develop, the process is much more one of "pushing forward", involving extension through myoepithelial cells, degradation of the basement membrane, and invasion into a periductal layer of fibrous stromal tissue. Degraded basement membrane fragments (laminin-5) act to guide the migration of mammary epithelial cells, whereas laminin-1, interacting with the non-integrin receptor dystroglycan, negatively regulates this side-branching process in the case of cancer. These complex "yin-yang" balancing crosstalks between the mammary ECM and epithelial cells "instruct" healthy mammary gland development until adulthood.
There is preliminary evidence that soybean intake mildly stimulates the breast glands in pre- and postmenopausal women.
Pregnancy
Secretory alveoli develop mainly in pregnancy, when rising levels of prolactin, estrogen, and progesterone cause further branching, together with an increase in adipose tissue and a richer blood flow. In gestation, serum progesterone remains at a stably high concentration, so signaling through its receptor is continuously activated. Among the transcribed genes, Wnts secreted from mammary epithelial cells act in a paracrine fashion to induce branching of neighboring cells. When the lactiferous duct tree is almost complete, leaf-like alveoli are differentiated from luminal epithelial cells and added at the end of each branch. In late pregnancy and for the first few days after giving birth, colostrum is secreted. Milk secretion (lactation) begins a few days later due to a reduction in circulating progesterone and the presence of another important hormone, prolactin, which mediates further alveologenesis and milk protein production, and regulates osmotic balance and tight junction function. The interaction of laminin and collagen in the myoepithelial basement membrane with beta-1 integrin on the epithelial surface is, again, essential in this process. Their binding ensures correct placement of prolactin receptors on the basolateral side of alveolar cells and directional secretion of milk into the lactiferous ducts. Suckling of the baby causes release of the hormone oxytocin, which stimulates contraction of the myoepithelial cells. Under this combined control from the ECM and systemic hormones, milk secretion can be reciprocally amplified so as to provide enough nutrition for the baby.
Weaning
During weaning, decreased prolactin, the absence of mechanical stimulation (baby suckling), and changes in osmotic balance caused by milk stasis and leaking of tight junctions cause cessation of milk production. Weaning is the (passive) process of a child or animal ceasing to be dependent on the mother for nourishment. In some species there is complete or partial involution of alveolar structures after weaning; in humans there is only partial involution, and the level of involution appears to be highly individual. The glands in the breast secrete fluid even in non-lactating women. In some other species (such as cows), all alveoli and secretory duct structures collapse by programmed cell death (apoptosis) and autophagy for lack of growth-promoting factors either from the ECM or from circulating hormones. At the same time, apoptosis of blood capillary endothelial cells speeds the regression of the lactation ductal beds. Shrinkage of the mammary duct tree and ECM remodeling by various proteinases is under the control of somatostatin and other growth-inhibiting hormones and local factors. This major structural change leaves loose fat tissue to fill the empty space afterward. A functional lactiferous duct tree can, however, form again when a female becomes pregnant again.
Clinical significance
Tumorigenesis in mammary glands can be induced biochemically by abnormal expression level of circulating hormones or local ECM components, or from a mechanical change in the tension of mammary stroma. Under either of the two circumstances, mammary epithelial cells would grow out of control and eventually result in cancer. Almost all instances of breast cancer originate in the lobules or ducts of the mammary glands.
Other mammals
General
The breasts of female humans vary from most other mammals that tend to have less conspicuous mammary glands. The number and positioning of mammary glands varies widely in different mammals. The protruding teats and accompanying glands can be located anywhere along the two milk lines. In general most mammals develop mammary glands in pairs along these lines, with a number approximating the number of young typically birthed at a time. The number of teats varies from 2 (in most primates) to 18 (in pigs). The Virginia opossum has 13, one of the few mammals with an odd number. The following table lists the number and position of teats and glands found in a range of mammals:
Male mammals typically have rudimentary mammary glands and nipples, with a few exceptions: male mice do not have nipples, male marsupials do not have mammary glands, and male horses lack nipples. The male dayak fruit bat has lactating mammary glands. Male lactation occurs infrequently in some species.
Mammary glands are true protein factories, and several labs have constructed transgenic animals, mainly goats and cows, to produce proteins for pharmaceutical use. Complex glycoproteins such as monoclonal antibodies or antithrombin cannot be produced by genetically engineered bacteria, and the production in live mammals is much cheaper than the use of mammalian cell cultures.
Evolution
There are many theories on how mammary glands evolved. For example, it is thought that the mammary gland is a transformed sweat gland, more closely related to apocrine sweat glands. Because mammary glands do not fossilize well, supporting such theories with fossil evidence is difficult. Many of the current theories are based on comparisons between lines of living mammals—monotremes, marsupials, and eutherians. One theory proposes that mammary glands evolved from glands that were used to keep the eggs of early mammals moist and free from infection (monotremes still lay eggs). Other theories suggest that early secretions were used directly by hatched young, or that the secretions were used by young to help them orient to their mothers.
Lactation is thought to have developed long before the evolution of the mammary gland and mammals; see evolution of lactation.
Additional images
See also
Breastfeeding
Mammary tumor
Mammaglobin
Gynecomastia
Hypothalamic–pituitary–prolactin axis
Udder
Witch's milk
Milk line
List of glands of the human body#Skin
List of distinct cell types in the adult human body
References
Bibliography
Moore, Keith L. et al. (2010) Clinically Oriented Anatomy 6th Ed
External links
Comparative Mammary Gland Anatomy by W. L. Hurley
On the anatomy of the breast by Sir Astley Paston Cooper (1840). Numerous drawings, in the public domain.
Breast anatomy
Exocrine system
Glands
Mammal anatomy
Human female endocrine system | Mammary gland | [
"Biology"
] | 4,340 | [
"Exocrine system",
"Organ systems"
] |
311,441 | https://en.wikipedia.org/wiki/Data%20rate | Data rate and data transfer rate can refer to several related and overlapping concepts in communications networks:
Achieved rate
Bit rate, the number of bits that are conveyed or processed per unit of time
Data signaling rate or gross bit rate, a bit rate that includes protocol overhead
Symbol rate or baud rate, the number of symbol changes, waveform changes, or signaling events across the transmission medium per unit of time
Data-rate units, measures of the bit rate or baud rate of a link
Data transfer rate (disk drive), a data rate specific to disk drive operations
Throughput, the rate of successful message delivery, or level of bandwidth consumption
Transfers per second
Capacity
Bandwidth (computing), the maximum rate of data transfer across a given path
Channel capacity, an information-theoretic upper bound on the rate at which data can be reliably transmitted, given noise on a channel
Temporal rates
Broad-concept articles | Data rate | [
"Physics"
] | 183 | [
"Temporal quantities",
"Temporal rates",
"Physical quantities"
] |
311,507 | https://en.wikipedia.org/wiki/Opodeldoc | Opodeldoc is a medical plaster or liniment invented, or at least named, by the German Renaissance physician Paracelsus in the 1500s. In modern form opodeldoc is a mixture of soap in alcohol, to which camphor and sometimes a number of herbal essences, most notably wormwood, are added.
Origins
In his Bertheonea Sive Chirurgia Minor published in 1603, Paracelsus mentioned "oppodeltoch" twice, but with uncertain ingredients.
As to the origin of the name, Kurt Peters speculated that it was coined by Paracelsus from syllables from the words "opoponax, bdellium, and aristolochia." Opoponax is a variety of myrrh; bdellium is Commiphora wightii, which produces a similar resin; and Aristolochia is a widely distributed genus which includes A. pfeiferi, A. rugosa and A. trilobata that are used in folk medicine to cure snakebites. The name suggests that these aromatic plants may have figured in Paracelsus's recipe.
In his Medicina Militaris of 1620, German military physician Raymund Minderer ("Mindererus"; 1570-1621) praised the Paracelsus compound as a plaster, good for wounds. Minderer compared it to his own variant, which set more like sealing wax. Opodeldoc and Paracelsus were acknowledged in English no later than 1646, in Sir Thomas Browne's popular and influential Pseudodoxia Epidemica.
Paracelsus's recipe is completely unrelated to later preparations of the same name. By the second printing of the Edinburgh Pharmacopoeia in 1722 the name applied to a soap-based liniment. Such a liniment in patent form, sold by John Newbery's company in Great Britain "ever since A.D. 1786", was called "Dr. Steer's Opodeldoc". Produced for decades, the "Dr. Steer" preparation had been successfully imported into the U.S., and was common enough there to rank as one of the eight patent medicines to be analyzed (although not condemned) by the Philadelphia College of Pharmacy in 1824.
The name Old Opodeldoc was formerly used as a standard name for a stock character who was a physician, especially when played as a comic figure. Edgar Allan Poe used "Oppodeldoc" as a pseudonym for a character in the short story "The Literary Life of Thingum Bob, Esq."
Modern usage
The Pharmacopoeia of the United States (U.S.P.) gives a recipe for opodeldoc that contains:
Powdered soap, 60 grams;
Camphor, 45 grams;
Oil of rosemary, 10 milliliters;
Alcohol, 700 milliliters;
Water, enough to make 1000 milliliters
As late as the early 1990s 'Epideldoc' (sic) was compounded on request by several pharmacists in the Northwest of England.
References
Ointments | Opodeldoc | [
"Chemistry"
] | 634 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
311,509 | https://en.wikipedia.org/wiki/Bounded%20function | In mathematics, a function defined on some set with real or complex values is called bounded if the set of its values is bounded. In other words, there exists a real number such that
for all in . A function that is not bounded is said to be unbounded.
If is real-valued and for all in , then the function is said to be bounded (from) above by . If for all in , then the function is said to be bounded (from) below by . A real-valued function is bounded if and only if it is bounded from above and below.
An important special case is a bounded sequence, where is taken to be the set of natural numbers. Thus a sequence is bounded if there exists a real number such that
for every natural number . The set of all bounded sequences forms the sequence space .
The definition of boundedness can be generalized to functions taking values in a more general space by requiring that the image is a bounded set in .
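Stated compactly in displayed form (a restatement of the definition above, not additional material):

    f \colon X \to \mathbb{R} \text{ (or } \mathbb{C}\text{) is bounded}
      \iff \exists\, M \in \mathbb{R} \;\; \forall x \in X : \; |f(x)| \le M ,
    \qquad
    (a_n)_{n \in \mathbb{N}} \text{ is bounded} \iff \sup_{n \in \mathbb{N}} |a_n| < \infty .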
Related notions
Weaker than boundedness is local boundedness. A family of bounded functions may be uniformly bounded.
A bounded operator T : X → Y is not a bounded function in the sense of this page's definition (unless T = 0), but has the weaker property of preserving boundedness; bounded sets M ⊆ X are mapped to bounded sets T(M) ⊆ Y. This definition can be extended to any function f : X → Y if X and Y allow for the concept of a bounded set. Boundedness can also be determined by looking at a graph.
Examples
The sine function sin : R → R is bounded since |sin(x)| ≤ 1 for all real x.
The function f(x) = 1/(x² − 1), defined for all real x except for −1 and 1, is unbounded. As x approaches −1 or 1, the values of this function get larger in magnitude. This function can be made bounded if one restricts its domain to be, for example, [2, ∞) or (−∞, −2].
The function f(x) = 1/(x² + 1), defined for all real x, is bounded, since 0 < f(x) ≤ 1 for all x.
The inverse trigonometric function arctangent, defined as y = arctan(x) or x = tan(y), is increasing for all real numbers x and bounded, with −π/2 < y < π/2 radians.
By the boundedness theorem, every continuous function on a closed interval, such as [0, 1], is bounded. More generally, any continuous function from a compact space into a metric space is bounded.
All complex-valued functions f : C → C which are entire are either unbounded or constant as a consequence of Liouville's theorem. In particular, the complex sine function sin : C → C must be unbounded since it is entire.
The function f which takes the value 0 for x a rational number and 1 for x an irrational number (cf. Dirichlet function) is bounded. Thus, a function does not need to be "nice" in order to be bounded. The set of all bounded functions defined on [0, 1] is much larger than the set of continuous functions on that interval. Moreover, continuous functions need not be bounded; for example, the function g defined on all of R by g(x) = x and the function h defined on the open interval (0, 1) by h(x) = 1/x are both continuous, but neither is bounded. (However, a continuous function must be bounded if its domain is both closed and bounded.)
See also
Bounded set
Compact support
Local boundedness
Uniform boundedness
References
Complex analysis
Real analysis
Types of functions | Bounded function | [
"Mathematics"
] | 603 | [
"Mathematical objects",
"Functions and mappings",
"Types of functions",
"Mathematical relations"
] |
311,518 | https://en.wikipedia.org/wiki/Ecash | Ecash was conceived by David Chaum as an anonymous cryptographic electronic money or electronic cash system in 1982. It was realized through his corporation Digicash and used as micropayment system at one US bank from 1995 to 1998.
Design
Chaum published the idea of anonymous electronic money in a 1983 paper; eCash software on the user's local computer stored money in a digital format, cryptographically signed by a bank. The user could spend the digital money at any shop accepting eCash, without having to open an account with the vendor first, or transmitting credit card numbers. Security was ensured by public key digital signature schemes. The RSA blind signatures achieved unlinkability between withdrawal and spend transactions. Depending on the payment transactions, one distinguishes between on-line and off-line electronic cash: If the payee has to contact a third party (e.g., the bank or the credit-card company acting as an acquirer) before accepting a payment, the system is called an on-line system. In 1990, Chaum together with Moni Naor proposed the first off-line e-cash system, which was also based on blind signatures.
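The arithmetic behind RSA blind signatures can be illustrated with the toy sketch below, which uses deliberately tiny parameters; the numbers, variable names, and simplifications are illustrative assumptions and do not reflect DigiCash's actual protocol or key sizes.

    def egcd(a, b):
        # Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g.
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    def modinv(a, m):
        # Modular inverse of a modulo m (requires gcd(a, m) == 1).
        g, x, _ = egcd(a, m)
        assert g == 1
        return x % m

    # Bank's toy RSA key pair (tiny illustrative primes, not realistic sizes).
    p, q = 61, 53
    n = p * q                              # modulus, 3233
    e = 17                                 # public exponent
    d = modinv(e, (p - 1) * (q - 1))       # private exponent

    m = 123                                # the "coin" the user wants signed
    r = 7                                  # user's secret blinding factor

    blinded = (m * pow(r, e, n)) % n       # user blinds the coin
    blind_sig = pow(blinded, d, n)         # bank signs without ever seeing m
    sig = (blind_sig * modinv(r, n)) % n   # user removes the blinding factor

    # The unblinded value is an ordinary RSA signature on m, yet the bank
    # cannot link it to the blinded message it actually signed.
    assert pow(sig, e, n) == m

In an on-line system of the kind described above, the bank would additionally keep a list of spent coins so that the same signature cannot be deposited twice.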
History
Chaum started the company DigiCash in 1989 with "ecash" as its trademark. He raised $10 million from David Marquardt, and by 1997 Nicholas Negroponte was its chairman. Yet, in the United States, only one bank, the Mark Twain Bank in Saint Louis, Missouri, implemented ecash, testing it as a micropayment system. Similar to credit cards, the system was free to purchasers, while merchants paid a transaction fee. After a three-year trial that signed up merely 5,000 customers, the system was dissolved in 1998, one year after the bank had been purchased by Mercantile Bank, a large issuer of credit cards. David Chaum remarked at the time: "As the Web grew, the average level of sophistication of users dropped. It was hard to explain the importance of privacy to them".
In Europe, with fewer credit cards and more cash transactions, micropayment technologies made more sense. In June 1998, ecash became available through Credit Suisse in Switzerland, was available from Deutsche Bank in Germany, Bank Austria, Sweden's Posten AB, and Den norske Bank of Norway, while in Japan Nomura Research Institute marketed eCash to financial institutions.
In Australia, ecash was implemented by St.George Bank and Advance Bank, but transactions were not free to purchasers. In Finland Merita Bank/EUnet made ecash available.
DigiCash went bankrupt in 1998, despite flourishing electronic commerce, but with credit cards as the "currency of choice".
DigiCash was sold to eCash Technologies, including its eCash patents.
In 2000 eCash Technologies sued eCash.com, alleging trademark infringement and unfair competition. eCash.com counterclaimed that eCash Technologies' trademark registration was fraudulently obtained, because it failed to disclose eCash.com's registration of the "ecash.com" domain name to the U.S. Patent and Trademark Office. The court rejected eCash.com's counterclaim, saying a trademark applicant must disclose a third party's rights only if they are "clearly established". Because the mere registration of a domain name does not confer trademark rights, let alone "clearly established" rights, the court held that eCash Technologies had no duty to disclose the defendant's registration of the "ecash.com" domain name to the PTO. eCash Technologies nevertheless subsequently went bankrupt, and the domain ecash.com remained in the possession of its original owner.
In 2002 eCash Technologies was acquired by InfoSpace, currently known as Blucora. As of 2015, the term eCash is used for digital cash that can be stored on an electronic card or accessed through online or alternative payment portals and mobile applications.
See also
E-commerce
References
Literature
Schneier, Bruce. Applied Cryptography, Second Edition, John Wiley & Sons, 1996. (Chapter 6.4)
Richard A. Mollin: RSA and Public-key Cryptography. p. 143-148. 2002, , .
Goldwasser, S. and Bellare, M. "Lecture Notes on Cryptography" . Summer course on cryptography, MIT, 1996-2001. pp. 233.
External links
"Detecting Double-Spending" -(Hal Finney's introduction to Chaumian digital cash)
"eCash.com" -(eCash.com)
Cryptographic protocols
Digital currencies
Financial cryptography
Payment systems | Ecash | [
"Engineering"
] | 941 | [
"Financial cryptography",
"Cybersecurity engineering"
] |
311,544 | https://en.wikipedia.org/wiki/Kenneth%20Appel | Kenneth Ira Appel (October 8, 1932 – April 19, 2013) was an American mathematician who in 1976, with colleague Wolfgang Haken at the University of Illinois at Urbana–Champaign, solved the four-color theorem, one of the most famous problems in mathematics. They proved that any two-dimensional map, with certain limitations, can be filled in with four colors without any adjacent "countries" sharing the same color.
Biography
Appel was born in Brooklyn, New York, on October 8, 1932. He grew up in Queens, New York, and was the son of a Jewish couple, Irwin Appel and Lillian Sender Appel. He worked as an actuary for a brief time and then served in the U.S. Army for two years at Fort Benning, Georgia, and in Baumholder, Germany. In 1959, he finished his doctoral program at the University of Michigan, and he also married Carole S. Stein in Philadelphia. The couple moved to Princeton, New Jersey, where Appel worked for the Institute for Defense Analyses from 1959 to 1961. His main work at the Institute for Defense Analyses was doing research in cryptography. Toward the end of his life, in 2012, he was elected a Fellow of the American Mathematical Society. He died in Dover, New Hampshire, on April 19, 2013, after being diagnosed with esophageal cancer in October 2012.
Kenneth Appel was also the treasurer of the Strafford County Democratic Committee. He played tennis through his early 50s. He was a lifelong stamp collector, a player of the game of Go and a baker of bread. He and Carole had two sons, Andrew W. Appel, a noted computer scientist, and Peter H. Appel, and a daughter, Laurel F. Appel, who died on March 4, 2013. He was also a member of the Dover school board from 2010 until his death.
Schooling and teaching
Kenneth Appel received his bachelor's degree from Queens College in 1953. After serving the army he attended the University of Michigan where he earned his M.A. in 1956, and then later his Ph.D. in 1959. Roger Lyndon, his doctoral advisor, was a mathematician whose main mathematical focus was in group theory.
After working for the Institute for Defense Analyses, in 1961 Appel joined the Mathematics Department faculty at the University of Illinois as an Assistant Professor. While there, Appel conducted research in group theory and computability theory. In 1967 he became an associate professor and in 1977 was promoted to professor. It was while he was at this university that he and Wolfgang Haken proved the four color theorem. For this work they were awarded the Delbert Ray Fulkerson Prize in 1979 by the American Mathematical Society and the Mathematical Programming Society.
While at the University of Illinois Appel took on five students during their doctoral program. Each student helped contribute to the work cited on the Mathematics Genealogy Project.
In 1993 Appel moved to New Hampshire as Chairman of the Mathematics Department at the University of New Hampshire. In 2003 he retired as professor emeritus. During his retirement he volunteered in mathematics enrichment programs in Dover and in southern Maine public schools. He believed "that students should be afforded the opportunity to study mathematics at the level of their ability, even if it is well above their grade level."
Contributions to mathematics
The four color theorem
Kenneth Appel is known for his work in topology, the branch of mathematics that explores certain properties of geometric figures. His biggest accomplishment was proving the four color theorem in 1976 with Wolfgang Haken. The New York Times wrote in 1976:
Now the four-color conjecture has been proved by two University of Illinois mathematicians, Kenneth Appel and Wolfgang Haken. They had an invaluable tool that earlier mathematicians lacked—modern computers. Their present proof rests in part on 1,200 hours of computer calculation during which about ten billion logical decisions had to be made. The proof of the four-color conjecture is unlikely to be of applied significance. Nevertheless, what has been accomplished is a major intellectual feat. It gives us an important new insight into the nature of two-dimensional space and of the ways in which such space can be broken into discrete portions.
At first, many mathematicians were unhappy with the fact that Appel and Haken were using computers, since this was new at the time, and even Appel said, "Most mathematicians, even as late as the 1970s, had no real interest in learning about computers. It was almost as if those of us who enjoyed playing with computers were doing something non-mathematical or suspect." The actual proof was described in an article as long as a typical book titled Every Planar Map is Four Colorable, Contemporary Mathematics, vol. 98, American Mathematical Society, 1989.
The proof has been one of the most controversial of modern mathematics because of its heavy dependence on computer number-crunching to sort through possibilities, which drew criticism from many in the mathematical community for its inelegance: "a good mathematical proof is like a poem—this is a telephone directory!" Appel and Haken agreed in a 1977 interview that it was not "elegant, concise, and completely comprehensible by a human mathematical mind".
Nevertheless, the proof was the start of a change in mathematicians' attitudes toward computers—which they had largely disdained as a tool for engineers rather than for theoreticians—leading to the creation of what is sometimes called experimental mathematics.
Group theory
Kenneth Appel's other publications include an article with P.E. Schupp titled Artin Groups and Infinite Coxeter Groups. In this article Appel and Schupp introduced four theorems that are true about Coxeter groups and then proved them to be true for Artin groups. The proofs of these four theorems used the "results and methods of small cancellation theory."
References
External links
Kenneth I. Appel Biography
Author profile in the database zbMATH
1932 births
University of Michigan alumni
20th-century American mathematicians
21st-century American mathematicians
Graph theorists
University of Illinois Urbana-Champaign faculty
Fellows of the American Mathematical Society
2013 deaths
Mathematicians from New York (state)
Scientists from Brooklyn | Kenneth Appel | [
"Mathematics"
] | 1,254 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
311,596 | https://en.wikipedia.org/wiki/Allicin | Allicin is an organosulfur compound obtained from garlic and leeks. When fresh garlic is chopped or crushed, the enzyme alliinase converts alliin into allicin, which is responsible for the aroma of fresh garlic. Allicin is unstable and quickly changes into a series of other sulfur-containing compounds such as diallyl disulfide. Allicin is an antifeedant, i.e. the defense mechanism against attacks by pests on the garlic plant.
Allicin is an oily, slightly yellow liquid that gives garlic its distinctive odor. It is a thioester of sulfenic acid. It is also known as allyl thiosulfinate. Its biological activity can be attributed to both its antioxidant activity and its reaction with thiol-containing proteins.
Structure and occurrence
Allicin features the thiosulfinate functional group, R-S-(O)-S-R. The compound is not present in garlic unless tissue damage occurs, and is formed by the action of the enzyme alliinase on alliin. Allicin is chiral but occurs naturally only as a racemate. The racemic form can also be generated by oxidation of diallyl disulfide:
(SCH2CH=CH2)2 + 2 RCO3H + H2O → 2 CH2=CHCH2SOH + 2 RCO2H
2 CH2=CHCH2SOH → CH2=CHCH2S(O)SCH2CH=CH2 + H2O
Alliinase is irreversibly deactivated below pH 3; as such, allicin is generally not produced in the body from the consumption of fresh or powdered garlic. Furthermore, allicin can be unstable, breaking down within 16 hours at 23 °C.
Biosynthesis
The biosynthesis of allicin commences with the conversion of cysteine into S-allyl-L-cysteine. Oxidation of this thioether gives the sulfoxide (alliin). The enzyme alliinase, which contains pyridoxal phosphate (PLP), cleaves alliin, generating allylsulfenic acid (CH2=CHCH2SOH), pyruvate, and ammonium ions. At room temperature, two molecules of allylsulfenic acid condense to form allicin.
Research
Allicin has been studied for its potential to treat various kinds of multiple drug resistance bacterial infections, as well as viral and fungal infections in vitro, but as of 2016, the safety and efficacy of allicin to treat infections in people was unclear.
A Cochrane review found there to be insufficient clinical evidence regarding the effects of allicin in preventing or treating common cold.
History
It was first isolated and studied in the laboratory by Chester J. Cavallito and John Hays Bailey in 1944.
Allicin was discovered as part of efforts to create thiamine derivatives in the 1940s, mainly in Japan. Allicin became a model for medicinal chemistry efforts to create other thiamine disulfides. The results included sulbutiamine, fursultiamine (thiamine tetrahydrofurfuryl disulfide) and benfotiamine. These compounds are hydrophobic, easily pass from the intestines to the bloodstream, and are reduced to thiamine by cysteine or glutathione.
See also
Allyl isothiocyanate, the active piquant chemical in mustard, radishes, horseradish and wasabi
syn-Propanethial-S-oxide, the lachrymatory chemical found in onions
List of phytochemicals in food
References
Thiosulfinates
Anti-inflammatory agents
Antibiotics
Dietary antioxidants
Pungent flavors
Allium
Garlic
Antifungals
Allyl compounds
Transient receptor potential channel modulators | Allicin | [
"Chemistry",
"Biology"
] | 820 | [
"Biotechnology products",
"Functional groups",
"Antibiotics",
"Biocides",
"Thiosulfinates"
] |
311,632 | https://en.wikipedia.org/wiki/Video%20game%20programmer | A game programmer is a software engineer, programmer, or computer scientist who primarily develops codebases for video games or related software, such as game development tools. Game programming has many specialized disciplines, all of which fall under the umbrella term of "game programmer". A game programmer should not be confused with a game designer, who works on game design.
History
In the early days of video games (from the early 1970s to mid-1980s), a game programmer also took on the job of a designer and artist. This was generally because the abilities of early computers were so limited that having specialized personnel for each function was unnecessary. Game concepts were generally light and games were only meant to be played for a few minutes at a time, but more importantly, art content and variations in gameplay were constrained by computers' limited power.
Later, as specialized arcade hardware and home systems became more powerful, game developers could develop deeper storylines and could include such features as high-resolution and full color graphics, physics, advanced artificial intelligence and digital sound. Technology has advanced to such a great degree that contemporary games usually boast 3D graphics and full motion video using assets developed by professional graphic artists. Nowadays, the derogatory term "programmer art" has come to imply the kind of bright colors and blocky design that were typical of early video games.
The desire for adding more depth and assets to games necessitated a division of labor. Initially, art production was relegated to full-time artists. Next game programming became a separate discipline from game design. Now, only some games, such as the puzzle game Bejeweled, are simple enough to require just one full-time programmer. Despite this division, however, most game developers (artists, programmers and even producers) have some say in the final design of contemporary games.
Disciplines
A contemporary video game may include advanced physics, artificial intelligence, 3D graphics, digitised sound, an original musical score, complex strategy and may use several input devices (such as mice, keyboards, gamepads and joysticks) and may be playable against other people via the Internet or over a LAN. Each aspect of the game can consume all of one programmer's time and, in many cases, several programmers. Some programmers may specialize in one area of game programming, but many are familiar with several aspects. The number of programmers needed for each feature depends somewhat on programmers' skills, but mostly are dictated by the type of game being developed.
Game engine programmer
Game engine programmers create the base engine of the game, including the simulated physics and graphics disciplines. Increasingly, video games use existing game engines, either commercial, open source or free. They are often customized for a particular game, and these programmers handle these modifications.
Physics engine programmer
A game's physics programmer is dedicated to developing the physics a game will employ. Typically, a game will only simulate a few aspects of real-world physics. For example, a space game may need simulated gravity, but would not have any need for simulating water viscosity.
Since processing cycles are always at a premium, physics programmers may employ "shortcuts" that are computationally inexpensive, but look and act "good enough" for the game in question. In other cases, unrealistic physics are employed to allow easier gameplay or for dramatic effect. Sometimes, a specific subset of situations is specified and the physical outcome of such situations are stored in a record of some sort and are never computed at runtime at all.
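To make the idea concrete, the sketch below shows one such shortcut: a semi-implicit Euler integration step with a crude ground-plane bounce in place of a full collision solver. The constants and function names are illustrative assumptions, not taken from any particular engine.

    GRAVITY = -9.81   # m/s^2; only vertical motion is simulated

    def step(pos_y, vel_y, dt):
        # Semi-implicit Euler: integrate velocity first, then position.
        vel_y += GRAVITY * dt
        pos_y += vel_y * dt
        # Cheap collision response: clamp to the ground plane and damp the bounce.
        if pos_y < 0.0:
            pos_y = 0.0
            vel_y = -vel_y * 0.5   # lose half the speed on impact
        return pos_y, vel_y

    # Drop an object from 2 m and simulate one second at 60 updates per second.
    y, v = 2.0, 0.0
    for _ in range(60):
        y, v = step(y, v, 1.0 / 60.0)
    print(round(y, 3), round(v, 3))

The damping factor is exactly the kind of "good enough" constant that gets tuned by eye rather than derived from real material properties.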
Some physics programmers may even delve into the difficult tasks of inverse kinematics and other motions attributed to game characters, but increasingly these motions are assigned via motion capture libraries so as not to overload the CPU with complex calculations.
Graphics engine programmer
Historically, this title usually belonged to a programmer who developed specialized blitter algorithms and clever optimizations for 2D graphics. Today, however, it is almost exclusively applied to programmers who specialize in developing and modifying complex 3D graphic renderers. Some 2D graphics skills have just recently become useful again, though, for developing games for the new generation of cell phones and handheld game consoles.
A 3D graphics programmer must have a firm grasp of advanced mathematical concepts such as vector and matrix math, quaternions and linear algebra.
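As an illustration of the kind of math involved, the sketch below rotates a vector by a unit quaternion built from an axis and an angle. It is a minimal, self-contained example, not the math library of any real renderer.

    import math

    def quat_from_axis_angle(axis, angle):
        # Unit quaternion (w, x, y, z) for a rotation of `angle` radians
        # about a unit-length `axis`.
        ax, ay, az = axis
        s = math.sin(angle / 2.0)
        return (math.cos(angle / 2.0), ax * s, ay * s, az * s)

    def quat_mul(a, b):
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def rotate(v, q):
        # v' = q * (0, v) * conj(q); for unit quaternions the conjugate
        # equals the inverse.
        qc = (q[0], -q[1], -q[2], -q[3])
        _, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), qc)
        return (x, y, z)

    # Rotate the x axis 90 degrees about z: the result is approximately (0, 1, 0).
    q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2.0)
    print([round(c, 3) for c in rotate((1.0, 0.0, 0.0), q)])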
Skilled programmers specializing in this area of game development can demand high wages and are usually a scarce commodity. Their skills can be used for video games on any platform.
Artificial intelligence programmer
An AI programmer develops the logic used to simulate intelligence in enemies and opponents. AI programming has recently evolved into a specialized discipline, as these tasks used to be handled by programmers who specialized in other areas. An AI programmer may program pathfinding, strategy and enemy tactic systems. This is one of the most challenging aspects of game programming and its sophistication is developing rapidly. Contemporary games dedicate approximately 10 to 20 percent of their programming staff to AI.
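Pathfinding is a typical task for this role. The sketch below finds a shortest route on a small tile grid using breadth-first search; production games more often use A* with a heuristic and navigation meshes, and the grid format here is purely an illustrative assumption.

    from collections import deque

    def find_path(grid, start, goal):
        # grid: list of strings where '#' is a wall; start/goal are (row, col).
        rows, cols = len(grid), len(grid[0])
        came_from = {start: None}
        queue = deque([start])
        while queue:
            cur = queue.popleft()
            if cur == goal:
                break
            r, c = cur
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != '#' and nxt not in came_from):
                    came_from[nxt] = cur
                    queue.append(nxt)
        if goal not in came_from:
            return None                     # no route exists
        path, node = [], goal               # walk back from the goal
        while node is not None:
            path.append(node)
            node = came_from[node]
        return path[::-1]

    level = ["....#",
             ".##.#",
             "....."]
    print(find_path(level, (0, 0), (2, 4)))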
Some games, such as strategy games like Civilization III or role-playing video games such as The Elder Scrolls IV: Oblivion, use AI heavily, while others, such as puzzle games, use it sparingly or not at all. Many game developers have created entire languages that can be used to program their own AI for games via scripts. These languages are typically less technical than the language used to implement the game, and will often be used by the game or level designers to implement the world of the game. Many studios also make their games' scripting available to players, and it is often used extensively by third party mod developers.
The AI technology used in games programming should not be confused with academic AI programming and research. Although both areas do borrow from each other, they are usually considered distinct disciplines, though there are exceptions. For example, the 2001 Lionhead Studios game Black & White features a unique AI approach: a user-controlled creature that uses learning to model behaviors during gameplay. In recent years, more effort has been directed towards bringing together promising fields of AI research and game AI programming.
Sound programmer
Not always a separate discipline, sound programming has been a mainstay of game programming since the days of Pong. Most games make use of audio, and many have a full musical score. Computer audio games eschew graphics altogether and use sound as their primary feedback mechanism.
Many games use advanced techniques such as 3D positional sound, making audio programming a non-trivial matter. With these games, one or two programmers may dedicate all their time to building and refining the game's sound engine, and sound programmers may be trained or have a formal background in digital signal processing.
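As a simplified illustration of positional audio, the sketch below derives a distance-based gain and a left/right pan for a mono source relative to a listener; the attenuation model and parameter names are assumptions for illustration, not any engine's actual audio pipeline.

    import math

    def positional_params(listener, source, max_dist=50.0):
        # listener and source are (x, y) positions in the game world.
        dx = source[0] - listener[0]
        dy = source[1] - listener[1]
        dist = math.hypot(dx, dy)
        # Linear distance attenuation, clamped to the range [0, 1].
        gain = max(0.0, 1.0 - dist / max_dist)
        # Pan from -1 (hard left) to +1 (hard right), based on horizontal offset.
        pan = 0.0 if dist == 0 else max(-1.0, min(1.0, dx / dist))
        return gain, pan

    # A source 10 units to the listener's right: attenuated and panned right.
    print(positional_params((0.0, 0.0), (10.0, 0.0)))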
Scripting tools are often created or maintained by sound programmers for use by sound designers. These tools allow designers to associate sounds with characters, actions, objects and events while also assigning music or atmospheric sounds for game environments (levels or areas) and setting environmental variables such as reverberation.
Gameplay programmer
Though all programmers add to the content and experience that a game provides, a gameplay programmer focuses more on a game's strategy, implementation of the game's mechanics and logic, and the "feel" of a game. This is usually not a separate discipline, as what this programmer does usually differs from game to game, and they will inevitably be involved with more specialized areas of the game's development such as graphics or sound.
This programmer may implement strategy tables, tweak input code, or adjust other factors that alter the game. Many of these aspects may be altered by programmers who specialize in these areas, however (for example, strategy tables may be implemented by AI programmers).
Scripter
In early video games, gameplay programmers would write code to create all the content in the game—if the player was supposed to shoot a particular enemy, and a red key was supposed to appear along with some text on the screen, then this functionality was all written as part of the core program in C or assembly language by a gameplay programmer.
More often today the core game engine is usually separated from gameplay programming. This has several development advantages. The game engine deals with graphics rendering, sound, physics and so on while a scripting language deals with things like cinematic events, enemy behavior and game objectives. Large game projects can have a team of scripters to implement these sorts of game content.
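One minimal way to picture this split is an engine that exposes a small, whitelisted API to level scripts, as sketched below. The function names, the script contents, and the use of Python as the scripting language are all hypothetical illustrations rather than the design of any particular engine.

    def engine_api(event_log):
        # Engine-side functions deliberately exposed to gameplay scripts.
        def spawn(kind, x, y):
            event_log.append("spawn %s at (%d, %d)" % (kind, x, y))
        def show_text(msg):
            event_log.append("text: " + msg)
        return {"spawn": spawn, "show_text": show_text}

    # A "level script": pure game content, with no rendering or physics code.
    LEVEL_SCRIPT = ('show_text("A red key appears!")\n'
                    'spawn("red_key", 12, 7)\n'
                    'spawn("guard", 14, 7)\n')

    events = []
    # The engine parses the script once and runs it with only the exposed API.
    exec(compile(LEVEL_SCRIPT, "<level1>", "exec"), engine_api(events))
    print(events)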
Scripters usually are also game designers. It is often easier to find a qualified game designer who can be taught a script language as opposed to finding a qualified game designer who has mastered C++.
UI programmer
This programmer specializes in programming user interfaces (UIs) for games. Though some games have custom user interfaces, this programmer is more likely to develop a library that can be used across multiple projects. Most UIs look 2D, though contemporary UIs usually use the same 3D technology as the rest of the game so some knowledge of 3D math and systems is helpful for this role. Advanced UI systems may allow scripting and special effects, such as transparency, animation or particle effects for the controls.
Input programmer
Input programming, while usually not a job title, or even a full-time position on a particular game project, is still an important task. This programmer writes the code specifying how input devices such as a keyboard, mouse or joystick affect the game. These routines are typically developed early in production and are continually tweaked during development. Normally, one programmer does not need to dedicate his entire time to developing these systems. A real-time motion-controlled game utilizing devices such as the Wii Remote or Kinect may need a very complex and low latency input system, while the HID requirements of a mouse-driven turn-based strategy game such as Heroes of Might and Magic are significantly simpler to implement.
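In practice much of this code amounts to translating raw device events into abstract game actions through a rebindable table, roughly as in the sketch below; the key codes and action names are illustrative assumptions.

    # A rebindable input map: raw key/button codes -> abstract game actions.
    DEFAULT_BINDINGS = {
        "KEY_W": "move_forward",
        "KEY_SPACE": "jump",
        "PAD_BUTTON_A": "jump",    # several physical inputs can share one action
        "MOUSE_LEFT": "fire",
    }

    def translate(raw_events, bindings=DEFAULT_BINDINGS):
        # Convert one frame's raw input events into the actions gameplay code uses.
        return [bindings[e] for e in raw_events if e in bindings]

    # Unbound keys are simply ignored.
    print(translate(["KEY_W", "MOUSE_LEFT", "KEY_UNBOUND"]))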
Network programmer
This programmer writes code that allows players to compete or cooperate, connected via a LAN or the Internet (or in rarer cases, directly connected via modem). Programmers implementing these game features can spend all their time in this one role, which is often considered one of the most technically challenging. Network latency, packet compression, and dropped or interrupted connections are just a few of the concerns one must consider. Although multi-player features can consume the entire production timeline and require the other engine systems to be designed with networking in mind, network systems are often put off until the last few months of development, adding additional difficulties to this role. Some titles have had their online features (often considered lower priority than the core gameplay) cut months away from release due to concerns such as lack of management, design forethought, or scalability. Virtua Fighter 5 for the PS3 is a notable example of this trend.
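A small but representative concern is rejecting late or duplicate state updates. The sketch below tags each update with a sequence number and discards anything older than the last update applied; it is a simplified illustration, not a complete netcode design.

    class RemotePlayerState:
        # Applies state updates that may arrive late, duplicated, or not at all.
        def __init__(self):
            self.last_seq = -1
            self.position = (0.0, 0.0)

        def apply_update(self, seq, position):
            if seq <= self.last_seq:
                return False            # stale or duplicate packet: ignore it
            self.last_seq = seq
            self.position = position
            return True

    state = RemotePlayerState()
    for seq, pos in [(1, (1.0, 0.0)), (3, (3.0, 0.0)), (2, (2.0, 0.0))]:
        state.apply_update(seq, pos)
    print(state.position)               # (3.0, 0.0): the late packet was dropped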
Game tools programmer
The tools programmer can assist the development of a game by writing custom tools for it. Game development tools often contain features such as script compilation, importing or converting art assets, and level editing. While some tools used may be COTS products such as an IDE or a graphics editor, tools programmers create tools with specific functions tailored to a specific game which are not available in commercial products. For example, an adventure game developer might need an editor for branching story dialogs, and a sports game developer could use a proprietary editor to manage players and team stats. These tools are usually not available to the consumers who buy the game.
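A representative in-house tool can be as small as an importer that turns a designer-edited spreadsheet into the structure the game loads at runtime, as sketched below; the file format and column names are made up for illustration.

    import csv, io, json

    # Designer-maintained stats table (stands in for a .csv file on disk).
    RAW_CSV = "name,health,speed\ngrunt,40,2.5\nboss,400,1.0\n"

    def import_stats(csv_text):
        # Convert the spreadsheet into the structure the game loads at runtime.
        rows = csv.DictReader(io.StringIO(csv_text))
        return {row["name"]: {"health": int(row["health"]),
                              "speed": float(row["speed"])}
                for row in rows}

    print(json.dumps(import_stats(RAW_CSV), indent=2))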
Porting programmer
Porting a game from one platform to another has always been an important activity for game developers. Some programmers specialize in this activity, converting code from one operating system to work on another. Sometimes, the programmer is responsible for making the application work not for just one operating system, but on a variety of devices, such as mobile phones. Often, however, "porting" can involve re-writing the entire game from scratch as proprietary languages, tools or hardware make converting source code a fruitless endeavour.
This programmer must be familiar with both the original and target operating systems and languages (for example, converting a game originally written in C++ to Java), convert assets, such as artwork and sounds or rewrite code for low memory phones. This programmer may also have to side-step buggy language implementations, some with little documentation, refactor code, oversee multiple branches of code, rewrite code to scale for wide variety of screen sizes and implement special operator guidelines. They may also have to fix bugs that were not discovered in the original release of a game.
Technology programmer
The technology programmer is more likely to be found in larger development studios with specific departments dedicated solely to R&D. Unlike other members of the programming team, the technology programmer usually isn't tied to a specific project or type of development for an extended length of time, and they will typically report directly to a CTO or department head rather than a game producer. As the job title implies, this position is extremely demanding from a technical perspective and requires intimate knowledge of the target platform hardware. Tasks cover a broad range of subjects including the practical implementation of algorithms described in research papers, very low-level assembly optimization and the ability to solve challenging issues pertaining to memory requirements and caching issues during the latter stages of a project. There is considerable amount of cross-over between this position and some of the others, particularly the graphics programmer.
Generalist
In smaller teams, one or more programmers will often be described as 'Generalists' who will take on the various other roles as needed. Generalists are often engaged in the task of tracking down bugs and determining which subsystem expertise is required to fix them.
Lead game programmer
The lead programmer is ultimately in charge of all programming for the game. It is their job to make sure the various submodules of the game are being implemented properly and to keep track of development from a programming standpoint. A person in this role usually transitions from other aspects of game programming to this role after several years of experience. Despite the title, this person usually has less time for writing code than other programmers on the project as they are required to attend meetings and interface with the client or other leads on the game. However, the lead programmer is still expected to program at least some of the time and is also expected to be knowledgeable in most technical areas of the game. There is often considerable common ground in the role of technical director and lead programmer, such that the jobs are often covered by one person.
Platforms
Game programmers can specialize on one platform or another, such as the Wii U or Windows. So, in addition to specializing in one game programming discipline, a programmer may also specialize in development on a certain platform. Therefore, one game programmer's title might be "PlayStation 3 3D Graphics Programmer." Some disciplines, such as AI, are transferable to various platforms and needn't be tailored to one system or another. Also, general game development principles such as 3D graphics programming concepts, sound engineering and user interface design are transferable between platforms.
Education
Notably, there are many game programmers with no formal education in the subject, having started out as hobbyists and doing a great deal of programming on their own, for fun, and eventually succeeding because of their aptitude and homegrown experience. However, most job solicitations for game programmers specify a bachelor's degree (in mathematics, physics, computer science, "or equivalent experience").
Increasingly, universities are starting to offer courses and degrees in game programming. Any such degrees have considerable overlap with computer science and software engineering degrees.
Salary
Salaries for game programmers vary from company to company and country to country. In general, however, pay for game programming is about the same as for comparable jobs in the business sector. This is despite the fact that game programming is among the most difficult of any type of programming and usually requires longer hours than mainstream programming.
Results of a 2010 survey in the United States indicate that the average salary for a game programmer is USD$95,300 annually. The least experienced programmers, with less than 3 years of experience, make an average annual salary of over $72,000. The most experienced programmers, with more than 6 years of experience, make an average annual salary of over $124,000.
Generally, lead programmers are the most well compensated, though some 3D graphics programmers may challenge or surpass their salaries. According to the same survey above, lead programmers on average earn $127,900 annually.
Job security
Though sales of video games rival other forms of entertainment such as movies, the video game industry is extremely volatile. Game programmers are not insulated from this instability as their employers experience financial difficulty.
Third-party developers, the most common type of video game developers, depend upon a steady influx of funds from the video game publisher. If a milestone or deadline is not met (or for a host of other reasons, like the game is cancelled), funds may become short and the developer may be forced to retrench employees or declare bankruptcy and go out of business. Game programmers who work for large publishers are somewhat insulated from these circumstances, but even the large game publishers can go out of business (as when Hasbro Interactive was sold to Infogrames and several projects were cancelled; or when The 3DO Company went bankrupt in 2003 and ceased all operations). Some game programmers' resumes consist of short stints lasting no more than a year as they are forced to leap from one doomed studio to another. This is why some prefer to consult and are therefore somewhat shielded from the effects of the fates of individual studios.
Languages and tools
Most commercial computer and video games are written primarily in C++, C, and some assembly language. Many games, especially those with complex interactive gameplay mechanics, tax hardware to its limit. As such, highly optimized code is required for these games to run at an acceptable frame rate. Because of this, compiled code is typically used for performance-critical components, such as visual rendering and physics calculations. Almost all PC games also use either the DirectX or OpenGL APIs, or some wrapper library, to interface with hardware devices.
Various script languages, like Ruby, Lua and Python, are also used for the generation of content such as gameplay and especially AI. Scripts are generally parsed at load time (when the game or level is loaded into main memory) and then executed at runtime (via logic branches or other such mechanisms). They are generally not executed by an interpreter, which would result in much slower execution. Scripts tend to be used selectively, often for AI and high-level game logic. Some games are designed with high dependency on scripts and some scripts are compiled to binary format before game execution. In the optimization phase of development, some script functions will often be rewritten in a compiled language.
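As a rough illustration of this scripting approach, the following Python sketch shows what a data-driven gameplay script might look like; the engine objects and method names (guard, move_towards, distance_to and so on) are invented for the example and do not come from any particular engine, which is assumed to load the script at start-up and call update() once per frame.

# guard_behaviour.py - an illustrative, data-driven behaviour script (hypothetical engine bindings)

PATROL_POINTS = [(0, 0), (10, 0), (10, 10)]   # behaviour data lives in the script, not in engine code

def update(guard, player, dt):
    """Chase the player when close; otherwise walk the patrol route."""
    if guard.distance_to(player) < 5.0:
        guard.move_towards(player.position, speed=3.0 * dt)
    else:
        target = PATROL_POINTS[guard.patrol_index % len(PATROL_POINTS)]
        guard.move_towards(target, speed=1.5 * dt)
        if guard.distance_to_point(target) < 0.1:
            guard.patrol_index += 1

During the optimization phase, a frequently executed function like this is exactly the kind of script logic that might be rewritten in the engine's compiled language, as described above.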
Java is used for many web browser based games because it is cross-platform, does not usually require installation by the user, and poses fewer security risks, compared to a downloaded executable program. Java is also a popular language for mobile phone based games. Adobe Flash, which uses the ActionScript language, and JavaScript are popular development tools for browser-based games.
As games have grown in size and complexity, middleware is becoming increasingly popular within the industry. Middleware provides greater and higher level functionality and larger feature sets than the standard lower level APIs such as DirectX and OpenGL, such as skeletal animation. In addition to providing more complex technologies, some middleware also makes reasonable attempts to be platform independent, making common conversions from, for example, Microsoft Windows to PS4 much easier. Essentially, middleware is aimed at cutting out as much of the redundancy in the development cycle as possible (for example, writing new animation systems for each game a studio produces), allowing programmers to focus on new content.
Other tools are also essential to game developers: 2D and 3D packages (for example Blender, GIMP, Photoshop, Maya or 3D Studio Max) enable programmers to view and modify assets generated by artists or other production personnel. Source control systems keep source code safe and secure, and help manage merging. IDEs with debuggers (such as Visual Studio) make writing code and tracking down bugs a less painful experience.
See also
List of video game industry people#Programming
Code Monkeys, an animated show about game programmers
Programmer
Game design
Game development tool
Game programming#Tools
Notes
References
External links
Game industry veteran Tom Sloper's advice on game programming
The Programmer at Eurocom (archived 7 November 2007)
Computer occupations
Programmer | Video game programmer | [
"Technology"
] | 4,120 | [
"Video game industry occupations",
"Computer occupations"
] |
311,855 | https://en.wikipedia.org/wiki/Autorotation%20%28fixed-wing%20aircraft%29 | For fixed-wing aircraft, autorotation is the tendency of an aircraft in or near a stall to roll spontaneously to the right or left, leading to a spin (a state of continuous autorotation).
Details
When the angle of attack is less than the stalling angle, any increase in angle of attack causes an increase in lift coefficient that causes the wing to rise. As the wing rises, the angle of attack and lift coefficient decrease, which tends to restore the wing to its original angle of attack. Conversely, any decrease in angle of attack causes a decrease in lift coefficient which causes the wing to descend. As the wing descends, the angle of attack and lift coefficient increase, which tends to restore the wing to its original angle of attack. For this reason, the angle of attack is stable when it is less than the stalling angle. The aircraft displays damping in roll.
When the wing is stalled and the angle of attack is greater than the stalling angle, any increase in angle of attack causes a decrease in lift coefficient that causes the wing to descend. As the wing descends, the angle of attack increases further, which causes the lift coefficient to decrease further and the descent to continue. Conversely, any decrease in angle of attack causes an increase in lift coefficient that causes the wing to rise. As the wing rises, the angle of attack decreases, causing the lift coefficient to increase further towards the maximum lift coefficient. For this reason, the angle of attack is unstable when it is greater than the stalling angle. Any disturbance of the angle of attack on one wing will cause the aircraft to roll spontaneously and continuously.
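This behaviour can be summarised with a short worked relation (a sketch in standard notation, not taken from the references cited below). For a wing rolling with angular rate p, a section at spanwise distance y from the roll axis sees its angle of attack changed by approximately

\Delta\alpha \approx \frac{p\,y}{V}, \qquad \Delta C_L \approx \frac{dC_L}{d\alpha}\,\Delta\alpha

where V is the airspeed. Below the stalling angle the lift-curve slope dC_L/d\alpha is positive, so the lift change opposes the rolling motion (damping in roll); above the stalling angle the slope is negative, so the lift change reinforces the roll, which is the condition for autorotation.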
When the angle of attack on the wing of an aircraft reaches the stalling angle the aircraft is at risk of autorotation. This will eventually develop into a spin if the pilot does not take corrective action.
Autorotation in kites and gliders
Magnus effect rotating kites (wing flipping or wing tumbling) that have the rotation axis roughly normal to the stream direction use autorotation; a net lift is possible that lifts the kite and payload to altitude. The Rotoplane, the UFO rotating kite, and the Skybow rotating ribbon arch kite use the Magnus effect resulting from the autorotating wing with rotation axis normal to the stream.
Some kites are equipped with autorotation wings.
Again, a third kind of autorotation occurs in self-rotating bols, rotating parachutes, or rotating helical objects sometimes used as kite tails or kite-line laundry. This kind of autorotation drives wind and water propeller-type turbines, sometimes used to generate electricity.
Unlocked engine-off aircraft propellers may autorotate. Such autorotation is being explored for generating electricity to recharge flight-driving batteries.
See also
Airborne wind turbine
Küssner effect
Autorotation (airborne wind energy)
References
Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London.
Stinton, Darryl (1996), Flying Qualities and Flight Testing of The Aeroplane, Blackwell Science Ltd, Oxford UK.
Notes
Aerodynamics
Emergency aircraft operations | Autorotation (fixed-wing aircraft) | [
"Chemistry",
"Engineering"
] | 683 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
311,916 | https://en.wikipedia.org/wiki/Binge%20eating | Binge eating is a pattern of disordered eating which consists of episodes of uncontrollable eating. It is a common symptom of eating disorders such as binge eating disorder and bulimia nervosa. During such binges, a person rapidly consumes an excessive quantity of food. A diagnosis of binge eating is associated with feelings of loss of control. Binge eating disorder is also linked with being overweight and obesity.
Diagnosis
The DSM-5 includes a disorder diagnosis criterion for Binge Eating Disorder (BED). It is as follows:
Recurrent and persistent episodes of binge eating
Binge eating episodes are associated with three (or more) of the following:
Eating much more rapidly than normal
Eating until feeling uncomfortably full
Eating large amounts of food when not physically hungry
Eating alone because of being embarrassed by how much one is eating
Feeling disgusted with oneself, depressed, or very guilty after overeating
Marked distress regarding binge eating
Absence of regular compensatory behaviors (such as purging)
Warning signs
Typical warning signs of binge eating disorder include the disappearance of a large quantity of food in a relatively short period of time. A person who may be experiencing binge eating disorder may appear to be uncomfortable when eating around others or in public. A person may develop new and extreme eating patterns that they have never done before. These might include diets that cut out certain food groups completely such as a no dairy or no carb diet. Binge eating can begin after a first attempt at dieting. They might also steal or hoard food in unusual places. A person may be experiencing fluctuations in their weight. In addition, they may have feelings of disgust, depression, or guilt about overeating. Another possible warning sign of binge eating is that a person may be obsessed with their body image or weight.
Furthermore, patients who binge eat may also engage in other self-destructive behaviours like suicide attempts, drug use, shoplifting, and drinking too much alcohol. The onset of binge eating without dieting is linked to a higher risk of mental health issues and a younger age of onset. BED patients can experience comorbid psychiatric instability.
Causes
There are no direct causes of binge eating; however, long-term dieting, psychological issues and an obsession with body image have been linked to binge eating. There are multiple factors that increase a person's risk of developing binge eating disorder. Family history could play a role if that person had a family member who was affected by binge eating. Said person may not have a supportive or friendly home environment, and they have a hard time expressing their problems with BED. Having a history of going on extreme diets may cause an urge to binge eat. Psychological issues such as feeling negatively about oneself or the way they look may trigger a binge.
Weight stigma has also been found to predict binge eating, highlighting the importance of weight-inclusive approaches to binge eating disorder that do not exacerbate this potential cause.
Health risks
There are several physical, emotional, and social health risks when associated with binge eating disorder. These risks include depression, anxiety, and heart disease.
One study found that people with obesity who experience binge eating have a higher body mass index, and higher levels of depression and stress, than those without binge eating disorder. Exposure to two major categories of risk factors—those that raise the risk for obesity and those that raise the risk for psychiatric disorders in general—can be associated with binge eating disorder.
Effects
Typically, the eating is done rapidly, and a person will feel emotionally numb and unable to stop eating. Most people who have eating binges try to hide this behavior from others, and often feel ashamed about being overweight or depressed about their overeating. Although people who do not have any eating disorder may occasionally experience episodes of overeating, frequent binge eating is often a symptom of an eating disorder.
BED is characterized by uncontrollable, excessive eating, followed by feelings of shame and guilt. Unlike those with bulimia, those with BED symptoms typically do not purge their food, fast, or excessively exercise to compensate for binges. Additionally, these individuals tend to diet more often, enroll in weight-control programs and have a history of family obesity. However, many who have bulimia also have binge-eating disorder.
Along with the effects on social and physical health when suffering from BED, there are psychiatric disorders that are often linked to it, including but not limited to:
depression, bipolar disorder, anxiety disorder, substance abuse/use disorder.
Treatments
Current treatments for binge eating disorder mainly consist of psychological therapies, such as Cognitive Behavioural Therapy (CBT), Interpersonal Psychotherapy (IPT), and Dialectical Behavioural Therapy (DBT). A study conducted on the long term efficacy of psychological treatments for binge eating showed that both cognitive behavioral therapy (CBT) and group interpersonal psychotherapy (IPT) effectively treat binge eating disorder, with 64.4% of patients completely recovering from binge eating.
Lisdexamfetamine dimesylate, also known as Vyvanse, is the only medication approved by the Food and Drug Administration (FDA) for the treatment of moderate-to-severe binge eating disorder in adults as of 2024. However, some studies have called into question its effectiveness for this indication.
History
APA DSM
The American Psychiatric Association mentioned and listed binge eating under the criteria and features of bulimia in the revised third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) in 1987. By including binge eating in the DSM-III-R, even if not on its own as a separate eating disorder, they brought awareness to the disorder and gave it legitimacy as a mental disorder. This allowed people to receive appropriate treatment for binge eating and their disorder to be legitimized.
Drug therapy
In January 2015, the Food and Drug Administration (FDA) approved lisdexamfetamine dimesylate (Vyvanse), the first medication indicated for the treatment of moderate-to-severe binge eating disorder.
Men with binge eating
Men with binge eating often face unique barriers to seeking treatment due to socio-cultural expectations surrounding masculinity. After men compare their bodies to the culturally constructed masculine ideals, they often develop heightened concerns about their own body image and internalize the belief that their bodies should be muscular, lean, and strong, developing unhealthy behaviors like binge eating or using fad diets. Many men hesitate to reach out for help out of fear of appearing weak, 'less like a man' or even homosexual. The pervasive stereotype that eating disorders primarily affect women has contributed to feelings of shame and isolation among men who are affected by these disorders. This gender-based stigma surrounding eating disorders and the strongly feminine branding of eating disorder treatment centers create a significant barrier to men's willingness to reach out for support. Men are more likely to partake in compulsive or excessive exercising as compensation for highly calorific diets, which can lead to body dysmorphia.
See also
Binge drinking
Binge eating disorder
Cognitive behavioral treatment of eating disorders
Counterregulatory eating
Overeating
Polyphagia
Prader-Willi Syndrome
References
External links
Eating behaviors of humans
Hyperalimentation | Binge eating | [
"Biology"
] | 1,530 | [
"Eating behaviors",
"Behavior",
"Human behavior",
"Eating behaviors of humans"
] |
311,991 | https://en.wikipedia.org/wiki/Thames%20Tunnel | The Thames Tunnel is a tunnel beneath the River Thames in London, connecting Rotherhithe and Wapping. It measures wide by high and is long, running at a depth of below the river surface measured at high tide. It is the first tunnel known to have been constructed successfully underneath a navigable river. It was built between 1825 and 1843 by Marc Brunel, and his son, Isambard, using the tunnelling shield newly invented by the elder Brunel and Thomas Cochrane.
The tunnel was originally designed for horse-drawn carriages, but was mainly used by pedestrians and became a tourist attraction. In 1869 it was converted into a railway tunnel for use by the East London line which, since 2010, is part of the London Overground railway network under the ownership of Transport for London.
History and development
Construction
At the start of the 19th century, there was a pressing need for a new land connection between the north and south banks of the Thames to link the expanding docks on each side of the river. The engineer Ralph Dodd tried, but failed, to build a tunnel between Gravesend and Tilbury in 1799.
Between 1805 and 1809, a group of Cornish miners, including Richard Trevithick, tried to dig a tunnel further upriver between Rotherhithe and Wapping/Limehouse, but failed because of the difficult conditions of the ground. The Cornish miners were used to hard rock and did not modify their methods for soft clay and quicksand. This Thames Archway project was abandoned after the initial pilot tunnel (a 'driftway') flooded twice when of a total of had been dug. It only measured by , and was intended as a drain for a larger tunnel for passenger use. The failure of the Thames Archway project led engineers to conclude that "an underground tunnel is impracticable".
The Anglo-French engineer Marc Brunel refused to accept this conclusion. In 1814 he proposed to Emperor Alexander I of Russia a plan to build a tunnel under the river Neva in St Petersburg. This scheme was turned down (a bridge was built instead) and Brunel continued to develop ideas for new methods of tunnelling.
Brunel patented the tunnelling shield, a revolutionary advance in tunnelling technology, in January 1818. In 1823 Brunel produced a plan for a tunnel between Rotherhithe and Wapping, which would be dug using his new shield. Financing was soon found from private investors, including the Duke of Wellington, and a Thames Tunnel Company was formed in 1824, the project beginning in February 1825.
The first step was the construction of a large shaft on the south bank at Rotherhithe, back from the river bank. It was dug by assembling an iron ring in diameter above ground. A brick wall high and thick was built on top of this, with a powerful steam engine surmounting it to drive the excavation's pumps. The whole apparatus was estimated to weigh . The soil below the ring's sharp lower edge was removed manually by Brunel's workers. The whole shaft thus gradually sank under its own weight, slicing through the soft ground like a pastry cutter.
The shaft became stuck at one point during its sinking, as the pressure of the earth around it held it firmly in position. Extra weight was required to make it continue its descent. 50,000 bricks were added as temporary weights. It was realised that the problem was caused because the shaft's sides were parallel. Years later when the Wapping shaft was built, it was slightly wider at the bottom than the top. This non-cylindrical tapering design ensured it did not get stuck. By November 1825 the Rotherhithe shaft was in place and tunnelling work could begin.
The tunnelling shield, built at Henry Maudslay's Lambeth works and assembled in the Rotherhithe shaft, was the key to Brunel's construction of the Thames Tunnel. The Illustrated London News described how it worked.
Each of the twelve frames of the shield weighed over . The key innovation of the tunnelling shield was its support for the unlined ground in front and around it to reduce the risk of collapses. However, many workers, including Brunel himself, soon fell ill from the poor conditions caused by filthy sewage-laden water seeping through from the river above. This sewage gave off methane gas which was ignited by the miners' oil lamps. When the resident engineer, John Armstrong, fell ill in April 1826, Marc's son Isambard Kingdom Brunel took over at the age of 20.
Work was slow, progressing at only a week. To earn income from the tunnel, the company directors allowed sightseers to view the shield in operation. They charged a shilling for the adventure and an estimated 600–800 visitors took advantage of the opportunity every day.
The excavation was hazardous. The tunnel flooded suddenly on 18 May 1827 after had been dug. Isambard Kingdom Brunel lowered a diving bell from a boat to repair the hole at the bottom of the river, throwing bags filled with clay into the breach in the tunnel's roof. Following the repairs and the drainage of the tunnel, he held a banquet inside it.
Closure
Six men died when the tunnel flooded again the following year, on 12 January 1828, just four days after a visit by Don Miguel, soon to become Regent of Portugal. Isambard himself was extremely lucky to survive the flooding. The six men had made their way to the main stairwell, as the emergency exit was known to be locked. Isambard instead made for the locked exit. A contractor named Beamish heard him there and broke the door down, and an unconscious Isambard was pulled out and revived. He was sent to Brislington, near Bristol, to recuperate. There he heard about the competition to build what became the Clifton Suspension Bridge.
Completion
In December 1834 Marc Brunel succeeded in raising enough money, including a loan of £247,000 from the Treasury, to continue construction.
Starting in August 1835 the old rusted shield was dismantled and removed. By March 1836 the new shield, improved and heavier, was assembled in place and boring resumed.
In 1835, the Italian poet Giacomo Leopardi parodied the construction of the Thames Tunnel in lines 126–129 of the poem .
Impeded by further floods (23 August and 3 November 1837, 20 March 1838, 3 April 1840), fires and leaks of methane and hydrogen sulphide gas, the remainder of the tunnelling was completed in November 1841, after another five and a half years. The extensive delays and repeated flooding made the tunnel the butt of metropolitan humour.
The Thames Tunnel was fitted out with lighting, roadways and spiral staircases during 1841–1842. An engine house on the Rotherhithe side, which now houses the Brunel Museum, was also constructed to house machinery for draining the tunnel. The tunnel was finally opened to the public on 25 March 1843.
Pedestrian usage
Although it was a triumph of civil engineering, the Thames Tunnel was not a financial success. It had cost £454,000 to dig and another £180,000 to fit out – far exceeding its initial cost estimates. Proposals to extend the entrance to accommodate wheeled vehicles failed owing to cost, and it was used only by pedestrians. It became a major tourist attraction, attracting about two million people a year, each paying a penny to pass through, and became the subject of popular songs. The American traveller William Allen Drew commented that "No one goes to London without visiting the Tunnel" and described it as the "eighth wonder of the world". When he saw it for himself in 1851, he pronounced himself "somewhat disappointed in it" but still left a vivid description of its interior, which was more like an underground marketplace than a transport artery.
Other opinions of the tunnel were more negative; some regarded it as the haunt of prostitutes and "tunnel thieves" who lurked under its arches and mugged passers-by. The American writer Nathaniel Hawthorne, who visited it a few years after Drew, recorded a similarly negative impression in 1855.
Conversion into a railway tunnel
The tunnel was purchased in September 1865 at a cost of £800,000 by the East London Railway Company, a consortium of six mainline railways which sought to use the tunnel to provide a rail link for goods and passengers between Wapping (and later Liverpool Street) and the South London line. The tunnel's generous headroom, resulting from the architects' original intention of accommodating horse-drawn carriages, also provided a sufficient loading gauge for trains.
The line's engineer was Sir John Hawkshaw who was also noted, with W. H. Barlow, for the major re-design and completion of Isambard Brunel's long-abandoned Clifton Suspension Bridge at Bristol, which was completed in 1864.
The first train ran through the tunnel on 7 December 1869. In 1884, the tunnel's disused construction shaft to the north of the river was repurposed to serve as Wapping station.
The East London Railway was later absorbed into the London Underground, where it became the East London line. It continued to be used for goods services as late as 1962. During the Underground days, the Thames Tunnel was the oldest underground piece of the Tube's infrastructure.
It was planned to construct a junction between the East London Line and the Jubilee Line extension at Canada Water station. As construction would require the temporary closure of the East London Line, it was decided to take this opportunity to perform long-term maintenance on the tunnel and so in 1995 the East London Line was closed to allow construction and maintenance to take place. The proposed repair method for the tunnel was to seal it against leaks by "shotcreting" it with concrete, obliterating its original appearance, causing a controversy that led to a bitter conflict between London Underground, who wished to complete the work as quickly and cheaply as possible, and architectural interests wishing to preserve the tunnel's appearance. The architectural interests won, with the Grade II* listing of the tunnel on 24 March 1995, the day London Underground had scheduled the start of the long-term maintenance work.
Following an agreement to leave a short section at one end of the tunnel untreated, and more sympathetic treatment of the rest of the tunnel, the work went ahead and the route reopened – much later than originally anticipated – in 1998. The tunnel closed again from 23 December 2007 to permit tracklaying and resignalling for the East London Line extension. The extension work resulted in the tunnel becoming part of the new London Overground. After its reopening on 27 April 2010, it was used by mainline trains again.
Influence
The construction of the Thames Tunnel showed that it was indeed possible to build underwater tunnels, despite the previous scepticism of many engineers. Several new underwater tunnels were built in the UK in the following decades: the Tower Subway in London; the Severn Tunnel under the River Severn; and the Mersey Railway Tunnel under the River Mersey. Brunel's tunnelling shield was later refined, with James Henry Greathead playing a particularly important role in developing the technology.
In 1991, the Thames Tunnel was designated as an International Historic Civil Engineering Landmark by the American Society of Civil Engineers and the Institution of Civil Engineers.
In 1995 the tunnel was listed at Grade II* in recognition of its architectural importance.
Visiting
Nearby in Rotherhithe, Brunel's engine house (built to house drainage pumps) is open to visitors as the Brunel Museum.
In the 1860s, when trains started running through the tunnel, the entrance shaft at Rotherhithe was used for ventilation. The staircase was removed to reduce the risk of fire. In 2011, a concrete raft was built near the bottom of the shaft, above the tracks, when the tunnel was upgraded for the London Overground network. This space, with walls blackened with smoke from steam trains, forms part of the museum and functions at times as a concert venue and occasional bar. A rooftop garden has been built on top of the shaft. In 2016 the entrance hall opened as an exhibition space, with a staircase providing access to the shaft for the first time in over 150 years.
See also
List of crossings of the River Thames
Tunnels underneath the River Thames
Notes
References
External links
"Brief history during the Snow era" UCLA School of Public Health
The Brunel Museum – Based in Rotherhithe, London, the museum is housed in the building that contained the pumps to keep the Thames Tunnel dry
Brunel's Thames Tunnel BBC News – Slideshow of Thames Tunnel images
London's Oldest Underwater Tunnel – slideshow by Life magazine
The Thames Tunnel: a tunnel book Flickr, 23 May 2006 – Photos of a promotional book commemorating the opening of the tunnel
Thames Tunnel Brunel portal
Thames Tunnel photoset Flickr, 12–13 March 2010
Photos of the East London Line and Thames Tunnel while still London Underground
Thames Tunnel: Rare access to 'eighth wonder of world' – BBC News (26 May 2014) – A brief 'potted history' (a 2-minute video filmed in the tunnel)
Thames Tunnel Company (1836) An explanation of the works of the tunnel under the Thames from Rotherhithe to Wapping – digital facsimile from the Linda Hall Library
Tunnels completed in 1843
Railway tunnels in London
Transport in the London Borough of Southwark
Transport in the London Borough of Tower Hamlets
Tunnels underneath the River Thames
Works of Isambard Kingdom Brunel
London Overground
Grade II* listed tunnels
Historic Civil Engineering Landmarks
Rotherhithe
Wapping
1843 establishments in England
Grade II* listed buildings in the London Borough of Southwark
Grade II* listed buildings in the London Borough of Tower Hamlets
Pedestrian tunnels in the United Kingdom | Thames Tunnel | [
"Engineering"
] | 2,794 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
312,028 | https://en.wikipedia.org/wiki/Packet%20Switch%20Stream | Packet Switch Stream (PSS) was a public data network in the United Kingdom, provided by British Telecommunications (BT). It operated from the late 1970s through to the mid-2000s.
Research, development and implementation
EPSS
Roger Scantlebury was seconded from the National Physical Laboratory to the British Post Office Telecommunications division (BPO-T) in 1969. He had worked with Donald Davies in the late 1960s pioneering the implementation of packet switching and the associated communication protocols on the local-area NPL network. By 1973, BPO-T engineers had developed a packet-switching communication protocol from basic principles for an Experimental Packet Switched Service (EPSS) based on a virtual call capability. However, the protocols were complex and limited; Donald Davies described them as "esoteric".
Ferranti supplied the hardware and software. The handling of link control messages (acknowledgements and flow control) was different from that of most other networks. The EPSS began operating in 1976, the first public data network in the UK. EPSS was interconnected with SATNET and the NPL network.
IPSS
The International Packet Switch Stream (IPSS) was an international network service, based on the X.25 standard, launched by the international division of BT. This venture was driven by the high demand for affordable access to US-based database and other network services. A service was provided by IPSS to this market, which started operation in 1978. IPSS was later linked to PSS and other packet switched networks around the world using gateways based on the X.75 standard.
IPSS was interconnected with SATNET. JANET connections were available via IPSS.
PSS
A period of pre-operational testing with customers, mainly UK universities and computer manufacturers, began in 1980. Packet Switch Stream launched as a commercial service on 20 August 1981 based on X.25/X.75. The experimental predecessor network (EPSS) formally closed down on 31 July 1981 after all the existing connections had been moved to PSS.
The network was initially based upon a dedicated modular packet switch using DCC's TP 4000 communication processor hardware. The operating system and the packet switching software were developed by Telenet (later GTE Telenet). BT bought Telenet's system via Plessey Controls of Poole, Dorset, which also sold telex and traffic light systems. PSS was launched before Telenet's own upgrade of its network and, at the time, most other networks still used general-purpose minicomputers as packet switches.
For a brief time the EEC operated a packet switched network, Euronet, and a related project Diane to encourage more database and network services to develop in Europe. These connections moved over to PSS and other European networks as commercial X.25 services launched.
Later on the InterStream gateway between the Telex network and PSS was introduced based on a low speed PAD interface.
In addition, BT used Telematics packet switches for the Vascom network to support the Prestel service.
The network management systems were based in London and Manchester. Packet switches were installed at major trunk exchanges in most major conurbations in the UK. Network management was run on a system of 24 Prime 63xx and 48xx computers running a modified versions of Revisions 20 and 22 of the Primos operating system.
The DNICs used by IPSS and PSS were 2341 and 2342 respectively.
The last PSS node in the UK was finally switched off Wednesday, June 28, 2006.
Description
Companies and individual users could connect into the PSS network using the full X.25 interface, via a dedicated four-wire telephone circuit using a PSS analog modem and later on, when problems of 10-100 ms transmission failures with the PCM Voice based transmission equipment used by the early Kilostream service were resolved, via a Kilostream digital access circuit (actually a baseband modem). In this early 1980s era installation lead times for suitable 4-wire analog lines could be more than 6 months in the UK.
Companies and individual users could also connect into the PSS network using a basic non-error correcting RS232/V.24 asynchronous character based interface via an X.3/X.28/X.29 PAD (Packet Assembler/Disassembler) service oriented to the then prevalent dumb terminal market place. The PAD service could be connected to via a dedicated four-wire telephone circuit using a PSS analog modem and later on via a Kilostream digital access circuit. However most customers, for cost reasons, chose to dial up via an analog modem over the then UK analog telephony network to their nearest public PAD, via published phone numbers, using an ID/password provided as a subscription service.
The current day analogy of ISPs offering broadband always on and dial up services to the internet applies here. Some customers connected to the PSS network via the X.25 service and bought their own PADs. PSS was one of the first telecommunications networks in the UK to be fully liberalised in that customers could connect their own equipment to the network. This was before privatisation and the creation of British Telecommunications plc (BT) in 1984.
Connectivity to databases and mainframe systems
PSS could be used to connect to a variety of online databases and mainframe systems. Of particular note was the use of PSS for the first networked Clearing House Automated Payment System (CHAPS). This was a network system used to transfer all payments over £10,000 GBP (in early 1980s monetary value) between the major UK banks and other major financial institutions based in the UK. It replaced a paper based system that operated in the City of London using electrical vehicles similar to milk floats. Logica (now LogicaCMG) designed the CHAPS system and incorporated an encryption system able to cope with HDLC bit stuffing on X.25 links.
Speeds
There was a choice of different speeds of PSS lines; the faster the line, the more it cost to rent. The highest and lowest speed lines were provided by the Megastream and Kilostream services, 2M (Mega) bit/s and 256K (kilo) bit/s respectively. On analog links 2400 bit/s, 4800 bit/s, 9600 bit/s and 48 kbit/s were offered. Individual users could link into PSS, on a pay-as-you-go basis, by using a 110, 300, 1200/75, 1,200 or 2,400 bit/s PSTN modem to connect a Data Terminal Equipment terminal into a local PSS exchange. Note: in those days 2,400 bit/s modems were quite rare; 1,200 bit/s was the usual speed in the 1980s, although 110 and 300 bit/s modems were not uncommon.
Investment challenges
Early years
PSS suffered from inconsistent investment during its early years: sometimes not enough, sometimes too much, and mostly for the wrong reasons. BT's attitude to packet switching was ambivalent at best. France's Transpac had a separate commercial company with dedicated management and saw X.25 packet switching as a core offering. BT's then senior management regarded packet switching as a passing phase until the telecommunications nirvana of ISDN's 64 kbit/s for everyone arrived.
Tymnet acquisition and exchange for other assets
BT bought the Tymnet network from McDonnell Douglas. BT subsequently exchanged major US elements of the Tymnet business with MCI for other assets when the proposed merger of their two businesses was thwarted by MCI's purchase by WorldCom.
In the words of BT's own history: British Telecom purchased the Tymnet network systems business and its associated applications activities from the McDonnell Douglas Corporation on 19 November (1989) for $355 million. Its activities included TYMNET, the public network business, plus its associated private and hybrid (mixed public and private) network activities, the OnTyme electronic mail service, the Card Service processing business, and EDI*Net, the US market leader in electronic data interchange.
BT Tymnet anticipated developing an end to end managed network service for multi-national customers, and developing dedicated or hybrid networks that embraced major trading areas. Customers would be able to enjoy one-stop-shopping for global data networks, and a portfolio of products designed for a global market place.
These services were subsequently offered by BT Global Network Services, and subsequently by Concert as part of Concert Global Network Services after the Concert joint venture company was launched on 15 June 1994.
Later years
Even in later years, BT's senior management stated that the Internet was "not fit for purpose".
Investments in value added network services (VANS) and in BT's own access-level packet switching hardware delayed operating profit. This in turn dented PSS's already low credibility with BT's management still further. Despite healthy demand for basic X.25 services and the obvious trend towards more demanding, bandwidth-intensive applications that required investment in more powerful switches, a decision was made instead to develop BT's own hardware and network applications.
In the midst of this, IBM (then the market leader in computing) and BT attempted to launch a joint venture, called Jove, for managed SNA services in the UK. For a time, significant extra expenditure was allowed for BT's data services, PSS being the major part, as one concern of regulators was that this joint venture might damage work on Open Systems Interconnection. This only made cost control worse and delayed operating profit further. Eventually the UK government decided the SNA joint venture was anti-competitive and vetoed it, but not before PSS management had been allowed to commit to large investments that caused serious problems later.
One of the few successful value added applications was the transaction phone, used by retailers to check credit cards, validate transactions and prevent fraud. It was believed that putting a packet switch in every local telephone exchange would allow this and other low-bandwidth applications to drive revenue. The lesson of Tymnet's similar transaction phone, which just used a dial-up link to a standard PAD-based service, was not followed. Each low-end packet switch installed added costs for floor space, power, etc. without any significant value added revenue resulting, nor were the switches adequate for X.25 host traffic.
Ideas such as providing a menu-based interface called Epad, more user-friendly than X.28, were rendered obsolete by the advent of Windows-based clients on PCs.
As the added value services, collectively named PSS Plus, added significant costs and headcount while contributing virtually no revenue, a change in PSS's management eventually resulted. While a decision was eventually made to put some of the basic network services people in senior positions and try to launch what had been developed, this proved to be a major mistake. An exodus of people who were developing the value added network services helped reduce some costs. However, significant ongoing expenditure had already been committed to manufacturing packet switch hardware and to the use of the very expensive Tandem computers in existing VANS. Operating profit was still not achieved, and a further change in management followed, with McKinsey consulting being called in.
McKinsey's recommendation, that increasing revenue while cutting costs was required to turn the business around, was duly followed by the new management, and an operating profit was achieved in about 1988. This rested on running PSS efficiently and cutting the VANS as much as possible. PSS was then merged with other failing businesses such as Prestel as it became part of a larger Managed Network Services division that was used to fix or close BT's problem businesses.
Legacy
Communication protocols
Researchers on EPSS in the UK and elsewhere identified the need for defining higher-level protocols. This was reflected in the UK National Computing Centre publication 'Why Distributed Computing', which was based on extensive research into future potential configurations for computer systems, and resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977. This work led to the OSI reference model in 1984 and the subsequent Internet-OSI Standards War.
Commercial
While PSS eventually went the way of all X.25 networks and was overwhelmed by the internet and, more significantly, the internet's superior application suite and cost model, BT did not capitalise on the transition as much as other packet switch operators, owing to subsequent mistakes concerning the internet, Tymnet, BT's North American operations and the Concert Global Services venture with AT&T.
BT's failure to become the major ISP in its own home market, unlike every other former PTT, and the corresponding success of Dixons' Freeserve, Demon and Energis-based virtual ISPs in the same sector, has only recently been recovered from. This changed only after BT replaced its most senior management, who were fixated on circuit switching and ISDN based on System X/Y telephone exchanges, and embraced broadband and the internet lock, stock and barrel. An emergency rights issue also helped resolve the debt from acquiring second- or third-ranked old-style telco companies around the world.
Now BT appears to be inheriting a dominant position in the Global Network Services market, based on packet switching, as CSC and Reuters sell their networks to BT, and as the commodity pricing of IP services carried over its core 21st-century MPLS network for voice and data finally delivers the real cost efficiencies that packet switching always promised.
See also
Internet in the United Kingdom § History
Telecommunications in the United Kingdom
References
External links
Pictures of the BT PSS equipment
BT Group
General Post Office
History of computing in the United Kingdom
History of telecommunications in the United Kingdom
Packets (information technology)
X.25 | Packet Switch Stream | [
"Technology"
] | 2,768 | [
"History of computing",
"History of computing in the United Kingdom"
] |
312,076 | https://en.wikipedia.org/wiki/Spike%20strip | A spike strip (also referred to as a spike belt, road spikes, traffic spikes, tire shredders, stingers, stop sticks, by the trademark Stinger or formally known as a Tire Deflation Device or TDD) is a device or incident weapon used to impede or stop the movement of wheeled vehicles by puncturing their tires.
Generally, the strip is composed of a collection of metal barbs, teeth or spikes pointing upward. The spikes are designed to puncture and flatten tires when a vehicle is driven over them; they may be portable, as a police weapon, or strongly secured to the ground, as those found at security checkpoint entrances in certain facilities. (These particular models, however, retract and do not cause damage when a vehicle drives over them from the proper direction.) They also may be detachable, with new spikes fitted to the strip after use. The spikes may be hollow or solid; hollow ones are designed to detach and become embedded in the tires, allowing air to escape at a steady rate to reduce the risk of the driver losing control and crashing. They are historically a development of the caltrop, with anti-cavalry and anti-personnel versions being used as early as 331 BC by Darius III against Alexander the Great at the Battle of Gaugamela in Persia.
In the United States, five officers were killed deploying spike strips in 2011, having been struck by fleeing vehicles. Dallas, Texas police are among those banned from using them, in response to the hazards.
Remotely deployable spike strips have been invented to reduce the danger to police officers deploying them.
Private possession of spike strips was banned in New South Wales, Australia in 2003 after a strip cheaply constructed from a steel pipe studded with nails was used against a police vehicle. John Watkins, a member of the New South Wales Legislative Assembly, stated they would be added to the New South Wales prohibited weapons list.
Following the rise in terrorist vehicle attacks whereby a vehicle is driven at speed into pedestrians, a net with steel spikes that can be deployed by two people in less than a minute, and reportedly able to stop a vehicle of up to 17 tonnes, was developed for preventive use at public events in the UK, with the name "Talon". The spikes puncture the tires and the net becomes entangled around the front wheels, halting the vehicle. It is designed to reduce risk to crowds by making the vehicle skid in a straight line without veering unpredictably. It was first deployed to protect a parade on 11 September 2017.
See also
Car chase
References
Engineering barrages
Law enforcement equipment
Espionage devices
Military equipment
Area denial weapons
Tires
Road hazards | Spike strip | [
"Technology",
"Engineering"
] | 543 | [
"Area denial weapons",
"Military engineering",
"Engineering barrages",
"Road hazards"
] |
312,129 | https://en.wikipedia.org/wiki/Lossless%20predictive%20audio%20compression | Lossless predictive audio compression (LPAC) is an improved lossless audio compression algorithm developed by Tilman Liebchen, Marcus Purat and Peter Noll at the Institute for Telecommunications, Technische Universität Berlin (TU Berlin), to compress PCM audio in a lossless manner, in contrast to lossy compression algorithms.
It is no longer developed because an advanced version of it has become an official standard under the name of MPEG-4 Audio Lossless Coding.
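The following Python sketch illustrates the general idea behind lossless predictive coding on integer PCM samples (a fixed first-order predictor plus residual coding); it is not LPAC's actual predictor or entropy coder, both of which are considerably more sophisticated.

# Encoder: store the residual between each sample and a prediction made from the previous sample.
# Decoder: rebuild every sample exactly from the residuals, so no information is lost.
def encode(samples):
    residuals, previous = [], 0
    for s in samples:
        residuals.append(s - previous)   # small residuals compress well with an entropy coder
        previous = s                     # first-order predictor: "same as the last sample"
    return residuals

def decode(residuals):
    samples, previous = [], 0
    for r in residuals:
        previous += r
        samples.append(previous)
    return samples

pcm = [0, 3, 7, 8, 8, 6, 2, -1]
assert decode(encode(pcm)) == pcm        # reconstruction is bit-exact

In a real codec the residuals would then be entropy-coded (for example with Rice or arithmetic coding) rather than stored directly.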
See also
Free Lossless Audio Codec (FLAC)
Lossless Transform Audio Compression (LTAC)
Monkey's Audio (APE)
References
External links
Lossless Predictive Audio Compression (LPAC)
The basic principles of lossless audio data compression (TTA) – The Lossless Audio Blog, a lossless audio news and information site.
Audio compression
Lossless audio codecs
Data compression | Lossless predictive audio compression | [
"Engineering"
] | 179 | [
"Audio engineering",
"Audio compression"
] |
312,152 | https://en.wikipedia.org/wiki/Spontaneous%20process | In thermodynamics, a spontaneous process is a process which occurs without any external input to the system. A more technical definition is the time-evolution of a system in which it releases free energy and it moves to a lower, more thermodynamically stable energy state (closer to thermodynamic equilibrium). The sign convention for free energy change follows the general convention for thermodynamic measurements, in which a release of free energy from the system corresponds to a negative change in the free energy of the system and a positive change in the free energy of the surroundings.
Depending on the nature of the process, the free energy is determined differently. For example, the Gibbs free energy change is used when considering processes that occur under constant pressure and temperature conditions, whereas the Helmholtz free energy change is used when considering processes that occur under constant volume and temperature conditions. The value and even the sign of both free energy changes can depend upon the temperature and pressure or volume.
Because spontaneous processes are characterized by a decrease in the system's free energy, they do not need to be driven by an outside source of energy.
For cases involving an isolated system where no energy is exchanged with the surroundings, spontaneous processes are characterized by an increase in entropy.
A spontaneous reaction is a chemical reaction which is a spontaneous process under the conditions of interest.
Overview
In general, the spontaneity of a process only determines whether or not a process can occur and makes no indication as to whether or not the process will occur at an observable rate. In other words, spontaneity is a necessary, but not sufficient, condition for a process to actually occur. Furthermore, spontaneity makes no implication as to the speed at which the spontaneous process may occur - just because a process is spontaneous does not mean it will happen quickly (or at all).
As an example, the conversion of a diamond into graphite is a spontaneous process at room temperature and pressure. Despite being spontaneous, this process does not occur since the energy to break the strong carbon-carbon bonds is larger than the release in free energy. Another way to explain this would be that even though the conversion of diamond into graphite is thermodynamically feasible and spontaneous even at room temperature, the high activation energy of this reaction renders it too slow to observe.
Using free energy to determine spontaneity
For a process that occurs at constant temperature and pressure, spontaneity can be determined using the change in Gibbs free energy, which is given by:
ΔG = ΔH − TΔS
where T is the absolute temperature and the sign of ΔG depends on the signs of the changes in enthalpy (ΔH) and entropy (ΔS). If these two signs are the same (both positive or both negative), then the sign of ΔG will change from positive to negative (or vice versa) at the temperature T = ΔH/ΔS.
In cases where ΔG is:
negative, the process is spontaneous and may proceed in the forward direction as written.
positive, the process is non-spontaneous as written, but it may proceed spontaneously in the reverse direction.
zero, the process is at equilibrium, with no net change taking place over time.
This set of rules can be used to determine four distinct cases by examining the signs of the ΔS and ΔH.
When ΔS > 0 and ΔH < 0, the process is always spontaneous as written.
When ΔS < 0 and ΔH > 0, the process is never spontaneous, but the reverse process is always spontaneous.
When ΔS > 0 and ΔH > 0, the process will be spontaneous at high temperatures and non-spontaneous at low temperatures.
When ΔS < 0 and ΔH < 0, the process will be spontaneous at low temperatures and non-spontaneous at high temperatures.
For the latter two cases, the temperature at which the spontaneity changes will be determined by the relative magnitudes of ΔS and ΔH.
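As an illustration of these rules (using made-up values, not data for any particular reaction), a short Python sketch evaluating ΔG = ΔH − TΔS and the crossover temperature T = ΔH/ΔS:

def delta_g(delta_h, delta_s, temperature):
    """delta_h in J/mol, delta_s in J/(mol*K), temperature in K."""
    return delta_h - temperature * delta_s

# Example: an endothermic process with a positive entropy change (the third case above).
dH = 40000.0                 # J/mol, illustrative value
dS = 120.0                   # J/(mol*K), illustrative value
t_crossover = dH / dS        # about 333 K; the process is spontaneous above this temperature

for T in (300.0, 350.0):
    dG = delta_g(dH, dS, T)
    print(T, dG, "spontaneous" if dG < 0 else "non-spontaneous")
# At 300 K, dG = +4000 J/mol (non-spontaneous); at 350 K, dG = -2000 J/mol (spontaneous).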
Using entropy to determine spontaneity
When using the entropy change of a process to assess spontaneity, it is important to carefully consider the definition of the system and surroundings. The second law of thermodynamics states that a process involving an isolated system will be spontaneous if the entropy of the system increases over time. For open or closed systems, however, the statement must be modified to say that the total entropy of the combined system and surroundings must increase, or,
ΔS_total = ΔS_system + ΔS_surroundings > 0
This criterion can then be used to explain how it is possible for the entropy of an open or closed system to decrease during a spontaneous process. A decrease in system entropy can only occur spontaneously if the entropy change of the surroundings is both positive in sign and has a larger magnitude than the entropy change of the system:
ΔS_surroundings > 0
and
|ΔS_surroundings| > |ΔS_system|
In many processes, the increase in entropy of the surroundings is accomplished via heat transfer from the system to the surroundings (i.e. an exothermic process).
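For a process at constant temperature and pressure this can be made quantitative (a standard relation, stated here under the idealisation that the heat is transferred to the surroundings at the constant temperature T): the surroundings receive the heat released by the system, so
ΔS_surroundings = q_surroundings / T = −ΔH_system / T
and an exothermic process (ΔH_system < 0) therefore increases the entropy of the surroundings, which can outweigh a decrease in the entropy of the system.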
See also
Endergonic reaction reactions which are not spontaneous at standard temperature, pressure, and concentrations.
Diffusion spontaneous phenomenon that minimizes Gibbs free energy.
References
Thermodynamics
Chemical thermodynamics
Chemical processes | Spontaneous process | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,037 | [
"Chemical thermodynamics",
"Chemical processes",
"Thermodynamics",
"nan",
"Chemical process engineering",
"Dynamical systems"
] |
312,212 | https://en.wikipedia.org/wiki/Trigram | Trigrams are a special case of the n-gram, where n is 3. They are often used in natural language processing for performing statistical analysis of texts and in cryptography for control and use of ciphers and codes. See results of analysis of "Letter Frequencies in the English Language".
Frequency
Context is very important; varying analysis rankings and percentages are easily derived by drawing from different sample sizes, different authors, or different document types (poetry, science-fiction, technology documentation) and writing levels (stories for children versus adults, military orders, and recipes).
Typical cryptanalytic frequency analysis finds that the 16 most common character-level trigrams in English are:
Because encrypted messages sent by telegraph often omit punctuation and spaces, cryptographic frequency analysis of such messages includes trigrams that straddle word boundaries. This causes trigrams such as "edt" to occur frequently, even though it may never occur in any one word of those messages.
Examples
The sentence "the quick red fox jumps over the lazy brown dog" has the following word-level trigrams:
the quick red
quick red fox
red fox jumps
fox jumps over
jumps over the
over the lazy
the lazy brown
lazy brown dog
And the word-level trigram "the quick red" has the following character-level trigrams (where an underscore "_" marks a space):
the
he_
e_q
_qu
qui
uic
ick
ck_
k_r
_re
red
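A minimal Python sketch (illustrative only) that reproduces both kinds of trigram shown above:

def ngrams(items, n=3):
    """Return the list of n-grams (as tuples) of any sequence."""
    return [tuple(items[i:i + n]) for i in range(len(items) - n + 1)]

sentence = "the quick red fox jumps over the lazy brown dog"

word_trigrams = ngrams(sentence.split())                      # ('the', 'quick', 'red'), ...
char_trigrams = ngrams("the quick red".replace(" ", "_"))     # ('t', 'h', 'e'), ('h', 'e', '_'), ...

print(word_trigrams[0])                        # ('the', 'quick', 'red')
print(["".join(t) for t in char_trigrams])     # ['the', 'he_', 'e_q', ..., 'red']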
References
Natural language processing
Computational linguistics
Speech recognition | Trigram | [
"Technology"
] | 317 | [
"Natural language processing",
"Natural language and computing",
"Computational linguistics"
] |
312,229 | https://en.wikipedia.org/wiki/Chiral%20anomaly | In theoretical physics, a chiral anomaly is the anomalous nonconservation of a chiral current. In everyday terms, it is equivalent to a sealed box that contained equal numbers of left and right-handed bolts, but when opened was found to have more left than right, or vice versa.
Such events are expected to be prohibited according to classical conservation laws, but it is known there must be ways they can be broken, because we have evidence of charge–parity non-conservation ("CP violation"). It is possible that other imbalances have been caused by breaking of a chiral law of this kind. Many physicists suspect that the fact that the observable universe contains more matter than antimatter is caused by a chiral anomaly. Research into chiral symmetry breaking laws is a major endeavor in particle physics research at this time.
Informal introduction
The chiral anomaly originally referred to the anomalous decay rate of the neutral pion, as computed in the current algebra of the chiral model. These calculations suggested that the decay of the pion was suppressed, clearly contradicting experimental results. The nature of the anomalous calculations was first explained in 1969 by Stephen L. Adler and John Stewart Bell & Roman Jackiw. This is now termed the Adler–Bell–Jackiw anomaly of quantum electrodynamics. This is a symmetry of classical electrodynamics that is violated by quantum corrections.
The Adler–Bell–Jackiw anomaly arises in the following way. If one considers the classical (non-quantized) theory of electromagnetism coupled to massless fermions (electrically charged Dirac spinors solving the Dirac equation), one expects to have not just one but two conserved currents: the ordinary electrical current (the vector current), described by the Dirac field bilinear ψ̄γ^μψ, as well as an axial current ψ̄γ^μγ⁵ψ. When moving from the classical theory to the quantum theory, one may compute the quantum corrections to these currents; to first order, these are the one-loop Feynman diagrams. These are famously divergent, and require a regularization to be applied, to obtain the renormalized amplitudes. In order for the renormalization to be meaningful, coherent and consistent, the regularized diagrams must obey the same symmetries as the zero-loop (classical) amplitudes. This is the case for the vector current, but not the axial current: it cannot be regularized in such a way as to preserve the axial symmetry. The axial symmetry of classical electrodynamics is broken by quantum corrections. Formally, the Ward–Takahashi identities of the quantum theory follow from the gauge symmetry of the electromagnetic field; the corresponding identities for the axial current are broken.
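In one common convention (for a single massless Dirac fermion of charge e, in units with ħ = c = 1; the overall sign and numerical factors depend on conventions), the contrast can be written as

\partial_\mu j^\mu = 0, \qquad \partial_\mu j^{\mu 5} = \frac{e^2}{16\pi^2}\,\epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma}

so the vector current remains conserved after quantum corrections, while the axial current acquires an anomalous divergence proportional to E·B.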
At the time that the Adler–Bell–Jackiw anomaly was being explored in physics, there were related developments in differential geometry that appeared to involve the same kinds of expressions. These were not in any way related to quantum corrections of any sort, but rather were the exploration of the global structure of fiber bundles, and specifically, of the Dirac operators on spin structures having curvature forms resembling that of the electromagnetic tensor, both in four and three dimensions (the Chern–Simons theory). After considerable back and forth, it became clear that the structure of the anomaly could be described with bundles with a non-trivial homotopy group, or, in physics lingo, in terms of instantons.
Instantons are a form of topological soliton; they are a solution to the classical field theory, having the property that they are stable and cannot decay (into plane waves, for example). Put differently: conventional field theory is built on the idea of a vacuum – roughly speaking, a flat empty space. Classically, this is the "trivial" solution; all fields vanish. However, one can also arrange the (classical) fields in such a way that they have a non-trivial global configuration. These non-trivial configurations are also candidates for the vacuum, for empty space; yet they are no longer flat or trivial; they contain a twist, the instanton. The quantum theory is able to interact with these configurations; when it does so, it manifests as the chiral anomaly.
In mathematics, non-trivial configurations are found during the study of Dirac operators in their fully generalized setting, namely, on Riemannian manifolds in arbitrary dimensions. Mathematical tasks include finding and classifying structures and configurations. Famous results include the Atiyah–Singer index theorem for Dirac operators. Roughly speaking, the symmetries of Minkowski spacetime, Lorentz invariance, Laplacians, Dirac operators and the U(1)xSU(2)xSU(3) fiber bundles can be taken to be a special case of a far more general setting in differential geometry; the exploration of the various possibilities accounts for much of the excitement in theories such as string theory; the richness of possibilities accounts for a certain perception of lack of progress.
The Adler–Bell–Jackiw anomaly is seen experimentally, in the sense that it describes the decay of the neutral pion, and specifically, the width of the decay of the neutral pion into two photons. The neutral pion itself was discovered in the 1940s; its decay rate (width) was correctly estimated by J. Steinberger in 1949. The correct form of the anomalous divergence of the axial current is obtained by Schwinger in 1951 in a 2D model of electromagnetism and massless fermions. That the decay of the neutral pion is suppressed in the current algebra analysis of the chiral model is obtained by Sutherland and Veltman in 1967. An analysis and resolution of this anomalous result is provided by Adler and Bell & Jackiw in 1969. A general structure of the anomalies is discussed by Bardeen in 1969.
The quark model of the pion indicates it is a bound state of a quark and an anti-quark. However, the quantum numbers, including parity and angular momentum, taken to be conserved, prohibit the decay of the pion, at least in the zero-loop calculations (quite simply, the amplitudes vanish.) If the quarks are assumed to be massive, not massless, then a chirality-violating decay is allowed; however, it is not of the correct size. (Chirality is not a constant of motion of massive spinors; they will change handedness as they propagate, and so mass is itself a chiral symmetry-breaking term. The contribution of the mass is given by the Sutherland and Veltman result; it is termed "PCAC", the partially conserved axial current.) The Adler–Bell–Jackiw analysis provided in 1969 (as well as the earlier forms by Steinberger and Schwinger), do provide the correct decay width for the neutral pion.
Besides explaining the decay of the pion, the anomaly has a second very important role. The one-loop amplitude includes a factor that counts the total number of distinct quark species that can circulate in the loop. In order to get the correct decay width, the quarks must come in exactly three colors, and not four or more. In this way, the anomaly plays an important role in constraining the Standard Model. It provides a direct physical measurement of the number of quark colors that exist in nature.
Current day research is focused on similar phenomena in different settings, including non-trivial topological configurations of the electroweak theory, that is, the sphalerons. Other applications include the hypothetical non-conservation of baryon number in GUTs and other theories.
General discussion
In some theories of fermions with chiral symmetry, the quantization may lead to the breaking of this (global) chiral symmetry. In that case, the charge associated with the chiral symmetry is not conserved. The non-conservation happens in a process of tunneling from one vacuum to another. Such a process is called an instanton.
In the case of a symmetry related to the conservation of a fermionic particle number, one may understand the creation of such particles as follows. The definition of a particle is different in the two vacuum states between which the tunneling occurs; therefore a state of no particles in one vacuum corresponds to a state with some particles in the other vacuum. In particular, there is a Dirac sea of fermions and, when such a tunneling happens, it causes the energy levels of the sea fermions to gradually shift upwards for the particles and downwards for the anti-particles, or vice versa. This means particles which once belonged to the Dirac sea become real (positive energy) particles and particle creation happens.
Technically, in the path integral formulation, an anomalous symmetry is a symmetry of the action , but not of the measure and therefore not of the generating functional
of the quantized theory ( is Planck's action-quantum divided by 2π). The measure consists of a part depending on the fermion field and a part depending on its complex conjugate . The transformations of both parts under a chiral symmetry do not cancel in general. Note that if is a Dirac fermion, then the chiral symmetry can be written as where is the chiral gamma matrix acting on . From the formula for one also sees explicitly that in the classical limit, anomalies don't come into play, since in this limit only the extrema of remain relevant.
The anomaly is proportional to the instanton number of a gauge field to which the fermions are coupled. (Note that the gauge symmetry is always non-anomalous and is exactly respected, as is required for the theory to be consistent.)
Calculation
The chiral anomaly can be calculated exactly by one-loop Feynman diagrams, e.g. Steinberger's "triangle diagram", contributing to the pion decays, and . The amplitude for this process can be calculated directly from the change in the measure of the fermionic fields under the chiral transformation.
Wess and Zumino developed a set of conditions on how the partition function ought to behave under gauge transformations called the Wess–Zumino consistency condition.
Fujikawa derived this anomaly using the correspondence between functional determinants and the partition function using the Atiyah–Singer index theorem. See Fujikawa's method.
An example: baryon number non-conservation
The Standard Model of electroweak interactions has all the necessary ingredients for successful baryogenesis, although these interactions have never been observed and may be insufficient to explain the total baryon number of the observed universe if the initial baryon number of the universe at the time of the Big Bang is zero. Beyond the violation of charge conjugation and CP violation (charge+parity), baryonic charge violation appears through the Adler–Bell–Jackiw anomaly of the group.
Baryons are not conserved by the usual electroweak interactions due to quantum chiral anomaly. The classic electroweak Lagrangian conserves baryonic charge. Quarks always enter in bilinear combinations , so that a quark can disappear only in collision with an antiquark. In other words, the classical baryonic current is conserved:
However, quantum corrections known as the sphaleron destroy this conservation law: instead of zero in the right hand side of this equation, there is a non-vanishing quantum term,
where is a numerical constant vanishing for ℏ =0,
and the gauge field strength is given by the expression
Electroweak sphalerons can only change the baryon and/or lepton number by 3 or multiples of 3 (collision of three baryons into three leptons/antileptons and vice versa).
An important fact is that the anomalous current non-conservation is proportional to the total derivative of a vector operator, (this is non-vanishing due to instanton configurations of the gauge field, which are pure gauge at the infinity), where the anomalous current is
which is the Hodge dual of the Chern–Simons 3-form.
Geometric form
In the language of differential forms, to any self-dual curvature form we may assign the abelian 4-form . Chern–Weil theory shows that this 4-form is locally but not globally exact, with potential given by the Chern–Simons 3-form locally:
.
Again, this is true only on a single chart, and is false for the global form unless the instanton number vanishes.
To proceed further, we attach a "point at infinity" onto to yield , and use the clutching construction to chart principal A-bundles, with one chart on the neighborhood of and a second on . The thickening around , where these charts intersect, is trivial, so their intersection is essentially . Thus instantons are classified by the third homotopy group , which for is simply the third 3-sphere group .
The divergence of the baryon number current is (ignoring numerical constants)
,
and the instanton number is
.
See also
Anomaly (physics)
Chiral magnetic effect
Global anomaly
Gravitational anomaly
Strong CP problem
References
Further reading
Published articles
Textbooks
Preprints
Anomalies (physics)
Quantum chromodynamics
Standard Model
Conservation laws | Chiral anomaly | [
"Physics"
] | 2,731 | [
"Standard Model",
"Equations of physics",
"Conservation laws",
"Particle physics",
"Symmetry",
"Physics theorems"
] |
312,249 | https://en.wikipedia.org/wiki/Bryophyte | Bryophytes () are a group of land plants (embryophytes), sometimes treated as a taxonomic division, that contains three groups of non-vascular land plants: the liverworts, hornworts, and mosses. In the strict sense, the division Bryophyta consists of the mosses only. Bryophytes are characteristically limited in size and prefer moist habitats although some species can survive in drier environments. The bryophytes consist of about 20,000 plant species. Bryophytes produce enclosed reproductive structures (gametangia and sporangia), but they do not produce flowers or seeds. They reproduce sexually by spores and asexually by fragmentation or the production of gemmae.
Though bryophytes were considered a paraphyletic group in recent years, almost all of the most recent phylogenetic evidence supports the monophyly of this group, as originally classified by Wilhelm Schimper in 1879.
The term bryophyte comes .
Features
The defining features of bryophytes are:
Their life cycles are dominated by a multicellular haploid gametophyte stage
Their sporophytes are diploid and unbranched
They do not have a true vascular tissue containing lignin (although some have specialized tissues for the transport of water)
Habitat
Bryophytes exist in a wide variety of habitats. They can be found growing in a range of temperatures (cold arctics and in hot deserts), elevations (sea-level to alpine), and moisture (dry deserts to wet rain forests). Bryophytes can grow where vascularized plants cannot because they do not depend on roots for uptake of nutrients from soil. Bryophytes can survive on rocks and bare soil.
Life cycle
Like all land plants (embryophytes), bryophytes have life cycles with alternation of generations. In each cycle, a haploid gametophyte, each of whose cells contains a fixed number of unpaired chromosomes, alternates with a diploid sporophyte, whose cells contain two sets of paired chromosomes. Gametophytes produce haploid sperm and eggs which fuse to form diploid zygotes that grow into sporophytes. Sporophytes produce haploid spores by meiosis, that grow into gametophytes.
Bryophytes are gametophyte dominant, meaning that the more prominent, longer-lived plant is the haploid gametophyte. The diploid sporophytes appear only occasionally and remain attached to and nutritionally dependent on the gametophyte. In bryophytes, the sporophytes are always unbranched and produce a single sporangium (spore producing capsule), but each gametophyte can give rise to several sporophytes at once.
Liverworts, mosses and hornworts spend most of their lives as gametophytes. Gametangia (gamete-producing organs), archegonia and antheridia, are produced on the gametophytes, sometimes at the tips of shoots, in the axils of leaves or hidden under thalli. Some bryophytes, such as the liverwort Marchantia, create elaborate structures to bear the gametangia that are called gametangiophores. Sperm are flagellated and must swim from the antheridia that produce them to archegonia which may be on a different plant. Arthropods can assist in transfer of sperm.
Fertilized eggs become zygotes, which develop into sporophyte embryos inside the archegonia. Mature sporophytes remain attached to the gametophyte. They consist of a stalk called a seta and a single sporangium or capsule. Inside the sporangium, haploid spores are produced by meiosis. These are dispersed, most commonly by wind, and if they land in a suitable environment can develop into a new gametophyte. Thus bryophytes disperse by a combination of swimming sperm and spores, in a manner similar to lycophytes, ferns and other cryptogams.
The sporophyte develops differently in the three groups. Both mosses and hornworts have a meristem zone where cell division occurs. In hornworts, the meristem starts at the base where the foot ends, and the division of cells pushes the sporophyte body upwards. In mosses, the meristem is located between the capsule and the top of the stalk (seta), and produces cells downward, elongating the stalk and elevating the capsule. In liverworts the meristem is absent and the elongation of the sporophyte is caused almost exclusively by cell expansion.
Sexuality
The arrangement of antheridia and archegonia on an individual bryophyte plant is usually constant within a species, although in some species it may depend on environmental conditions. The main division is between species in which the antheridia and archegonia occur on the same plant and those in which they occur on different plants. The term monoicous may be used where antheridia and archegonia occur on the same gametophyte and the term dioicous where they occur on different gametophytes.
In seed plants, "monoecious" is used where flowers with anthers (microsporangia) and flowers with ovules (megasporangia) occur on the same sporophyte and "dioecious" where they occur on different sporophytes. These terms occasionally may be used instead of "monoicous" and "dioicous" to describe bryophyte gametophytes. "Monoecious" and "monoicous" are both derived from the Greek for "one house", "dioecious" and "dioicous" from the Greek for two houses. The use of the "-oicy" terminology refers to the gametophyte sexuality of bryophytes as distinct from the sporophyte sexuality of seed plants.
Monoicous plants are necessarily hermaphroditic, meaning that the same plant produces gametes of both sexes. The exact arrangement of the antheridia and archegonia in monoicous plants varies. They may be borne on different shoots (autoicous), on the same shoot but not together in a common structure (paroicous or paroecious), or together in a common "inflorescence" (synoicous or synoecious). Dioicous plants are unisexual, meaning that an individual plant has only one sex. All four patterns (autoicous, paroicous, synoicous and dioicous) occur in species of the moss genus Bryum.
Classification and phylogeny
Traditionally, all living land plants without vascular tissues were classified in a single taxonomic group, often a division (or phylum). The term "Bryophyta" was first suggested by Braun in 1864. As early as 1879, the term Bryophyta was used by German bryologist Wilhelm Schimper to describe a group containing all three bryophyte clades (though at the time, hornworts were considered part of the liverworts). G.M. Smith placed this group between Algae and Pteridophyta. Although a 2005 study supported this traditional monophyletic view, by 2010 a broad consensus had emerged among systematists that bryophytes as a whole are not a natural group (i.e., are paraphyletic). However, a 2014 study concluded that these previous phylogenies, which were based on nucleic acid sequences, were subject to composition biases, and that, furthermore, phylogenies based on amino acid sequences suggested that the bryophytes are monophyletic after all. Since then, partially thanks to a proliferation of genomic and transcriptomic datasets, almost all phylogenetics studies based on nuclear and chloroplastic sequences have concluded that the bryophytes form a monophyletic group. Nevertheless, phylogenies based on mitochondrial sequences fail to support the monophyletic view.
The three bryophyte clades are the Marchantiophyta (liverworts), Bryophyta (mosses) and Anthocerotophyta (hornworts). However, it has been proposed that these clades are de-ranked to the classes Marchantiopsida, Bryopsida, and Anthocerotopsida, respectively. There is now strong evidence that the liverworts and mosses belong to a monophyletic clade, called Setaphyta.
Monophyletic view
The favored model, based on amino acids phylogenies, indicates bryophytes as a monophyletic group:
Consistent with this view, compared to other living land plants, all three lineages lack vascular tissue containing lignin and branched sporophytes bearing multiple sporangia. The prominence of the gametophyte in the life cycle is also a shared feature of the three bryophyte lineages (extant vascular plants are all sporophyte dominant). However, if this phylogeny is correct, then the complex sporophyte of living vascular plants might have evolved independently of the simpler unbranched sporophyte present in bryophytes. Furthermore, this view implies that stomata evolved only once in plant evolution, before being subsequently lost in the liverworts.
Paraphyletic view
In this alternative view, the Setaphyta grouping is retained, but hornworts instead are sister to vascular plants. (Another paraphyletic view involves hornworts branching out first.)
Traditional morphology
Traditionally, when basing classifications on morphological characters, bryophytes have been distinguished by their lack of vascular structure. However, this distinction is problematic, firstly because some of the earliest-diverging (but now extinct) non-bryophytes, such as the horneophytes, did not have true vascular tissue, and secondly because many mosses have well-developed water-conducting vessels. A more useful distinction may lie in the structure of their sporophytes. In bryophytes, the sporophyte is a simple unbranched structure with a single spore-forming organ (sporangium), whereas in all other land plants, the polysporangiophytes, the sporophyte is branched and carries many sporangia. The contrast is shown in the cladogram below:
Evolution
There have probably been several different terrestrialization events, in which originally aquatic organisms colonized the land, just within the lineage of the Viridiplantae. Between 510 and 630 million years ago, however, land plants emerged within the green algae. Molecular phylogenetic studies conclude that bryophytes are the earliest diverging lineages of the extant land plants. They provide insights into the migration of plants from aquatic environments to land. A number of physical features link bryophytes to both land plants and aquatic plants.
Similarities to algae and vascular plants
Green algae, bryophytes and vascular plants all have chlorophyll a and b, and the chloroplast structures are similar. Like green algae and land plants, bryophytes also produce starch stored in the plastids and contain cellulose in their walls. Distinct adaptations observed in bryophytes have allowed plants to colonize Earth's terrestrial environments. To prevent desiccation of plant tissues in a terrestrial environment, a waxy cuticle covering the soft tissue of the plant may be present, providing protection. In hornworts and mosses, stomata provide gas exchange between the atmosphere and an internal intercellular space system. The development of gametangia provided further protection specifically for gametes, the zygote and the developing sporophyte. The bryophytes and vascular plants (embryophytes) also have embryonic development which is not seen in green algae. While bryophytes have no truly vascularized tissue, they do have organs that are specialized for transport of water and other specific functions, analogous for example to the functions of leaves and stems in vascular land plants.
Bryophytes depend on water for reproduction and survival. In common with ferns and lycophytes, a thin layer of water is required on the surface of the plant to enable the movement of the flagellated sperm between gametophytes and the fertilization of an egg.
Comparative morphology
Summary of the morphological characteristics of the gametophytes of the three groups of bryophytes:
Summary of the morphological characteristics of the sporophytes of the three groups of bryophytes:
Uses
Environmental
Characteristics of bryophytes make them useful to the environment. Depending on the specific plant texture, bryophytes have been shown to help improve the water retention and air space within soil. Bryophytes are used in pollution studies to indicate soil pollution (such as the presence of heavy metals), air pollution, and UV-B radiation. Gardens in Japan are designed with moss to create peaceful sanctuaries. Some bryophytes have been found to produce natural pesticides. The liverwort, Plagiochila, produces a chemical that is poisonous to mice. Other bryophytes produce chemicals that are antifeedants which protect them from being eaten by slugs. When Phythium sphagnum is sprinkled on the soil of germinating seeds, it inhibits growth of "damping off fungus" which would otherwise kill young seedlings.
Commercial
Peat is a fuel produced from dried bryophytes, typically Sphagnum. Bryophytes' antibiotic properties and ability to retain water make them a useful packaging material for vegetables, flowers, and bulbs. Also, because of its antibiotic properties, Sphagnum was used as a surgical dressing in World War I.
See also
Plant sexuality
List of British county and local bryophyte floras
Thallophyta
References
Bibliography
External links
Andrew's Moss Site Photos of bryophytes
27-May-2013 Centuries-old frozen plants revived, 400-year-old bryophyte specimens left behind by retreating glaciers in Canada are brought back to life in the laboratory.
Magill, R. E., ed. (1990). Glossarium polyglottum bryologiae. A multilingual glossary for bryology. Monographs in Systematic Botany from the Missouri Botanical Garden, v. 33, 297 pp. Online version: Internet Archive.
Cryptogams
Paraphyletic groups | Bryophyte | [
"Biology"
] | 3,061 | [
"Plants",
"Cryptogams",
"Bryophytes",
"Paraphyletic groups",
"Phylogenetics",
"Eukaryotes"
] |
312,249 | https://en.wikipedia.org/wiki/Partition%20function%20%28number%20theory%29 | In number theory, the partition function p(n) represents the number of possible partitions of a non-negative integer n. For instance, p(4) = 5 because the integer 4 has the five partitions 1 + 1 + 1 + 1, 1 + 1 + 2, 2 + 2, 1 + 3, and 4.
No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument. The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal number powers of its argument.
Srinivasa Ramanujan first discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of ends in the digit 4 or 9, the number of partitions of will be divisible by 5.
Definition and examples
For a positive integer n, p(n) is the number of distinct ways of representing n as a sum of positive integers. For the purposes of this definition, the order of the terms in the sum is irrelevant: two sums with the same terms in a different order are not considered to be distinct.
By convention p(0) = 1, as there is one way (the empty sum) of representing zero as a sum of positive integers. Furthermore p(n) = 0 when n is negative.
The first few values of the partition function, starting with p(0) = 1, are:
1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, ...
Some exact values of p(n) for larger values of n include p(100) = 190,569,292.
Generating function
The generating function for p(n) is given by
The equality between the products on the first and second lines of this formula
is obtained by expanding each factor into the geometric series
To see that the expanded product equals the sum on the first line,
apply the distributive law to the product. This expands the product into a sum of monomials of the form for some sequence of coefficients
, only finitely many of which can be non-zero.
The exponent of the term is , and this sum can be interpreted as a representation of as a partition into copies of each number . Therefore, the number of terms of the product that have exponent is exactly , the same as the coefficient of in the sum on the left.
Therefore, the sum equals the product.
The function that appears in the denominator in the third and fourth lines of the formula is the Euler function. The equality between the product on the first line and the formulas in the third and fourth lines is Euler's pentagonal number theorem.
The exponents of in these lines are the pentagonal numbers for (generalized somewhat from the usual pentagonal numbers, which come from the same formula for the positive values The pattern of positive and negative signs in the third line comes from the term in the fourth line: even choices of produce positive terms, and odd choices produce negative terms.
More generally, the generating function for the partitions of into numbers selected from a set of positive integers can be found by taking only those terms in the first product for which . This result is due to Leonhard Euler. The formulation of Euler's generating function is a special case of a -Pochhammer symbol and is similar to the product formulation of many modular forms, and specifically the Dedekind eta function.
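The product formula can be turned directly into an algorithm: multiplying out the factors $1/(1-x^k) = 1 + x^k + x^{2k} + \cdots$ one at a time is the same as a dynamic program that admits parts of size k one size at a time. The following Python sketch is illustrative only (the function name is chosen for this example); it computes the coefficients p(0), ..., p(n) of the generating function $\sum_{n \ge 0} p(n)x^n = \prod_{k \ge 1}(1-x^k)^{-1}$:

    def partition_counts(n):
        # p[m] holds the number of partitions of m using the part sizes considered so far.
        p = [0] * (n + 1)
        p[0] = 1                      # the empty partition of zero
        for k in range(1, n + 1):     # allow parts of size k, one size at a time
            for m in range(k, n + 1): # equivalent to multiplying by 1/(1 - x^k)
                p[m] += p[m - k]
        return p

    print(partition_counts(10))
    # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]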
Recurrence relations
The same sequence of pentagonal numbers appears in a recurrence relation for the partition function:
As base cases, is taken to equal , and is taken to be zero for negative . Although the sum on the right side appears infinite, it has only finitely many nonzero terms,
coming from the nonzero values of in the range
The recurrence relation can also be written in the equivalent form
Another recurrence relation for can be given in terms of the sum of divisors function :
If denotes the number of partitions of with no repeated parts then it follows by splitting each partition into its even parts and odd parts, and dividing the even parts by two, that
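As a concrete illustration of the pentagonal-number recurrence $p(n) = \sum_{k \ge 1} (-1)^{k+1}\left[p\!\left(n - \tfrac{k(3k-1)}{2}\right) + p\!\left(n - \tfrac{k(3k+1)}{2}\right)\right]$ (with p(m) = 0 for negative m), the following Python sketch computes p(n) exactly; it is an informal example rather than code from any particular source, and only the finitely many terms with non-negative argument contribute:

    def partitions_pentagonal(n):
        # p[m] = number of partitions of m, built up with Euler's pentagonal recurrence.
        p = [1] + [0] * n                    # base case p(0) = 1
        for m in range(1, n + 1):
            total, k = 0, 1
            while True:
                g1 = k * (3 * k - 1) // 2    # generalized pentagonal numbers
                g2 = k * (3 * k + 1) // 2
                if g1 > m and g2 > m:
                    break
                sign = 1 if k % 2 == 1 else -1
                if g1 <= m:
                    total += sign * p[m - g1]
                if g2 <= m:
                    total += sign * p[m - g2]
                k += 1
            p[m] = total
        return p[n]

    print([partitions_pentagonal(m) for m in range(11)])
    # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
    print(partitions_pentagonal(100))        # 190569292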
Congruences
Srinivasa Ramanujan is credited with discovering that the partition function has nontrivial patterns in modular arithmetic.
For instance the number of partitions is divisible by five whenever the decimal representation of ends in the digit 4 or 9, as expressed by the congruence
For instance, the number of partitions for the integer 4 is 5.
For the integer 9, the number of partitions is 30; for 14 there are 135 partitions. This congruence is implied by the more general identity
also by Ramanujan, where the notation denotes the product defined by
A short proof of this result can be obtained from the partition function generating function.
Ramanujan also discovered congruences modulo 7 and 11:
The first one comes from Ramanujan's identity
Since 5, 7, and 11 are consecutive primes, one might think that there would be an analogous congruence for the next prime 13, for some . However, there is no congruence of the form for any prime b other than 5, 7, or 11. Instead, to obtain a congruence, the argument of should take the form for some . In the 1960s, A. O. L. Atkin of the University of Illinois at Chicago discovered additional congruences of this form for small prime moduli. For example:
Ken Ono proved that there are such congruences for every prime modulus greater than 3. Later, Ahlgren and Ono showed there are partition congruences modulo every integer coprime to 6.
Approximation formulas
Approximation formulas exist that are faster to calculate than the exact formula given above.
An asymptotic expression for p(n) is given by
$p(n) \sim \frac{1}{4n\sqrt{3}} \exp\left(\pi \sqrt{\frac{2n}{3}}\right)$
as $n \to \infty$.
This asymptotic formula was first obtained by G. H. Hardy and Ramanujan in 1918 and independently by J. V. Uspensky in 1920. Considering , the asymptotic formula gives about , reasonably close to the exact answer given above (1.415% larger than the true value).
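The quality of this leading asymptotic term can be checked numerically. The sketch below is illustrative only and not from any particular source; it reuses a simple dynamic program for the exact values and compares them with the leading Hardy–Ramanujan term:

    import math

    def exact_p(n):
        # exact values via the generating-function dynamic program
        p = [1] + [0] * n
        for k in range(1, n + 1):
            for m in range(k, n + 1):
                p[m] += p[m - k]
        return p[n]

    def asymptotic_p(n):
        # leading Hardy-Ramanujan term
        return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

    for n in (10, 50, 100):
        exact, approx = exact_p(n), asymptotic_p(n)
        print(n, exact, round(approx), f"relative error {approx / exact - 1:+.3%}")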
Hardy and Ramanujan obtained an asymptotic expansion with this approximation as the first term:
where
Here, the notation means that the sum is taken only over the values of that are relatively prime to . The function is a Dedekind sum.
The error after terms is of the order of the next term, and may be taken to be of the order of . As an example, Hardy and Ramanujan showed that is the nearest integer to the sum of the first terms of the series.
In 1937, Hans Rademacher was able to improve on Hardy and Ramanujan's results by providing a convergent series expression for . It is
The proof of Rademacher's formula involves Ford circles, Farey sequences, modular symmetry and the Dedekind eta function.
It may be shown that the th term of Rademacher's series is of the order
so that the first term gives the Hardy–Ramanujan asymptotic approximation.
published an elementary proof of the asymptotic formula for .
Techniques for implementing the Hardy–Ramanujan–Rademacher formula efficiently on a computer are discussed by , who shows that can be computed in time for any . This is near-optimal in that it matches the number of digits of the result. The largest value of the partition function computed exactly is , which has slightly more than 11 billion digits.
Strict partition function
Definition and properties
A partition in which no part occurs more than once is called strict, or is said to be a partition into distinct parts. The function q(n) gives the number of these strict partitions of the given sum n. For example, q(3) = 2 because the partitions 3 and 1 + 2 are strict, while the third partition 1 + 1 + 1 of 3 has repeated parts. The number q(n) is also equal to the number of partitions of n in which only odd summands are permitted.
Generating function
The generating function for the numbers q(n) is given by a simple infinite product:
where the notation represents the Pochhammer symbol. From this formula, one may easily obtain the first few terms, starting with q(0) = 1: 1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, ...
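A similar dynamic program gives the strict partition numbers: each part size may now be used at most once, which corresponds to multiplying out factors $(1 + x^k)$ instead of $1/(1-x^k)$. The following Python sketch is illustrative only; it computes q(n) and, as a check of the equality with partitions into odd parts mentioned above, counts those as well:

    def strict_partitions(n):
        # q[m] = partitions of m into distinct parts; product of (1 + x^k)
        q = [1] + [0] * n
        for k in range(1, n + 1):
            for m in range(n, k - 1, -1):   # descending, so each part size is used at most once
                q[m] += q[m - k]
        return q

    def odd_part_partitions(n):
        # partitions of m into (possibly repeated) odd parts; product over odd k of 1/(1 - x^k)
        p = [1] + [0] * n
        for k in range(1, n + 1, 2):
            for m in range(k, n + 1):
                p[m] += p[m - k]
        return p

    print(strict_partitions(10))     # [1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10]
    print(odd_part_partitions(10))   # the same list, illustrating Euler's theorem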
This series may also be written in terms of theta functions as
where
and
In comparison, the generating function of the regular partition numbers p(n) has this identity with respect to the theta function:
Identities about strict partition numbers
The following identity is valid for the Pochhammer products:
From this identity, the following formula follows:
Therefore those two formulas can be used to synthesize the number sequence p(n):
In the following, two examples are worked out explicitly:
Restricted partition function
More generally, it is possible to consider partitions restricted to only elements of a subset A of the natural numbers (for example a restriction on the maximum value of the parts), or with a restriction on the number of parts or the maximum difference between parts. Each particular restriction gives rise to an associated partition function with specific properties. Some common examples are given below.
Euler and Glaisher's theorem
Two important examples are the partitions restricted to only odd integer parts or only even integer parts, with the corresponding partition functions often denoted and .
A theorem from Euler shows that the number of strict partitions is equal to the number of partitions with only odd parts: for all n, . This is generalized as Glaisher's theorem, which states that the number of partitions with no more than d-1 repetitions of any part is equal to the number of partitions with no part divisible by d.
Gaussian binomial coefficient
If we denote the number of partitions of n into at most M parts, with each part less than or equal to N, then its generating function is the following Gaussian binomial coefficient:
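Such box-restricted partitions (at most M parts, each at most N) are counted by the coefficients of the Gaussian binomial coefficient with parameters M + N and M. The dynamic program below is an informal sketch with made-up function names; it counts the partitions directly and uses the fact that setting q = 1 in the q-binomial recovers the ordinary binomial coefficient as a consistency check:

    from math import comb

    def box_partition_counts(max_parts, max_size):
        # dp[j][m] = partitions of m into exactly j parts, each part <= max_size
        total = max_parts * max_size
        dp = [[0] * (total + 1) for _ in range(max_parts + 1)]
        dp[0][0] = 1
        for k in range(1, max_size + 1):           # admit parts of size k
            for j in range(1, max_parts + 1):
                for m in range(k, total + 1):
                    dp[j][m] += dp[j - 1][m - k]   # append one more part of size k
        # coefficient of q^m in the Gaussian binomial with parameters (max_parts + max_size, max_parts)
        return [sum(dp[j][m] for j in range(max_parts + 1)) for m in range(total + 1)]

    coeffs = box_partition_counts(3, 4)
    print(coeffs)                    # coefficients of the q-binomial for M = 3, N = 4
    print(sum(coeffs), comb(7, 3))   # setting q = 1 recovers the ordinary binomial: 35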
Asymptotics
Some general results on the asymptotic properties of restricted partition functions are known. If pA(n) is the partition function of partitions restricted to only elements of a subset A of the natural numbers, then:
If A possesses positive natural density α then , with
and conversely if this asymptotic property holds for pA(n) then A has natural density α. This result was stated, with a sketch of proof, by Erdős in 1942.
If A is a finite set, this analysis does not apply (the density of a finite set is zero). If A has k elements whose greatest common divisor is 1, then
References
External links
First 4096 values of the partition function
Arithmetic functions
Integer sequences
Integer partitions | Partition function (number theory) | [
"Mathematics"
] | 2,152 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Arithmetic functions",
"Mathematical objects",
"Combinatorics",
"Integer partitions",
"Numbers",
"Number theory"
] |
312,252 | https://en.wikipedia.org/wiki/Partition%20function%20%28statistical%20mechanics%29 | In physics, a partition function describes the statistical properties of a system in thermodynamic equilibrium. Partition functions are functions of the thermodynamic state variables, such as the temperature and volume. Most of the aggregate thermodynamic variables of the system, such as the total energy, free energy, entropy, and pressure, can be expressed in terms of the partition function or its derivatives. The partition function is dimensionless.
Each partition function is constructed to represent a particular statistical ensemble (which, in turn, corresponds to a particular free energy). The most common statistical ensembles have named partition functions. The canonical partition function applies to a canonical ensemble, in which the system is allowed to exchange heat with the environment at fixed temperature, volume, and number of particles. The grand canonical partition function applies to a grand canonical ensemble, in which the system can exchange both heat and particles with the environment, at fixed temperature, volume, and chemical potential. Other types of partition functions can be defined for different circumstances; see partition function (mathematics) for generalizations. The partition function has many physical meanings, as discussed in Meaning and significance.
Canonical partition function
Definition
Initially, let us assume that a thermodynamically large system is in thermal contact with the environment, with a temperature T, and both the volume of the system and the number of constituent particles are fixed. A collection of this kind of system comprises an ensemble called a canonical ensemble. The appropriate mathematical expression for the canonical partition function depends on the degrees of freedom of the system, whether the context is classical mechanics or quantum mechanics, and whether the spectrum of states is discrete or continuous.
Classical discrete system
For a canonical ensemble that is classical and discrete, the canonical partition function is defined as
$Z = \sum_{i} e^{-\beta E_i}$
where
is the index for the microstates of the system;
is Euler's number;
is the thermodynamic beta, defined as where is the Boltzmann constant;
is the total energy of the system in the respective microstate.
The exponential factor is otherwise known as the Boltzmann factor.
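As a minimal numerical illustration of this definition (a toy example, not tied to any particular physical system; the function names are chosen for this sketch), the following Python code evaluates Z and the resulting Boltzmann probabilities for a small set of microstate energies:

    import math

    k_B = 1.380649e-23          # Boltzmann constant in J/K

    def canonical_partition(energies, T):
        """Z = sum over microstates of exp(-beta * E_i) for a discrete energy list."""
        beta = 1.0 / (k_B * T)
        boltzmann = [math.exp(-beta * E) for E in energies]
        Z = sum(boltzmann)
        probabilities = [w / Z for w in boltzmann]
        return Z, probabilities

    # Two-level system with a gap of 0.02 eV at room temperature (illustrative numbers).
    eV = 1.602176634e-19
    Z, probs = canonical_partition([0.0, 0.02 * eV], T=300.0)
    print(Z, probs)   # the probabilities sum to 1 by construction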
Classical continuous system
In classical mechanics, the position and momentum variables of a particle can vary continuously, so the set of microstates is actually uncountable. In classical statistical mechanics, it is rather inaccurate to express the partition function as a sum of discrete terms. In this case we must describe the partition function using an integral rather than a sum. For a canonical ensemble that is classical and continuous, the canonical partition function is defined as
where
is the Planck constant;
is the thermodynamic beta, defined as ;
is the Hamiltonian of the system;
is the canonical position;
is the canonical momentum.
To make it into a dimensionless quantity, we must divide it by h, which is some quantity with units of action (usually taken to be the Planck constant).
Classical continuous system (multiple identical particles)
For a gas of identical classical non-interacting particles in three dimensions, the partition function is
where
is the Planck constant;
is the thermodynamic beta, defined as ;
is the index for the particles of the system;
is the Hamiltonian of a respective particle;
is the canonical position of the respective particle;
is the canonical momentum of the respective particle;
is shorthand notation to indicate that and are vectors in three-dimensional space.
is the classical continuous partition function of a single particle as given in the previous section.
The reason for the factorial factor N! is discussed below. The extra constant factor introduced in the denominator was introduced because, unlike the discrete form, the continuous form shown above is not dimensionless. As stated in the previous section, to make it into a dimensionless quantity, we must divide it by h3N (where h is usually taken to be the Planck constant).
Quantum mechanical discrete system
For a canonical ensemble that is quantum mechanical and discrete, the canonical partition function is defined as the trace of the Boltzmann factor:
$Z = \operatorname{tr}\left(e^{-\beta \hat{H}}\right)$
where:
is the trace of a matrix;
is the thermodynamic beta, defined as ;
is the Hamiltonian operator.
The dimension of is the number of energy eigenstates of the system.
Quantum mechanical continuous system
For a canonical ensemble that is quantum mechanical and continuous, the canonical partition function is defined as
where:
is the Planck constant;
is the thermodynamic beta, defined as ;
is the Hamiltonian operator;
is the canonical position;
is the canonical momentum.
In systems with multiple quantum states s sharing the same energy Es, it is said that the energy levels of the system are degenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed by j) as follows:
where gj is the degeneracy factor, or number of quantum states s that have the same energy level defined by Ej = Es.
The above treatment applies to quantum statistical mechanics, where a physical system inside a finite-sized box will typically have a discrete set of energy eigenstates, which we can use as the states s above. In quantum mechanics, the partition function can be more formally written as a trace over the state space (which is independent of the choice of basis):
where is the quantum Hamiltonian operator. The exponential of an operator can be defined using the exponential power series.
The classical form of Z is recovered when the trace is expressed in terms of coherent states and when quantum-mechanical uncertainties in the position and momentum of a particle are regarded as negligible. Formally, using bra–ket notation, one inserts under the trace for each degree of freedom the identity:
where is a normalised Gaussian wavepacket centered at position x and momentum p. Thus
A coherent state is an approximate eigenstate of both operators and , hence also of the Hamiltonian , with errors of the size of the uncertainties. If and can be regarded as zero, the action of reduces to multiplication by the classical Hamiltonian, and reduces to the classical configuration integral.
Connection to probability theory
For simplicity, we will use the discrete form of the partition function in this section. Our results will apply equally well to the continuous form.
Consider a system S embedded into a heat bath B. Let the total energy of both systems be E. Let pi denote the probability that the system S is in a particular microstate, i, with energy Ei. According to the fundamental postulate of statistical mechanics (which states that all attainable microstates of a system are equally probable), the probability pi will be inversely proportional to the number of microstates of the total closed system (S, B) in which S is in microstate i with energy Ei. Equivalently, pi will be proportional to the number of microstates of the heat bath B with energy :
Assuming that the heat bath's internal energy is much larger than the energy of S (), we can Taylor-expand to first order in Ei and use the thermodynamic relation , where here , are the entropy and temperature of the bath respectively:
Thus
Since the total probability to find the system in some microstate (the sum of all pi) must be equal to 1, we know that the constant of proportionality must be the normalization constant, and so, we can define the partition function to be this constant:
Calculating the thermodynamic total energy
In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply the expected value, or ensemble average for the energy, which is the sum of the microstate energies weighted by their probabilities:
or, equivalently,
Incidentally, one should note that if the microstate energies depend on a parameter λ in the manner
then the expected value of A is
This provides us with a method for calculating the expected values of many microscopic quantities. We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then set λ to zero in the final expression. This is analogous to the source field method used in the path integral formulation of quantum field theory.
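Numerically, the relation between the partition function and the mean energy, ⟨E⟩ = −∂ ln Z/∂β, can be checked directly by comparing a finite-difference derivative of ln Z with the explicit probability-weighted average. The sketch below is illustrative only, using arbitrary energy units with the Boltzmann constant set to 1:

    import math

    def log_Z(energies, beta):
        return math.log(sum(math.exp(-beta * E) for E in energies))

    def mean_energy_direct(energies, beta):
        weights = [math.exp(-beta * E) for E in energies]
        Z = sum(weights)
        return sum(E * w for E, w in zip(energies, weights)) / Z

    energies = [0.0, 1.0, 1.0, 2.5]   # toy spectrum with a degenerate middle level
    beta, h = 0.7, 1e-6

    # <E> from the probability-weighted sum ...
    direct = mean_energy_direct(energies, beta)
    # ... and from -d(ln Z)/d(beta), estimated by central finite differences
    from_derivative = -(log_Z(energies, beta + h) - log_Z(energies, beta - h)) / (2 * h)

    print(direct, from_derivative)    # the two values agree to numerical precision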
Relation to thermodynamic variables
In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations.
As we have already seen, the thermodynamic energy is
The variance in the energy (or "energy fluctuation") is
The heat capacity is
In general, consider the extensive variable X and intensive variable Y where X and Y form a pair of conjugate variables. In ensembles where Y is fixed (and X is allowed to fluctuate), then the average value of X will be:
The sign will depend on the specific definitions of the variables X and Y. An example would be X = volume and Y = pressure. Additionally, the variance in X will be
In the special case of entropy, entropy is given by
where A is the Helmholtz free energy, defined as A = U − TS, where U is the total energy and S is the entropy, so that
Furthermore, the heat capacity can be expressed as
Partition functions of subsystems
Suppose a system is subdivided into N sub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems are ζ1, ζ2, ..., ζN, then the partition function of the entire system is the product of the individual partition functions:
If the sub-systems have the same physical properties, then their partition functions are equal, ζ1 = ζ2 = ... = ζ, in which case
However, there is a well-known exception to this rule. If the sub-systems are actually identical particles, in the quantum mechanical sense that they are impossible to distinguish even in principle, the total partition function must be divided by N! (N factorial):
This is to ensure that we do not "over-count" the number of microstates. While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as the Gibbs paradox.
Meaning and significance
It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, consider what goes into it. The partition function is a function of the temperature T and the microstate energies E1, E2, E3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system.
The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probability Ps that the system occupies microstate s is
$P_s = \frac{e^{-\beta E_s}}{Z}$
Thus, as shown above, the partition function plays the role of a normalizing constant (note that it does not depend on s), ensuring that the probabilities sum up to one:
This is the reason for calling Z the "partition function": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. Other partition functions for different ensembles divide up the probabilities based on other macrostate variables. As an example: the partition function for the isothermal-isobaric ensemble, the generalized Boltzmann distribution, divides up probabilities based on particle number, pressure, and temperature. The energy is replaced by the characteristic potential of that ensemble, the Gibbs Free Energy. The letter Z stands for the German word Zustandssumme, "sum over states". The usefulness of the partition function stems from the fact that the macroscopic thermodynamic quantities of a system can be related to its microscopic details through the derivatives of its partition function. Finding the partition function is also equivalent to performing a Laplace transform of the density of states function from the energy domain to the β domain, and the inverse Laplace transform of the partition function reclaims the state density function of energies.
Grand canonical partition function
We can define a grand canonical partition function for a grand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperature T, and a chemical potential μ.
The grand canonical partition function, denoted by $\mathcal{Z}$, is the following sum over microstates:
$\mathcal{Z}(\mu, V, T) = \sum_{i} \exp\left(\beta(\mu N_i - E_i)\right)$
Here, each microstate is labelled by , and has total particle number and total energy . This partition function is closely related to the grand potential, , by the relation
This can be contrasted to the canonical partition function above, which is related instead to the Helmholtz free energy.
It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in state :
An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi–Dirac statistics for fermions, Bose–Einstein statistics for bosons), however it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases.
The grand partition function is sometimes written (equivalently) in terms of alternate variables as
where is known as the absolute activity (or fugacity) and is the canonical partition function.
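As a simple illustration of how the grand canonical partition function yields quantum statistics (a textbook special case, sketched here informally with function names chosen for the example): for a single fermionic orbital of energy ε, the only microstates are occupation 0 and 1, so the sum over microstates has just two terms, and the mean occupation reproduces the Fermi–Dirac distribution:

    import math

    def fermi_orbital(eps, mu, beta):
        # Grand canonical sum over the two microstates N = 0 and N = 1:
        #   Xi = 1 + exp(beta * (mu - eps))
        Xi = 1.0 + math.exp(beta * (mu - eps))
        # Mean occupation <N> = exp(beta*(mu - eps)) / Xi = 1 / (exp(beta*(eps - mu)) + 1)
        mean_N = math.exp(beta * (mu - eps)) / Xi
        return Xi, mean_N

    Xi, n = fermi_orbital(eps=1.2, mu=1.0, beta=5.0)
    print(Xi, n)
    print(1.0 / (math.exp(5.0 * (1.2 - 1.0)) + 1.0))   # same value: the Fermi-Dirac form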
See also
Partition function (mathematics)
Partition function (quantum field theory)
Virial theorem
Widom insertion method
References
Equations of physics | Partition function (statistical mechanics) | [
"Physics",
"Mathematics"
] | 2,910 | [
"Equations of physics",
"Mathematical objects",
"Equations",
"Partition functions",
"Statistical mechanics"
] |
312,255 | https://en.wikipedia.org/wiki/Partition%20function%20%28quantum%20field%20theory%29 | In quantum field theory, partition functions are generating functionals for correlation functions, making them key objects of study in the path integral formalism. They are the imaginary time versions of statistical mechanics partition functions, giving rise to a close connection between these two areas of physics. Partition functions can rarely be solved for exactly, although free theories do admit such solutions. Instead, a perturbative approach is usually implemented, this being equivalent to summing over Feynman diagrams.
Generating functional
Scalar theories
In a -dimensional field theory with a real scalar field and action , the partition function is defined in the path integral formalism as the functional
where is a fictitious source current. It acts as a generating functional for arbitrary n-point correlation functions
The derivatives used here are functional derivatives rather than regular derivatives since they are acting on functionals rather than regular functions. From this it follows that an equivalent expression for the partition function, reminiscent of a power series in source currents, is given by
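The displayed formulas referred to here take, in one common convention, the following form; this is a reference sketch rather than the article's own notation, and signs, factors of i, and normalizations differ between texts:

    Z[J] = \int \mathcal{D}\phi \; \exp\!\left( i S[\phi] + i \int d^d x \, J(x)\,\phi(x) \right),
    \qquad
    \langle \phi(x_1)\cdots\phi(x_n) \rangle
      = \frac{1}{Z[0]} \left. (-i)^n \frac{\delta^n Z[J]}{\delta J(x_1)\cdots \delta J(x_n)} \right|_{J=0}.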
In curved spacetimes there is an added subtlety that must be dealt with due to the fact that the initial vacuum state need not be the same as the final vacuum state. Partition functions can also be constructed for composite operators in the same way as they are for fundamental fields. Correlation functions of these operators can then be calculated as functional derivatives of these functionals. For example, the partition function for a composite operator is given by
Knowing the partition function completely solves the theory since it allows for the direct calculation of all of its correlation functions. However, there are very few cases where the partition function can be calculated exactly. While free theories do admit exact solutions, interacting theories generally do not. Instead the partition function can be evaluated at weak coupling perturbatively, which amounts to regular perturbation theory using Feynman diagrams with insertions on the external legs. The symmetry factors for these types of diagrams differ from those of correlation functions since all external legs have identical insertions that can be interchanged, whereas the external legs of correlation functions are all fixed at specific coordinates and are therefore fixed.
By performing a Wick transformation, the partition function can be expressed in Euclidean spacetime as
where is the Euclidean action and are Euclidean coordinates. This form is closely connected to the partition function in statistical mechanics, especially since the Euclidean Lagrangian is usually bounded from below in which case it can be interpreted as an energy density. It also allows for the interpretation of the exponential factor as a statistical weight for the field configurations, with larger fluctuations in the gradient or field values leading to greater suppression. This connection with statistical mechanics also lends additional intuition for how correlation functions should behave in a quantum field theory.
General theories
Most of the same principles of the scalar case hold for more general theories with additional fields. Each field requires the introduction of its own fictitious current, with antiparticle fields requiring their own separate currents. Acting on the partition function with a derivative of a current brings down its associated field from the exponential, allowing for the construction of arbitrary correlation functions. After differentiation, the currents are set to zero when correlation functions in a vacuum state are desired, but the currents can also be set to take on particular values to yield correlation functions in non-vanishing background fields.
For partition functions with Grassmann valued fermion fields, the sources are also Grassmann valued. For example, a theory with a single Dirac fermion requires the introduction of two Grassmann currents and so that the partition function is
Functional derivatives with respect to give fermion fields while derivatives with respect to give anti-fermion fields in the correlation functions.
Thermal field theories
A thermal field theory at temperature is equivalent in Euclidean formalism to a theory with a compactified temporal direction of length . Partition functions must be modified appropriately by imposing periodicity conditions on the fields and the Euclidean spacetime integrals
This partition function can be taken as the definition of the thermal field theory in imaginary time formalism. Correlation functions are acquired from the partition function through the usual functional derivatives with respect to currents
Free theories
The partition function can be solved exactly in free theories by completing the square in terms of the fields. Since a shift by a constant does not affect the path integral measure, this allows for separating the partition function into a constant of proportionality arising from the path integral, and a second term that only depends on the current. For the scalar theory this yields
where is the position space Feynman propagator
This partition function fully determines the free field theory.
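In one common Euclidean convention the completed-square result referred to above reads as follows; this is offered only as a hedged reference form, since the Minkowski version carries extra factors of i, and the current-independent prefactor Z[0] is the Gaussian path integral with the source switched off:

    Z[J] = Z[0] \, \exp\!\left( \tfrac{1}{2} \int d^d x \, d^d y \; J(x)\, \Delta(x-y)\, J(y) \right),
    \qquad
    \Delta(x-y) = \int \frac{d^d k}{(2\pi)^d} \, \frac{e^{\,i k\cdot(x-y)}}{k^2 + m^2}.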
In the case of a theory with a single free Dirac fermion, completing the square yields a partition function of the form
where is the position space Dirac propagator
References
Further reading
Ashok Das, Field Theory: A Path Integral Approach, 2nd edition, World Scientific (Singapore, 2006); paperback .
Kleinert, Hagen, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore, 2004); paperback (also available online: PDF-files).
Jean Zinn-Justin (2009), Scholarpedia, 4(2): 8674.
Quantum field theory | Partition function (quantum field theory) | [
"Physics"
] | 1,038 | [
"Quantum field theory",
"Quantum mechanics"
] |
312,266 | https://en.wikipedia.org/wiki/Water%20pollution | Water pollution (or aquatic pollution) is the contamination of water bodies, with a negative impact on their uses. It is usually a result of human activities. Water bodies include lakes, rivers, oceans, aquifers, reservoirs and groundwater. Water pollution results when contaminants mix with these water bodies. Contaminants can come from one of four main sources. These are sewage discharges, industrial activities, agricultural activities, and urban runoff including stormwater. Water pollution may affect either surface water or groundwater. This form of pollution can lead to many problems. One is the degradation of aquatic ecosystems. Another is spreading water-borne diseases when people use polluted water for drinking or irrigation. Water pollution also reduces the ecosystem services such as drinking water provided by the water resource.
Sources of water pollution are either point sources or non-point sources. Point sources have one identifiable cause, such as a storm drain, a wastewater treatment plant, or an oil spill. Non-point sources are more diffuse. An example is agricultural runoff. Pollution is the result of the cumulative effect over time. Pollution may take many forms. One is toxic substances such as oil, metals, plastics, pesticides, persistent organic pollutants, and industrial waste products. Another is stressful conditions such as changes of pH, hypoxia or anoxia, increased temperatures, excessive turbidity, or changes of salinity. The introduction of pathogenic organisms is another. Contaminants may include organic and inorganic substances. A common cause of thermal pollution is the use of water as a coolant by power plants and industrial manufacturers.
Control of water pollution requires appropriate infrastructure and management plans as well as legislation. Technology solutions can include improving sanitation, sewage treatment, industrial wastewater treatment, agricultural wastewater treatment, erosion control, sediment control and control of urban runoff (including stormwater management).
Definition
A practical definition of water pollution is: "Water pollution is the addition of substances or energy forms that directly or indirectly alter the nature of the water body in such a manner that negatively affects its legitimate uses." Water is typically referred to as polluted when it is impaired by anthropogenic contaminants. Due to these contaminants, it either no longer supports a certain human use, such as drinking water, or undergoes a marked shift in its ability to support its biotic communities, such as fish.
Contaminants
Contaminants with an origin in sewage
The following compounds can all reach water bodies via raw sewage or even treated sewage discharges:
Various chemical compounds found in personal hygiene and cosmetic products.
Disinfection by-products found in chemically disinfected drinking water (whilst these chemicals can be a pollutant in the water distribution network, they are fairly volatile and therefore not usually found in environmental waters).
Hormones (from animal husbandry and residue from human hormonal contraception methods) and synthetic materials such as phthalates that mimic hormones in their action. These can have adverse impacts even at very low concentrations on the natural biota and potentially on humans if the water is treated and utilized for drinking water.
Insecticides and herbicides, often from agricultural runoff.
Pathogens like Hepatovirus A (HAV may be present in treated wastewater outflows and receiving water bodies but is largely removed during further treatment of drinking water)
Inadequately treated wastewater can convey nutrients, pathogens, heterogenous suspended solids and organic fecal matter.
Pathogens
Bacteria, viruses, protozoans and parasitic worms are examples of pathogens that can be found in wastewater. In practice, indicator organisms are used to investigate pathogenic pollution of water, because detecting the pathogenic organisms themselves in a water sample is difficult and costly owing to their low concentrations. The bacterial indicators of fecal contamination most commonly used for water samples are total coliforms (TC) or fecal coliforms (FC), the latter also referred to as thermotolerant coliforms, such as Escherichia coli.
Pathogens can produce waterborne diseases in either human or animal hosts. Some microorganisms sometimes found in contaminated surface waters that have caused human health problems include Burkholderia pseudomallei, Cryptosporidium parvum, Giardia lamblia, Salmonella, norovirus and other viruses, and parasitic worms including the Schistosoma type.
The source of high levels of pathogens in water bodies can be from human feces (due to open defecation), sewage, blackwater, or manure that has found its way into the water body. The cause for this can be lack of sanitation procedures or poorly functioning on-site sanitation systems (septic tanks, pit latrines), sewage treatment plants without disinfection steps, sanitary sewer overflows and combined sewer overflows (CSOs) during storm events and intensive agriculture (poorly managed livestock operations).
Organic compounds
Organic substances that enter water bodies are often toxic.
Petroleum hydrocarbons, including fuels (gasoline, diesel fuel, jet fuels, and fuel oil) and lubricants (motor oil), and fuel combustion byproducts, from oil spills or storm water runoff
Volatile organic compounds, such as improperly stored industrial solvents. Problematic species are organochlorides such as polychlorinated biphenyl (PCBs) and trichloroethylene, a common solvent.
Per- and polyfluoroalkyl substances (PFAS) are persistent organic pollutants.
Inorganic contaminants
Inorganic water pollutants include:
Ammonia from food processing waste
Heavy metals from motor vehicles (via urban storm water runoff) and acid mine drainage
Nitrates and phosphates, from sewage and agriculture (see nutrient pollution)
Silt (sediment) in runoff from construction sites, sewage, logging, slash-and-burn practices, and other land-clearing sites
Salt: Freshwater salinization is the process of salty runoff contaminating freshwater ecosystems. Human-induced salinization is termed as secondary salinization, with the use of de-icing road salts as the most common form of runoff.
Pharmaceutical pollutants
Environmental persistent pharmaceutical pollutants, which can include various pharmaceutical drugs and their metabolites (see also drug pollution), such as antidepressant drugs, antibiotics or the contraceptive pill.
Metabolites of illicit drugs (see also wastewater epidemiology), for example methamphetamine and ecstasy.
Solid waste and plastics
Solid waste can enter water bodies through untreated sewage, combined sewer overflows, urban runoff, people discarding garbage into the environment, wind carrying municipal solid waste from landfills and so forth. This results in macroscopic pollution (large visible items polluting the water) but also microplastics pollution that is not directly visible. The terms marine debris and marine plastic pollution are used in the context of pollution of oceans.
Microplastics persist in the environment at high levels, particularly in aquatic and marine ecosystems, where they cause water pollution. 35% of all ocean microplastics come from textiles/clothing, primarily due to the erosion of polyester, acrylic, or nylon-based clothing, often during the washing process.
Stormwater, untreated sewage and wind are the primary conduits for microplastics from land to sea. Synthetic fabrics, tyres, and city dust are the most common sources of microplastics. These three sources account for more than 80% of all microplastic contamination.
Types of surface water pollution
Surface water pollution includes pollution of rivers, lakes and oceans. A subset of surface water pollution is marine pollution which affects the oceans. Nutrient pollution refers to contamination by excessive inputs of nutrients.
Globally, about 4.5 billion people do not have safely managed sanitation as of 2017, according to an estimate by the Joint Monitoring Programme for Water Supply and Sanitation. Lack of access to sanitation is concerning and often leads to water pollution, e.g. via the practice of open defecation: during rain events or floods, the human feces are moved from the ground where they were deposited into surface waters. Simple pit latrines may also get flooded during rain events.
As of 2022, Europe and Central Asia account for around 16% of global microplastics discharge into the seas, and although management of plastic waste and its recycling is improving globally, the absolute amount of plastic pollution continues to increase unabated due to the large amount of plastic that is being produced and disposed of. Even if sea plastic pollution were to stop entirely, microplastic contamination of the surface ocean would be projected to continue to increase.
Marine pollution
Nutrient pollution
Thermal pollution
Elevated water temperatures decrease oxygen levels (due to lower levels of dissolved oxygen, as gases are less soluble in warmer liquids), which can kill fish (which may then rot) and alter food chain composition, reduce species biodiversity, and foster invasion by new thermophilic species.
Biological pollution
The introduction of aquatic invasive organisms is a form of water pollution as well. It causes biological pollution.
Groundwater pollution
In many areas of the world, groundwater pollution poses a hazard to the wellbeing of people and ecosystems. One-quarter of the world's population depends on groundwater for drinking, yet concentrated recharging is known to carry short-lived contaminants into carbonate aquifers and jeopardize the purity of those waters.
Pollution from point sources
Point source water pollution refers to contaminants that enter a waterway from a single, identifiable source, such as a pipe or ditch. Examples of sources in this category include discharges from a sewage treatment plant, a factory, or a city storm drain.
The U.S. Clean Water Act (CWA) defines point source for regulatory enforcement purposes (see United States regulation of point source water pollution). The CWA definition of point source was amended in 1987 to include municipal storm sewer systems, as well as industrial storm water, such as from construction sites.
Sewage
Sewage typically consists of 99.9% water and 0.1% solids. Sewage contributes many classes of nutrients that lead to eutrophication; it is a major source of phosphate, for example. Sewage is often contaminated with diverse compounds found in personal hygiene products, cosmetics, pharmaceutical drugs (see also drug pollution), and their metabolites. Water pollution due to environmental persistent pharmaceutical pollutants can have wide-ranging consequences. When sewers overflow during storm events this can lead to water pollution from untreated sewage. Such events are called sanitary sewer overflows or combined sewer overflows.
Industrial wastewater
Industrial processes that use water also produce wastewater. This is called industrial wastewater. Using the US as an example, the main industrial consumers of water (using over 60% of the total consumption) are power plants, petroleum refineries, iron and steel mills, pulp and paper mills, and food processing industries. Some industries discharge chemical wastes, including solvents and heavy metals (which are toxic) and other harmful pollutants.
Industrial wastewater could add the following pollutants to receiving water bodies if the wastewater is not treated and managed properly:
Heavy metals, including mercury, lead, and chromium
Organic matter and nutrients such as food waste: Certain industries (e.g. food processing, slaughterhouse waste, paper fibers, plant material, etc.) discharge high concentrations of BOD, ammonia nitrogen and oil and grease.
Inorganic particles such as sand, grit, metal particles, rubber residues from tires, ceramics, etc.;
Toxins such as pesticides, poisons, herbicides, etc.
Pharmaceuticals, endocrine disrupting compounds, hormones, perfluorinated compounds, siloxanes, drugs of abuse and other hazardous substances
Microplastics such as polyethylene and polypropylene beads, polyester and polyamide
Thermal pollution from power stations and industrial manufacturers
Radionuclides from uranium mining, processing nuclear fuel, operating nuclear reactors, or disposal of radioactive waste.
Some industrial discharges include persistent organic pollutants such as per- and polyfluoroalkyl substances (PFAS).
Oil spills
Pollution from nonpoint sources
Agriculture
Agriculture is a major contributor to water pollution from nonpoint sources. The use of fertilizers as well as surface runoff from farm fields, pastures and feedlots leads to nutrient pollution. In addition to plant-focused agriculture, fish-farming is also a source of pollution. Additionally, agricultural runoff often contains high levels of pesticides.
Atmospheric contributions (air pollution)
Air deposition is a process whereby air pollutants from industrial or natural sources settle into water bodies. The deposition may lead to polluted water near the source, or at distances up to a few thousand miles away. The most frequently observed water pollutants resulting from industrial air deposition are sulfur compounds, nitrogen compounds, mercury compounds, other heavy metals, and some pesticides and industrial by-products. Natural sources of air deposition include forest fires and microbial activity.
Acid rain is caused by emissions of sulfur dioxide and nitrogen oxides, which react with the water molecules in the atmosphere to produce acids. Some governments have made efforts since the 1970s to reduce the release of sulfur dioxide and nitrogen oxides into the atmosphere. The main sources of the sulfur and nitrogen compounds that result in acid rain are anthropogenic, but nitrogen oxides can also be produced naturally by lightning strikes and sulfur dioxide by volcanic eruptions. Acid rain can have harmful effects on plants, aquatic ecosystems and infrastructure.
Carbon dioxide concentrations in the atmosphere have increased since the 1850s due to anthropogenic influences (emissions of greenhouse gases). This leads to ocean acidification and is another form of water pollution from atmospheric contributions.
Sampling, measurements, analysis
Water pollution may be analyzed through several broad categories of methods: physical, chemical and biological. Some methods may be conducted in situ, without sampling, such as temperature. Others involve collection of samples, followed by specialized analytical tests in the laboratory. Standardized, validated analytical test methods, for water and wastewater samples have been published.
Common physical tests of water include temperature, specific conductance (also called electrical conductance, EC, or conductivity), solids concentrations (e.g., total suspended solids (TSS)) and turbidity. Water samples may be examined using analytical chemistry methods. Many published test methods are available for both organic and inorganic compounds. Frequently quantified parameters are pH, BOD, chemical oxygen demand (COD), dissolved oxygen (DO), total hardness, nutrients (nitrogen and phosphorus compounds, e.g. nitrate and orthophosphates), metals (including copper, zinc, cadmium, lead and mercury), oil and grease, total petroleum hydrocarbons (TPH), surfactants and pesticides.
The use of a biomonitor or bioindicator is described as biological monitoring. This refers to the measurement of specific properties of an organism to obtain information on the surrounding physical and chemical environment. Biological testing involves the use of plant, animal or microbial indicators to monitor the health of an aquatic ecosystem. They are any biological species or group of species whose function, population, or status can reveal what degree of ecosystem or environmental integrity is present. One example of a group of bio-indicators are the copepods and other small water crustaceans that are present in many water bodies. Such organisms can be monitored for changes (biochemical, physiological, or behavioral) that may indicate a problem within their ecosystem.
Impacts
Ecosystems
Water pollution is a major global environmental problem because it can result in the degradation of all aquatic ecosystems – fresh, coastal, and ocean waters. The specific contaminants leading to pollution in water include a wide spectrum of chemicals, pathogens, and physical changes such as elevated temperature. While many of the chemicals and substances that are regulated may be naturally occurring (calcium, sodium, iron, manganese, etc.) the concentration usually determines what is a natural component of water and what is a contaminant. High concentrations of naturally occurring substances can have negative impacts on aquatic flora and fauna. Oxygen-depleting substances may be natural materials such as plant matter (e.g. leaves and grass) as well as human-made chemicals. Other natural and anthropogenic substances may cause turbidity (cloudiness) which blocks light and disrupts plant growth, and clogs the gills of some fish species.
Public health and waterborne diseases
A study published in 2017 stated that "polluted water spread gastrointestinal diseases and parasitic infections and killed 1.8 million people" (these are also referred to as waterborne diseases). Persistent exposure to pollutants through water are environmental health hazards, which can increase the likelihood for one to develop cancer or other diseases.
Eutrophication from nitrogen pollution
Nitrogen pollution can cause eutrophication, especially in lakes. Eutrophication is an increase in the concentration of chemical nutrients in an ecosystem to an extent that increases the primary productivity of the ecosystem. Subsequent negative environmental effects such as anoxia (oxygen depletion) and severe reductions in water quality may occur. This can harm fish and other animal populations.
Ocean acidification
Ocean acidification is another impact of water pollution. Ocean acidification is the ongoing decrease in the pH value of the Earth's oceans, caused by the uptake of carbon dioxide () from the atmosphere.
Prevalence
Water pollution is a problem in developing countries as well as in developed countries.
By country
For example, water pollution in India and China is widespread. About 90 percent of the water in the cities of China is polluted.
Control and reduction
Pollution control philosophy
One aspect of environmental protection is mandatory regulations, which are only part of the solution. Other important tools in pollution control include environmental education, economic instruments, market forces, and stricter enforcement. Standards can be "precise" (for a defined quantifiable minimum or maximum value for a pollutant), or "imprecise" which would require the use of Best available technology (BAT) or Best practicable environmental option (BPEO). Market-based economic instruments for pollution control can include charges, subsidies, deposit or refund schemes, the creation of a market in pollution credits, and enforcement incentives.
Moving towards a holistic approach in chemical pollution control combines the following approaches: Integrated control measures, trans-boundary considerations, complementary and supplementary control measures, life-cycle considerations, the impacts of chemical mixtures.
Control of water pollution requires appropriate infrastructure and management plans. The infrastructure may include wastewater treatment plants, for example sewage treatment plants and industrial wastewater treatment plants. Agricultural wastewater treatment for farms, and erosion control at construction sites can also help prevent water pollution. Effective control of urban runoff includes reducing speed and quantity of flow.
Water pollution requires ongoing evaluation and revision of water resource policy at all levels (international down to individual aquifers and wells).
Sanitation and sewage treatment
Municipal wastewater can be treated by centralized sewage treatment plants, decentralized wastewater systems, nature-based solutions or in onsite sewage facilities and septic tanks. For example, waste stabilization ponds can be a low cost treatment option for sewage. UV light (sunlight) can be used to degrade some pollutants in waste stabilization ponds (sewage lagoons). The use of safely managed sanitation services would prevent water pollution caused by lack of access to sanitation.
Well-designed and operated systems (i.e., with secondary treatment stages or more advanced tertiary treatment) can remove 90 percent or more of the pollutant load in sewage. Some plants have additional systems to remove nutrients and pathogens. While such advanced treatment techniques will undoubtedly reduce the discharges of micropollutants, they can also result in large financial costs, as well as environmentally undesirable increases in energy consumption and greenhouse gas emissions.
Sewer overflows during storm events can be addressed by timely maintenance and upgrades of the sewerage system. In the US, cities with large combined systems have not pursued system-wide separation projects due to the high cost, but have implemented partial separation projects and green infrastructure approaches. In some cases municipalities have installed additional CSO storage facilities or expanded sewage treatment capacity.
Industrial wastewater treatment
Agricultural wastewater treatment
Management of erosion and sediment control
Sediment from construction sites can be managed by installation of erosion controls, such as mulching and hydroseeding, and sediment controls, such as sediment basins and silt fences. Discharge of toxic chemicals such as motor fuels and concrete washout can be prevented by use of spill prevention and control plans, and specially designed containers (e.g. for concrete washout) and structures such as overflow controls and diversion berms.
Erosion caused by deforestation and changes in hydrology (soil loss due to water runoff) also results in loss of sediment and, potentially, water pollution.
Control of urban runoff (storm water)
Legislation
Philippines
In the Philippines, Republic Act 9275, otherwise known as the Philippine Clean Water Act of 2004, is the governing law on wastewater management. It states that it is the country's policy to protect, preserve and revive the quality of its fresh, brackish and marine waters, for which wastewater management plays a particular role.
United Kingdom
In 2024, the Royal Academy of Engineering released a study into the effects of wastewater on public health in the United Kingdom. The study gained media attention, with comments from the UK's leading health professionals, including Sir Chris Whitty. It outlined 15 recommendations for various UK bodies to dramatically reduce public health risks by improving water quality in the country's waterways, such as rivers and lakes.
After the release of the report, The Guardian newspaper interviewed Whitty, who stated that improving water quality and sewage treatment should be given a high level of importance as a "public health priority". He compared it to the eradication of cholera in the country in the 19th century, which followed improvements to the sewage treatment network. The study also identified that sewage concentrations in rivers were highest when water flows were low, as well as at times of flooding or heavy rainfall. While heavy rainfall had always been associated with sewage overflows into streams and rivers, the British media went as far as to warn parents of the dangers of paddling in shallow rivers during warm weather.
Whitty's comments came after the study revealed that the UK was experiencing a growth in the number of people that were using coastal and inland waters recreationally. This could be connected to a growing interest in activities such as open water swimming or other water sports. Despite this growth in recreation, poor water quality meant some were becoming unwell during events. Most notably, the 2024 Paris Olympics had to delay numerous swimming-focused events like the triathlon due to high levels of sewage in the River Seine.
United States
See also
Aquatic toxicology
Human impacts on the environment
Phytoremediation
Pollution
Trophic state index (water quality indicator for lakes)
VOC contamination of groundwater
Water resources management
Water security
References
External links
Tackling global water pollution – UN Environment Programme
Aquatic ecology
Aquifers
Environmental science
Water and the environment
Water supply
Sanitation
Articles containing video clips | Water pollution | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 4,717 | [
"Hydrology",
"Water pollution",
"Aquifers",
"Ecosystems",
"nan",
"Environmental engineering",
"Aquatic ecology",
"Water supply"
] |
312,268 | https://en.wikipedia.org/wiki/Huntingdon%20Life%20Sciences | Huntingdon Life Sciences (HLS) was a contract research organisation (CRO) organized in Maryland and headquartered in East Millstone, New Jersey. It was founded in 1951 in Cambridgeshire, England. It had two laboratories in the United Kingdom and one in the United States. With over 1,600 employees, it was the largest non-clinical CRO in Europe and the third-largest non-clinical CRO in the world. In September 2015, Huntingdon Life Sciences, Harlan Laboratories, GFA, NDA Analytics and LSR associates merged into Envigo (now Inotiv).
HLS provided contract research organization services in pre-clinical and non-clinical biological safety evaluation research. As with other major CROs operating in this business area, its major business is serving the pharmaceutical industry. However, more than a third of its business came from non-pharmaceutical sources, such as the crop protection industry which accounts for around 60% of its non-pharmaceutical business.
HLS had two facilities in the UK (Huntingdon, Cambridgeshire and Eye, Suffolk), one in the USA (East Millstone, New Jersey) and an office in Japan (Tokyo).
The company was one of the largest participants in the international primate trade and has been criticized for its animal testing practices, most specifically animal testing on non-human primates as well as on beagles. The Stop Huntingdon Animal Cruelty campaign was formed with the goal of shutting down the company due to animal rights violations.
History
Huntingdon Life Sciences was founded in the UK in 1951 as Nutrition Research Co. Ltd., a commercial organisation that initially focused on nutrition, veterinary, and biochemical research. The original facilities were split over two locations; the main offices were within Cromwell House in the town of Huntingdon; and the main laboratories were at the Hartford Field Station, just over a mile away. It then became involved with pharmaceuticals, food additives, and industrial and consumer chemicals. In 1959 it changed its name to Nutritional Research Unit Ltd. The company benefited in the early 1960s from increased government regulatory testing requirements, especially in the pharmaceutical industry. In 1964, it was acquired by Becton Dickinson.
In April 1983, Becton Dickinson created Huntingdon Research Centre PLC. It then offered four million American depositary receipts (ADRs) for sale at $15 each, representing the company's entire interest in Huntingdon. In 1985, as it began to expand its operations, the company changed its name to Huntingdon International Holdings plc. That year, it established Huntingdon Analytical Services Inc. to conduct business in the United States.
To augment its CRO business, Huntingdon acquired Minnesota's Twin City Testing Laboratory and affiliated companies in 1985, followed by the acquisition of Nebraska Testing Corporation in 1986; Travis Laboratories and Kansas City Test Laboratory Inc. in 1989; and Southwestern Laboratories, Inc. in 1990. Huntingdon also diversified its operations, primarily in the United States, becoming involved in engineering and environmental services.
In 1987, HLS acquired Northern Engineering and Testing. In 1988, it acquired Empire Soils Investigations, Chen Associates, and Asteco Inc. In 1988, HLS was floated on the London Stock Exchange and in 1989 obtained a listing on the New York Stock Exchange. In 1990, Huntingdon acquired the St. Louis branch of Envirodyne Engineers and Whiteley Holdings. In 1991, it acquired Austin Research Engineers, followed by Travers Morgan.
By the early 1990s, Huntingdon was organised into three business groups: the Life Sciences Group, the Engineering/Environmental Group, and the Travers Morgan Group, which offered engineering and environmental consulting services outside of the United States. However, only the Life Sciences Group showed long-term promise. Travers Morgan was allowed to lapse into insolvency, control passed into other hands, and Huntingdon wrote off the investment. In 1995, the engineering and environmental businesses were sold to Maxim Engineers of Dallas, Texas.
To bolster its CRO business and reinforce its U.S. presence, in 1995, Huntingdon acquired the toxicology business of Applied Biosciences International for $32.5 million in cash, plus the Leicester Clinical Research Centre. The deal included a U.S. laboratory located near Princeton, New Jersey, as well as two British facilities. In 1997, Huntingdon International Holdings changed its name to Huntingdon Life Sciences Group. The U.K. subsidiary, Huntingdon Research Centre, changed its name to Huntingdon Life Sciences, while the U.S. business operated as Huntingdon Life Sciences Inc.
In 2002, HLS moved its financial centre to the United States and incorporated in Maryland as Life Sciences Research.
In 2009, HLS was acquired.
In September 2015, Huntingdon Life Sciences, Harlan Laboratories, GFA, NDA Analytics and LSR associates merged into Envigo (now Inotiv).
Staff numbers
The latest available public figures, from 2008, show that HLS employed more than 1,600 staff across all of its facilities. They break down as:
Trade bodies and associations
Association of the British Pharmaceutical Industry (ABPI)
Bioindustry Association (BIA)
Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC)
Society of Environmental Toxicology and Chemistry (SETAC)
Fund for the Replacement of Animals in Medical Experiments (FRAME)
Institute of Animal Technology (IAT)
Understanding Animal Research (UAR)
Honours and awards
Agrow Awards Best Supporting Role 2007
Queens Award for Export Achievement 1982.
Use of animals
HLS uses animals in the biomedical research it conducts for its customers. The most recent numbers released state that in the UK around 60,000 animals are used annually. This number is broken down by species:
Controversies
Huntingdon is criticised by animal rights and animal welfare groups for using animals in research, for instances of animal abuse and for the wide range of substances it tests on animals, particularly non-medical products. It is claimed by SHAC that 500 animals died every day at HLS (182,500 a year), a figure at odds with HLS' published numbers.
Huntingdon's labs were infiltrated by undercover animal rights activists in 1997 in the UK and in 1998 in the US.
In 1997, film secretly recorded inside HLS in the UK by BUAV and subsequently broadcast on Channel 4 television as "It's a Dog's Life", showed serious breaches of animal-protection laws, including a beagle puppy being held up by the scruff of the neck and repeatedly punched in the face, and animals being taunted.
The laboratory technicians responsible were suspended from HLS the day after the broadcast. All three were later dismissed. Two of the men seen hitting and shaking dogs were found guilty under the Protection of Animals Act 1911 of "cruelly terrifying dogs." It was the first time laboratory technicians had been prosecuted for animal cruelty in the UK. HLS admitted that the technicians' behaviour was deplorable and a new management team was introduced the following year which, according to The Daily Telegraph, "introduced greater openness and new training methods."
In 1998, an undercover investigator for People for the Ethical Treatment of Animals (PETA) used a camera hidden in her glasses to make 50 hours of videotape of the HLS laboratories in Princeton, New Jersey. She also made four 90-minute audiotapes, photocopied 8,000 company documents, and copied the company's client list. According to PETA some of the film she shot showed a monkey being dissected while still alive and conscious. The president of HLS in New Jersey, Alan Staple, said the monkey was alive but sedated during the dissection.
A 2001 article from The Resurgence Trust stated that HLS obtained a "gagging order" in the US that prevents PETA from publicising or talking about any of the information that they discovered. The order also prevented PETA from communicating with the American Department of Agriculture, which had been going to investigate the evidence.
Protests and intimidation
The Stop Huntingdon Animal Cruelty (SHAC) campaign is based in the UK and US, and has aimed to close the company down since 1999. According to its website, the campaign's methods are restricted to non-violent direct action, as well as lobbying and demonstrations. It targets not only HLS itself, but any company, institution, or person allegedly doing business with the laboratory, whether as clients, suppliers, or even disposal and cleaning services, and the employees of those companies.
Despite its stated non-violent position, SHAC members have been convicted of crimes of violence against HLS employees. On 25 October 2010 five SHAC members received prison sentences for threatening HLS staff. SHAC has also been accused of encouraging arson and violent assault. An HLS director was assaulted in front of his child. HLS managing director Brian Cass was sent a mousetrap primed with razor blades, and in February 2001 was attacked by three men armed with pickaxe handles and CS gas. Another businessman with links to HLS was attacked and knocked unconscious adjacent to a barn his assailants had set alight.
Both SHAC and Animal Liberation Front activists have been alleged to have been engaged in harassment and intimidation, including issuing hoax bomb threats and death threats. In 2003, Daniel Andreas San Diego was accused by the American FBI of "ecoterrorism" in support of SHAC in the San Francisco Area; however, there is some question whether his "terrorist plot" was an entrapment operation by the American FBI. In 2008 seven of SHAC's senior members were described by prosecutors as "some of the key figures in the Animal Liberation Front" and found guilty of conspiracy to blackmail HLS.
Effect of campaign
The campaign against HLS led to its share price crashing, the Royal Bank of Scotland closing its bank account, and the British government arranging for the Bank of England to give the company an account. In 2000, HLS was dropped from the New York Stock Exchange because its market capitalization had fallen below NYSE limits.
Government response
From 2006, The Daily Telegraph reports, the British Government took the decision to tackle "the problem of animal rights extremism." On 1 May 2007, a police campaign called Operation Achilles was enacted against SHAC, a series of raids involving 700 police officers in England, Amsterdam, and Belgium. In total, 32 people linked to the group were arrested, and seven leading members of SHAC, including Greg Avery, were found guilty of blackmail. Police estimated in 2007 that, as a consequence of the operation, "up to three quarters of the most violent activists" were jailed. Der Spiegel writes that the number of attacks on HLS and their business declined drastically but "the movement is by no means dead."
References
Animal testing
Animal rights
Companies based in Cambridgeshire
Companies formerly listed on the London Stock Exchange
Huntingdon
Life sciences industry
Toxicology in the United Kingdom | Huntingdon Life Sciences | [
"Chemistry",
"Biology",
"Environmental_science"
] | 2,208 | [
"Animal testing",
"Toxicology in the United Kingdom",
"Toxicology",
"Life sciences industry"
] |
312,293 | https://en.wikipedia.org/wiki/Liouville%27s%20theorem%20%28complex%20analysis%29 | In complex analysis, Liouville's theorem, named after Joseph Liouville (although the theorem was first proven by Cauchy in 1844), states that every bounded entire function must be constant. That is, every holomorphic function for which there exists a positive number such that for all is constant. Equivalently, non-constant holomorphic functions on have unbounded images.
The theorem is considerably improved by Picard's little theorem, which says that every entire function whose image omits two or more complex numbers must be constant.
Statement
Liouville's theorem: Every holomorphic function $f : \mathbb{C} \to \mathbb{C}$ for which there exists a positive number $M$ such that $|f(z)| \le M$ for all $z \in \mathbb{C}$ is constant.
More succinctly, Liouville's theorem states that every bounded entire function must be constant.
Proof
This important theorem has several proofs.
A standard analytical proof uses the fact that holomorphic functions are analytic.
Another proof uses the mean value property of harmonic functions.
The proof can be adapted to the case where the harmonic function is merely bounded above or below. See Harmonic function#Liouville's theorem.
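As a sketch of the analytic proof mentioned first above (using the same $f$ and $M$ as in the statement): since $f$ is entire it has a power series expansion $f(z) = \sum_{k \ge 0} a_k z^k$ valid on all of $\mathbb{C}$, and Cauchy's estimate on the circle $|z| = r$ gives
$$|a_k| \le \frac{\sup_{|z|=r} |f(z)|}{r^k} \le \frac{M}{r^k}$$
for every $r > 0$. Letting $r \to \infty$ forces $a_k = 0$ for all $k \ge 1$, so $f \equiv a_0$ is constant.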
Corollaries
Fundamental theorem of algebra
There is a short proof of the fundamental theorem of algebra using Liouville's theorem.
No entire function dominates another entire function
A consequence of the theorem is that "genuinely different" entire functions cannot dominate each other, i.e. if $f$ and $g$ are entire, and $|f| \le |g|$ everywhere, then $f = \alpha g$ for some complex number $\alpha$. Consider that for $g = 0$ the theorem is trivial so we assume $g$ is not identically zero. Consider the function $h = f/g$. It is enough to prove that $h$ can be extended to an entire function, in which case the result follows by Liouville's theorem. The holomorphy of $h$ is clear except at the points where $g$ vanishes. But since $h$ is bounded (by 1) and all the zeroes of $g$ are isolated, any singularities must be removable. Thus $h$ can be extended to an entire bounded function, which by Liouville's theorem implies it is constant.
If f is less than or equal to a scalar times its input, then it is linear
Suppose that $f$ is entire and $|f(z)| \le M|z|$ for all $z$, where $M$ is a positive constant. We can apply Cauchy's integral formula; we have that
$$|f'(z)| = \left|\frac{1}{2\pi i}\oint_{C_r} \frac{f(\zeta)}{(\zeta - z)^2}\, d\zeta\right| \le \frac{1}{2\pi}\oint_{C_r} \frac{|f(\zeta)|}{|\zeta - z|^2}\, |d\zeta| \le M I,$$
where $I$ is the value of the remaining integral, which stays bounded as the radius of the contour $C_r$ grows. This shows that $f'$ is bounded and entire, so it must be constant, by Liouville's theorem. Integrating then shows that $f$ is affine and then, by referring back to the original inequality, we have that the constant term is zero.
Non-constant elliptic functions cannot be defined on the complex plane
The theorem can also be used to deduce that the domain of a non-constant elliptic function $f$ cannot be $\mathbb{C}$. Suppose it was. Then, if $\omega_1$ and $\omega_2$ are two periods of $f$ such that $\tfrac{\omega_1}{\omega_2}$ is not real, consider the parallelogram $P$ whose vertices are $0$, $\omega_1$, $\omega_2$, and $\omega_1 + \omega_2$. Then the image of $f$ is equal to $f(P)$. Since $f$ is continuous and $P$ is compact, $f(P)$ is also compact and, therefore, it is bounded. So, $f$ is constant.
The fact that the domain of a non-constant elliptic function cannot be $\mathbb{C}$ is what Liouville actually proved, in 1847, using the theory of elliptic functions. In fact, it was Cauchy who proved Liouville's theorem.
Entire functions have dense images
If $f$ is a non-constant entire function, then its image is dense in $\mathbb{C}$. This might seem to be a much stronger result than Liouville's theorem, but it is actually an easy corollary. If the image of $f$ is not dense, then there is a complex number $w$ and a real number $r > 0$
such that the open disk centered at $w$ with radius $r$ has no element of the image of $f$. Define
$$g(z) = \frac{1}{f(z) - w}.$$
Then $g$ is a bounded entire function, since for all $z$,
$$|g(z)| = \frac{1}{|f(z) - w|} < \frac{1}{r}.$$
So, $g$ is constant, and therefore $f$ is constant.
On compact Riemann surfaces
Any holomorphic function on a compact Riemann surface is necessarily constant.
Let $f$ be holomorphic on a compact Riemann surface $X$. By compactness, there is a point $p_0 \in X$ where $|f|$ attains its maximum. Then we can find a chart from a neighborhood of $p_0$ to the unit disk such that the composition of $f$ with the inverse chart is holomorphic on the unit disk and has a maximum modulus at the image of $p_0$, so it is constant, by the maximum modulus principle.
Remarks
Let $\mathbb{C} \cup \{\infty\}$ be the one-point compactification of the complex plane $\mathbb{C}$. In place of holomorphic functions defined on regions in $\mathbb{C}$, one can consider regions in $\mathbb{C} \cup \{\infty\}$. Viewed this way, the only possible singularity for an entire function $f$, defined on $\mathbb{C}$, is the point $\infty$. If $f$ is bounded in a neighborhood of $\infty$, then $\infty$ is a removable singularity of $f$, i.e. $f$ cannot blow up or behave erratically at $\infty$. In light of the power series expansion, it is not surprising that Liouville's theorem holds.
Similarly, if an entire function $f$ has a pole of order $n$ at $\infty$, that is, it grows in magnitude comparably to $z^n$ in some neighborhood of $\infty$,
then $f$ is a polynomial. This extended version of Liouville's theorem can be more precisely stated: if $|f(z)| \le M|z|^n$ for $|z|$ sufficiently large, then $f$ is a polynomial of degree at most $n$. This can be proved as follows. Again take the Taylor series representation of $f$,
$$f(z) = \sum_{k \ge 0} a_k z^k.$$
The argument used during the proof using Cauchy estimates shows that for all $k \ge 0$ and all sufficiently large $r$,
$$|a_k| \le \frac{M r^n}{r^k} = M r^{n-k}.$$
So, if $k > n$, then
$$|a_k| \le \lim_{r \to \infty} M r^{n-k} = 0.$$
Therefore, $a_k = 0$ for every $k > n$, and $f$ is a polynomial of degree at most $n$.
Liouville's theorem does not extend to the generalizations of complex numbers known as double numbers and dual numbers.
See also
Mittag-Leffler's theorem
References
External links
Theorems in complex analysis
Articles containing proofs
holomorphic functions | Liouville's theorem (complex analysis) | [
"Mathematics"
] | 1,105 | [
"Articles containing proofs",
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |
312,299 | https://en.wikipedia.org/wiki/Avalanche%20photodiode | An avalanche photodiode (APD) is a highly sensitive type of photodiode, which in general are semiconductor diodes that convert light into electricity via interband excitation coupled with impact ionization. APDs use materials and a structure optimised for operating with high reverse bias, approaching the reverse breakdown voltage, such that charge carriers generated by the photoelectric effect are multiplied by an avalanche breakdown; thus they can be used to detect relatively small amounts of light.
From a functional standpoint, they can be regarded as the semiconductor analog of photomultiplier tubes; unlike solar cells, they are not optimised for generating electricity from light but rather for detection of incoming photons. Typical applications for APDs are laser rangefinders, long-range fiber-optic telecommunication, positron emission tomography, and particle physics.
History
The avalanche photodiode was invented by Japanese engineer Jun-ichi Nishizawa in 1952. However, study of avalanche breakdown, micro-plasma defects in silicon and germanium and the investigation of optical detection using p-n junctions predate this patent.
Principle of operation
Photodiodes generally operate by the photoelectric effect, whereby an absorbed photon provides the energy to excite charge carriers in the semiconductor material into a positive and negative pair, which can thus cause a charge flow through the diode. By applying a high reverse bias voltage, this photogenerated current can be multiplied within the diode by the avalanche effect (impact ionization). Thus, the APD can be thought of as applying a high gain to the induced photocurrent.
In general, the higher the reverse voltage, the higher the gain. A standard silicon APD typically can sustain 100–200 V of reverse bias before breakdown, leading to a gain factor of around 100. However, by employing alternative doping and bevelling (structural) techniques compared to traditional APDs, it is possible to create designs where a greater voltage can be applied (> 1500 V) before breakdown is reached, and hence a greater operating gain (> 1000) is achieved.
Among the various expressions for the APD multiplication factor ($M$), an instructive expression is given by the formula
$$M = \frac{1}{1 - \int_0^L \alpha(x)\, dx},$$
where $L$ is the space-charge boundary for electrons, and $\alpha$ is the multiplication coefficient for electrons (and holes). This coefficient has a strong dependence on the applied electric field strength, temperature, and doping profile. Since APD gain varies strongly with the applied reverse bias and temperature, it is necessary to closely monitor the reverse voltage to keep a stable gain.
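To illustrate how steeply the gain rises as the bias approaches breakdown, the following minimal Python sketch evaluates the commonly quoted empirical Miller relation $M(V) = 1/\left(1 - (V/V_{BR})^n\right)$; the breakdown voltage and exponent used here are illustrative assumptions, not parameters of any particular device.

# Minimal sketch of the empirical Miller gain-voltage relation for an APD.
# V_BR (breakdown voltage) and N_EXP (empirical exponent) are assumed,
# illustrative values; real devices require measured parameters.

V_BR = 150.0   # assumed breakdown voltage in volts
N_EXP = 3.0    # assumed empirical exponent (material- and structure-dependent)

def miller_gain(v_bias: float) -> float:
    """Multiplication factor M(V) = 1 / (1 - (V / V_BR)**n), valid for 0 <= V < V_BR."""
    if not 0.0 <= v_bias < V_BR:
        raise ValueError("bias must satisfy 0 <= V < V_BR for this expression")
    return 1.0 / (1.0 - (v_bias / V_BR) ** N_EXP)

if __name__ == "__main__":
    for v in (50.0, 100.0, 140.0, 149.0):
        print(f"V = {v:6.1f} V  ->  M = {miller_gain(v):8.1f}")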
Geiger mode counting
If very high gain is needed ($10^5$ to $10^6$), detectors related to APDs called SPADs (single-photon avalanche diodes) can be used and operated with a reverse voltage above a typical APD's breakdown voltage. In this case, the photodetector needs to have its signal current limited and quickly diminished. Active and passive current-quenching techniques have been used for this purpose. SPADs that operate in this high-gain regime are sometimes referred to as operating in Geiger mode. This mode is particularly useful for single-photon detection, provided that the dark count event rate and afterpulsing probability are sufficiently low.
Materials
In principle, any semiconductor material can be used as a multiplication region:
Silicon will detect in the visible and near infrared, with low multiplication noise (excess noise).
Germanium (Ge) will detect infrared out to a wavelength of 1.7 μm, but has high multiplication noise.
InGaAs will detect out to longer than 1.6 μm and has less multiplication noise than Ge. It is normally used as the absorption region of a heterostructure diode, most typically involving InP as a substrate and as a multiplication layer. This material system is compatible with an absorption window of roughly 0.9–1.7 μm. InGaAs exhibits a high absorption coefficient at the wavelengths appropriate to high-speed telecommunications using optical fibers, so only a few micrometres of InGaAs are required for nearly 100% light absorption. The excess noise factor is low enough to permit a gain-bandwidth product in excess of 100 GHz for a simple InP/InGaAs system, and up to 400 GHz for InGaAs on silicon. Therefore, high-speed operation is possible: commercial devices are available to speeds of at least 10 Gbit/s.
Gallium-nitride–based diodes have been used for operation with ultraviolet light.
HgCdTe-based diodes operate in the infrared, typically at wavelengths up to about 14 μm, but require cooling to reduce dark currents. Very low excess noise can be achieved in this material system.
Structure
APDs are often not constructed as simple p-n junctions but have more complex designs such as p+-i-p-n+.
Performance limits
APD applicability and usefulness depends on many parameters. Two of the larger factors are: quantum efficiency, which indicates how well incident optical photons are absorbed and then used to generate primary charge carriers; and total leakage current, which is the sum of the dark current, photocurrent and noise. Electronic dark-noise components are series and parallel noise. Series noise, which is the effect of shot noise, is basically proportional to the APD capacitance, while the parallel noise is associated with the fluctuations of the APD bulk and surface dark currents.
Gain noise, excess noise factor
Another noise source is the excess noise factor, ENF. It is a multiplicative correction applied to the noise that describes the increase in the statistical noise, specifically Poisson noise, due to the multiplication process. The ENF is defined for any device, such as photomultiplier tubes, silicon solid-state photomultipliers, and APDs, that multiplies a signal, and is sometimes referred to as "gain noise". At a gain $M$, it is denoted by $\mathrm{ENF}(M)$ and can often be expressed as
$$\mathrm{ENF}(M) = \kappa M + \left(2 - \frac{1}{M}\right)(1 - \kappa),$$
where $\kappa$ is the ratio of the hole impact ionization rate to that of electrons. For an electron multiplication device it is given by the hole impact ionization rate divided by the electron impact ionization rate. It is desirable to have a large asymmetry between these rates to minimize $\mathrm{ENF}(M)$, since $\mathrm{ENF}(M)$ is one of the main factors that limit, among other things, the best possible energy resolution obtainable.
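A small Python sketch of the expression above, evaluating the excess noise factor for a few illustrative values of the ionization-rate ratio (the kappa and gain values below are assumptions chosen only for illustration):

# Excess noise factor ENF(M) = kappa*M + (2 - 1/M)*(1 - kappa).
# The kappa and gain values below are illustrative assumptions.

def excess_noise_factor(gain: float, kappa: float) -> float:
    """Excess noise factor for multiplication gain `gain` and ionization-rate ratio `kappa`."""
    return kappa * gain + (2.0 - 1.0 / gain) * (1.0 - kappa)

if __name__ == "__main__":
    for kappa in (0.02, 0.1, 0.5):          # assumed hole/electron ionization-rate ratios
        for gain in (10.0, 50.0, 100.0):    # assumed multiplication factors
            enf = excess_noise_factor(gain, kappa)
            print(f"kappa = {kappa:4.2f}  M = {gain:5.0f}  ENF = {enf:6.2f}")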
Conversion noise, Fano factor
The noise term for an APD may also contain a Fano factor, which is a multiplicative correction applied to the Poisson noise associated with the conversion of the energy deposited by a charged particle to the electron-hole pairs, which is the signal before multiplication. The correction factor describes the decrease in the noise, relative to Poisson statistics, due to the uniformity of conversion process and the absence of, or weak coupling to, bath states in the conversion process. In other words, an "ideal" semiconductor would convert the energy of the charged particle into an exact and reproducible number of electron hole pairs to conserve energy; in reality, however, the energy deposited by the charged particle is divided into the generation of electron hole pairs, the generation of sound, the generation of heat, and the generation of damage or displacement. The existence of these other channels introduces a stochastic process, where the amount of energy deposited into any single process varies from event to event, even if the amount of energy deposited is the same.
Further influences
The underlying physics associated with the excess noise factor (gain noise) and the Fano factor (conversion noise) is very different. However, the application of these factors as multiplicative corrections to the expected Poisson noise is similar. In addition to excess noise, there are limits to device performance associated with the capacitance, transit times and avalanche multiplication time. The capacitance increases with increasing device area and decreasing thickness. The transit times (both electrons and holes) increase with increasing thickness, implying a tradeoff between capacitance and transit time for performance. The avalanche multiplication time times the gain is given to first order by the gain-bandwidth product, which is a function of the device structure and most especially of the ratio of the impact ionization rates $\kappa$.
See also
Avalanche diode
Avalanche breakdown
Single-photon avalanche diode
References
Further reading
Avalanche photodiode – A User Guide
Avalanche Photodiode – Low noise APD receivers
Selecting the right APD
Pulsed Laserdiodes and Avalanche Photodiodes for Industrial Applications
Excelitas Technologies Photonic Detectors
Optical devices
Optical diodes
Particle detectors
Photodetectors
Japanese inventions | Avalanche photodiode | [
"Materials_science",
"Technology",
"Engineering"
] | 1,744 | [
"Glass engineering and science",
"Particle detectors",
"Optical devices",
"Measuring instruments"
] |
312,301 | https://en.wikipedia.org/wiki/Liouville%27s%20theorem%20%28Hamiltonian%29 | In physics, Liouville's theorem, named after the French mathematician Joseph Liouville, is a key theorem in classical statistical and Hamiltonian mechanics. It asserts that the phase-space distribution function is constant along the trajectories of the system—that is that the density of system points in the vicinity of a given system point traveling through phase-space is constant with time. This time-independent density is in statistical mechanics known as the classical a priori probability.
Liouville's theorem applies to conservative systems, that is, systems in which the effects of friction are absent or can be ignored. The general mathematical formulation for such systems is the measure-preserving dynamical system. Liouville's theorem applies when there are degrees of freedom that can be interpreted as positions and momenta; not all measure-preserving dynamical systems have these, but Hamiltonian systems do. The general setting for conjugate position and momentum coordinates is available in the mathematical setting of symplectic geometry. Liouville's theorem ignores the possibility of chemical reactions, where the total number of particles may change over time, or where energy may be transferred to internal degrees of freedom. There are extensions of Liouville's theorem to cover these various generalized settings, including stochastic systems.
Liouville equation
The Liouville equation describes the time evolution of the phase space distribution function. Although the equation is usually referred to as the "Liouville equation", Josiah Willard Gibbs was the first to recognize the importance of this equation as the fundamental equation of statistical mechanics. It is referred to as the Liouville equation because its derivation for non-canonical systems utilises an identity first derived by Liouville in 1838.
Consider a Hamiltonian dynamical system with canonical coordinates $q_i$ and conjugate momenta $p_i$, where $i = 1, \dots, n$. Then the phase space distribution $\rho(p, q)$ determines the probability $\rho(p, q)\, d^n q\, d^n p$ that the system will be found in the infinitesimal phase space volume $d^n q\, d^n p$. The Liouville equation governs the evolution of $\rho(p, q; t)$ in time $t$:
$$\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \sum_{i=1}^{n}\left(\frac{\partial \rho}{\partial q_i}\dot{q}_i + \frac{\partial \rho}{\partial p_i}\dot{p}_i\right) = 0.$$
Time derivatives are denoted by dots, and are evaluated according to Hamilton's equations for the system. This equation demonstrates the conservation of density in phase space (which was Gibbs's name for the theorem). Liouville's theorem states that
The distribution function is constant along any trajectory in phase space.
A proof of Liouville's theorem uses the $n$-dimensional divergence theorem. This proof is based on the fact that the evolution of $\rho$ obeys a $2n$-dimensional version of the continuity equation:
$$\frac{\partial \rho}{\partial t} + \sum_{i=1}^{n}\left(\frac{\partial(\rho \dot{q}_i)}{\partial q_i} + \frac{\partial(\rho \dot{p}_i)}{\partial p_i}\right) = 0.$$
That is, the tuple $(\rho, \rho\dot{q}_i, \rho\dot{p}_i)$ is a conserved current. Notice that the difference between this equation and Liouville's equation are the terms
$$\rho \sum_{i=1}^{n}\left(\frac{\partial \dot{q}_i}{\partial q_i} + \frac{\partial \dot{p}_i}{\partial p_i}\right) = \rho \sum_{i=1}^{n}\left(\frac{\partial^2 H}{\partial q_i\, \partial p_i} - \frac{\partial^2 H}{\partial p_i\, \partial q_i}\right) = 0,$$
where $H$ is the Hamiltonian, and where the derivatives $\dot{q}_i$ and $\dot{p}_i$ have been evaluated using Hamilton's equations of motion. That is, viewing the motion through phase space as a 'fluid flow' of system points, the theorem that the convective derivative of the density, $d\rho/dt$, is zero follows from the equation of continuity by noting that the 'velocity field' $(\dot{q}, \dot{p})$ in phase space has zero divergence (which follows from Hamilton's relations).
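As a quick consistency check of this zero-divergence statement (an illustration only, not part of the original derivation), the following sketch uses SymPy to verify symbolically that the phase-space velocity field of a one-dimensional Hamiltonian $H = p^2/2m + V(q)$ is divergence-free; the potential $V$ is left as an arbitrary symbolic function.

# Symbolic check that the Hamiltonian phase-space flow has zero divergence.
import sympy as sp

q, p = sp.symbols("q p", real=True)
m = sp.Symbol("m", positive=True)
V = sp.Function("V")                  # arbitrary potential, kept symbolic
H = p**2 / (2 * m) + V(q)             # one-dimensional Hamiltonian

q_dot = sp.diff(H, p)                 # Hamilton's equation: dq/dt = dH/dp
p_dot = -sp.diff(H, q)                # Hamilton's equation: dp/dt = -dH/dq

divergence = sp.diff(q_dot, q) + sp.diff(p_dot, p)
print(sp.simplify(divergence))        # prints 0: the flow is incompressible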
Other formulations
Poisson bracket
The theorem above is often restated in terms of the Poisson bracket as
$$\frac{\partial \rho}{\partial t} = -\{\rho, H\},$$
or, in terms of the linear Liouville operator or Liouvillian,
$$\mathrm{i}\hat{L} = \sum_{i=1}^{n}\left(\frac{\partial H}{\partial p_i}\frac{\partial}{\partial q_i} - \frac{\partial H}{\partial q_i}\frac{\partial}{\partial p_i}\right),$$
as
$$\frac{\partial \rho}{\partial t} + \mathrm{i}\hat{L}\rho = 0.$$
Ergodic theory
In ergodic theory and dynamical systems, motivated by the physical considerations given so far, there is a corresponding result also referred to as Liouville's theorem. In Hamiltonian mechanics, the phase space is a smooth manifold that comes naturally equipped with a smooth measure (locally, this measure is the 6n-dimensional Lebesgue measure). The theorem says this smooth measure is invariant under the Hamiltonian flow. More generally, one can describe the necessary and sufficient condition under which a smooth measure is invariant under a flow. The Hamiltonian case then becomes a corollary.
Symplectic geometry
We can also formulate Liouville's Theorem in terms of symplectic geometry. For a given system, we can consider the phase space of a particular Hamiltonian $H$ as a manifold $(M, \omega)$ endowed with a symplectic 2-form
$$\omega = \sum_i dp_i \wedge dq^i.$$
The volume form of our manifold is the top exterior power of the symplectic 2-form, $\omega^n$, and is just another representation of the measure on the phase space described above.
On our phase space symplectic manifold we can define a Hamiltonian vector field $X_f$ generated by a function $f(q, p)$ as
$$X_f = \frac{\partial f}{\partial p_i}\frac{\partial}{\partial q^i} - \frac{\partial f}{\partial q^i}\frac{\partial}{\partial p_i}.$$
Specifically, when the generating function is the Hamiltonian itself, $f = H$, we get
$$X_H = \frac{\partial H}{\partial p_i}\frac{\partial}{\partial q^i} - \frac{\partial H}{\partial q^i}\frac{\partial}{\partial p_i} = \dot{q}^i \frac{\partial}{\partial q^i} + \dot{p}_i \frac{\partial}{\partial p_i} = \frac{d}{dt},$$
where we utilized Hamilton's equations of motion and the definition of the chain rule.
In this formalism, Liouville's Theorem states that the Lie derivative of the volume form is zero along the flow generated by $X_H$. That is, for a $2n$-dimensional symplectic manifold,
$$\mathcal{L}_{X_H}(\omega^n) = 0.$$
In fact, the symplectic structure $\omega$ itself is preserved, not only its top exterior power. That is, Liouville's Theorem also gives
$$\mathcal{L}_{X_H}(\omega) = 0.$$
Quantum Liouville equation
The analog of the Liouville equation in quantum mechanics describes the time evolution of a mixed state. Canonical quantization yields a quantum-mechanical version of this theorem, the von Neumann equation. This procedure, often used to devise quantum analogues of classical systems, involves describing a classical system using Hamiltonian mechanics. Classical variables are then re-interpreted as quantum operators, while Poisson brackets are replaced by commutators. In this case, the resulting equation is
$$\mathrm{i}\hbar \frac{\partial \rho}{\partial t} = [H, \rho],$$
where ρ is the density matrix.
When applied to the expectation value of an observable, the corresponding equation is given by Ehrenfest's theorem, and takes the form
$$\frac{d}{dt}\langle A \rangle = -\frac{1}{\mathrm{i}\hbar}\langle [H, A] \rangle,$$
where $A$ is an observable. Note the sign difference, which follows from the assumption that the operator is stationary and the state is time-dependent.
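A minimal numerical sketch of the unitary evolution implied by the von Neumann equation, propagating a density matrix for a two-level system via $\rho(t) = U \rho(0) U^{\dagger}$ with $U = e^{-\mathrm{i}Ht/\hbar}$; the Hamiltonian and initial state below are arbitrary illustrative choices, with $\hbar$ set to 1.

# Propagate rho(t) = U rho(0) U^dagger with U = exp(-i H t), in units where hbar = 1.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)     # illustrative two-level Hamiltonian
rho0 = np.array([[1.0, 0.0],
                 [0.0, 0.0]], dtype=complex)   # start in the upper state

def evolve(rho: np.ndarray, t: float) -> np.ndarray:
    """Unitary von Neumann evolution of the density matrix over time t."""
    U = expm(-1j * H * t)
    return U @ rho @ U.conj().T

rho_t = evolve(rho0, 2.0)
print("trace preserved:", np.isclose(np.trace(rho_t).real, 1.0))
print("upper-state population at t = 2:", rho_t[0, 0].real)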
In the phase-space formulation of quantum mechanics, substituting the Moyal brackets for Poisson brackets in the phase-space analog of the von Neumann equation results in compressibility of the probability fluid, and thus violations of the incompressibility asserted by Liouville's theorem. This, then, leads to concomitant difficulties in defining meaningful quantum trajectories.
Examples
SHO phase-space volume
Consider an -particle system in three dimensions, and focus on only the evolution of particles. Within phase space, these particles occupy an infinitesimal volume given by
We want to remain the same throughout time, so that is constant along the trajectories of the system. If we allow our particles to evolve by an infinitesimal time step , we see that each particle phase space location changes as
where and denote and respectively, and we have only kept terms linear in . Extending this to our infinitesimal hypercube , the side lengths change as
To find the new infinitesimal phase-space volume , we need the product of the above quantities. To first order in , we get the following:
So far, we have yet to make any specifications about our system. Let us now specialize to the case of $N$ particles in $d$-dimensional isotropic harmonic potentials. That is, each particle in our ensemble can be treated as a simple harmonic oscillator. The Hamiltonian for this system is given by
$$H = \frac{1}{2m}\sum_{i=1}^{Nd}\left(p_i^2 + m^2\omega^2 q_i^2\right).$$
By using Hamilton's equations with the above Hamiltonian we find that the term in parentheses above is identically zero, thus yielding
From this we can find the infinitesimal volume of phase space:
Thus we have ultimately found that the infinitesimal phase-space volume is unchanged, yielding
demonstrating that Liouville's theorem holds for this system.
The question remains of how the phase-space volume actually evolves in time. Above we have shown that the total volume is conserved, but said nothing about what it looks like. For a single particle we can see that its trajectory in phase space is given by the ellipse of constant $H$. Explicitly, one can solve Hamilton's equations for the system and find
$$q_i(t) = Q_i \cos(\omega t) + \frac{P_i}{m\omega}\sin(\omega t), \qquad p_i(t) = P_i \cos(\omega t) - m\omega\, Q_i \sin(\omega t),$$
where $Q_i$ and $P_i$ denote the initial position and momentum of the $i$-th particle.
For a system of multiple particles, each one will have a phase-space trajectory that traces out an ellipse corresponding to the particle's energy. The frequency at which the ellipse is traced is given by the $\omega$ in the Hamiltonian, independent of any differences in energy. As a result, a region of phase space will simply rotate about the point $(q, p) = (0, 0)$ with frequency dependent on $\omega$. This can be seen in the animation above.
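As a hedged numerical illustration of this behaviour (not part of the original argument), the following Python sketch evolves the corners of a small phase-space rectangle for a single one-dimensional oscillator with the exact solution given above and checks that the enclosed area never changes; the mass, frequency, and initial patch are arbitrary assumed values.

# Numerical check that a phase-space patch keeps its area under SHO evolution.
import numpy as np

m, omega = 1.0, 2.0                      # assumed mass and angular frequency

def evolve(q0, p0, t):
    """Exact SHO solution; a linear (area-preserving) map of phase space."""
    q = q0 * np.cos(omega * t) + (p0 / (m * omega)) * np.sin(omega * t)
    p = p0 * np.cos(omega * t) - m * omega * q0 * np.sin(omega * t)
    return q, p

def polygon_area(qs, ps):
    """Shoelace formula for the area of a polygon with vertices (qs, ps)."""
    return 0.5 * abs(np.dot(qs, np.roll(ps, -1)) - np.dot(ps, np.roll(qs, -1)))

# Corners of an initial rectangle of area 0.1 * 0.2 = 0.02 in the (q, p) plane.
q_corners = np.array([1.0, 1.1, 1.1, 1.0])
p_corners = np.array([0.5, 0.5, 0.7, 0.7])

for t in (0.0, 0.3, 1.0, 5.0):
    q_t, p_t = evolve(q_corners, p_corners, t)
    print(f"t = {t:4.1f}  area = {polygon_area(q_t, p_t):.6f}")   # stays at 0.020000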
Damped harmonic oscillator
To see an example where Liouville's theorem does not apply, we can modify the equations of motion for the simple harmonic oscillator to account for the effects of friction or damping. Consider again the system of particles each in a -dimensional isotropic harmonic potential, the Hamiltonian for which is given in the previous example. This time, we add the condition that each particle experiences a frictional force , where is a positive constant dictating the amount of friction. As this is a non-conservative force, we need to extend Hamilton's equations as
Unlike the equations of motion for the simple harmonic oscillator, these modified equations do not take the form of Hamilton's equations, and therefore we do not expect Liouville's theorem to hold. Instead, as depicted in the animation in this section, a generic phase space volume will shrink as it evolves under these equations of motion.
To see this violation of Liouville's theorem explicitly, we can follow a very similar procedure to the undamped harmonic oscillator case, and we arrive again at
Plugging in our modified Hamilton's equations, we find
Calculating our new infinitesimal phase space volume, and keeping only first order in we find the following result:
We have found that the infinitesimal phase-space volume is no longer constant, and thus the phase-space density is not conserved. As can be seen from the equation, as time increases we expect the phase-space volume to decrease to zero as friction damps the system.
As for how the phase-space volume evolves in time, we will still have the constant rotation as in the undamped case. However, the damping will introduce a steady decrease in the radii of each ellipse. Again we can solve for the trajectories explicitly using Hamilton's equations, taking care to use the modified ones above. Letting for convenience, we find
where the values and denote the initial position and momentum of the -th particle.
As the system evolves the total phase-space volume will spiral in to the origin. This can be seen in the figure above.
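A companion sketch for the damped case (again with assumed parameters, and with the friction force taken as −γp, one common choice consistent with the description above) integrates the same kind of phase-space patch numerically; for these linear equations the area contracts as e^{−γt}, and the numerical result agrees.

import numpy as np

m, omega, gamma = 1.0, 2.0, 0.5              # assumed mass, frequency, damping

def rhs(state):
    q, p = state
    return np.array([p / m, -m * omega**2 * q - gamma * p])   # damped equations

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def polygon_area(pts):
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

corners = np.array([[1.0, 0.0], [1.1, 0.0], [1.1, 0.2], [1.0, 0.2]])  # assumed patch
dt, steps = 0.001, 2000                       # evolve to t = 2
evolved = corners.copy()
for _ in range(steps):
    evolved = np.array([rk4_step(c, dt) for c in evolved])

t = dt * steps
print(polygon_area(corners))                          # 0.02
print(polygon_area(evolved))                          # ~ 0.00736
print(polygon_area(corners) * np.exp(-gamma * t))     # 0.02 * exp(-1) ~ 0.00736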
Remarks
The Liouville equation is valid for both equilibrium and nonequilibrium systems. It is a fundamental equation of non-equilibrium statistical mechanics.
The Liouville equation is integral to the proof of the fluctuation theorem from which the second law of thermodynamics can be derived. It is also the key component of the derivation of Green–Kubo relations for linear transport coefficients such as shear viscosity, thermal conductivity or electrical conductivity.
Virtually any textbook on Hamiltonian mechanics, advanced statistical mechanics, or symplectic geometry will derive the Liouville theorem.
In plasma physics, the Vlasov equation can be interpreted as Liouville's theorem, which reduces the task of solving the Vlasov equation to that of single particle motion. By using Liouville's theorem in this way with energy or magnetic moment conservation, for example, one can determine unknown fields using known particle distribution functions, or vice versa. This method is known as Liouville mapping.
See also
Boltzmann transport equation
Reversible reference system propagation algorithm (r-RESPA)
References
Further reading
External links
Eponymous theorems of physics
Hamiltonian mechanics
Theorems in dynamical systems
Statistical mechanics theorems | Liouville's theorem (Hamiltonian) | [
"Physics",
"Mathematics"
] | 2,390 | [
"Theorems in dynamical systems",
"Mathematical theorems",
"Equations of physics",
"Theoretical physics",
"Classical mechanics",
"Statistical mechanics theorems",
"Eponymous theorems of physics",
"Hamiltonian mechanics",
"Theorems in mathematical physics",
"Dynamical systems",
"Statistical mechan... |
312,304 | https://en.wikipedia.org/wiki/Symplectomorphism | In mathematics, a symplectomorphism or symplectic map is an isomorphism in the category of symplectic manifolds. In classical mechanics, a symplectomorphism represents a transformation of phase space that is volume-preserving and preserves the symplectic structure of phase space, and is called a canonical transformation.
Formal definition
A diffeomorphism between two symplectic manifolds is called a symplectomorphism if
where is the pullback of . The symplectic diffeomorphisms from to are a (pseudo-)group, called the symplectomorphism group (see below).
The infinitesimal version of symplectomorphisms gives the symplectic vector fields. A vector field is called symplectic if
Also, is symplectic if the flow of is a symplectomorphism for every .
These vector fields build a Lie subalgebra of .
Here, is the set of smooth vector fields on , and is the Lie derivative along the vector field
Examples of symplectomorphisms include the canonical transformations of classical mechanics and theoretical physics, the flow associated to any Hamiltonian function, the map on cotangent bundles induced by any diffeomorphism of manifolds, and the coadjoint action of an element of a Lie group on a coadjoint orbit.
Flows
Any smooth function on a symplectic manifold gives rise, by definition, to a Hamiltonian vector field and the set of all such vector fields form a subalgebra of the Lie algebra of symplectic vector fields. The integration of the flow of a symplectic vector field is a symplectomorphism. Since symplectomorphisms preserve the symplectic 2-form and hence the symplectic volume form, Liouville's theorem in Hamiltonian mechanics follows. Symplectomorphisms that arise from Hamiltonian vector fields are known as Hamiltonian symplectomorphisms.
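To make this concrete, the short check below (an added illustration with an assumed Hamiltonian H = p²/2m + mω²q²/2) verifies that the time-t flow map M of a one-dimensional harmonic oscillator preserves the standard symplectic form on the plane, i.e. MᵀJM = J.

import numpy as np

m, w, t = 1.3, 0.8, 2.1                      # assumed mass, frequency, time
c, s = np.cos(w * t), np.sin(w * t)
M = np.array([[c, s / (m * w)],
              [-m * w * s, c]])              # exact (linear) flow map on (q, p)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                  # matrix of the standard symplectic form

print(np.allclose(M.T @ J @ M, J))           # True: the flow is a symplectomorphism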
The flow of a Hamiltonian vector field also preserves the Hamiltonian function itself. In physics this is interpreted as the law of conservation of energy.
If the first Betti number of a connected symplectic manifold is zero, symplectic and Hamiltonian vector fields coincide, so the notions of Hamiltonian isotopy and symplectic isotopy of symplectomorphisms coincide.
It can be shown that the equations for a geodesic may be formulated as a Hamiltonian flow, see Geodesics as Hamiltonian flows.
The group of (Hamiltonian) symplectomorphisms
The symplectomorphisms from a manifold back onto itself form an infinite-dimensional pseudogroup. The corresponding Lie algebra consists of symplectic vector fields.
The Hamiltonian symplectomorphisms form a subgroup, whose Lie algebra is given by the Hamiltonian vector fields.
The latter is isomorphic to the Lie algebra of smooth functions on the manifold with respect to the Poisson bracket, modulo the constants.
The group of Hamiltonian symplectomorphisms of is usually denoted as .
Groups of Hamiltonian diffeomorphisms are simple, by a theorem of Banyaga. They have natural geometry given by the Hofer norm. The homotopy type of the symplectomorphism group for certain simple symplectic four-manifolds, such as the product of spheres, can be computed using Gromov's theory of pseudoholomorphic curves.
Comparison with Riemannian geometry
Unlike Riemannian manifolds, symplectic manifolds are not very rigid: Darboux's theorem shows that all symplectic manifolds of the same dimension are locally isomorphic. In contrast, isometries in Riemannian geometry must preserve the Riemann curvature tensor, which is thus a local invariant of the Riemannian manifold.
Moreover, every function H on a symplectic manifold defines a Hamiltonian vector field XH, which exponentiates to a one-parameter group of Hamiltonian diffeomorphisms. It follows that the group of symplectomorphisms is always very large, and in particular, infinite-dimensional. On the other hand, the group of isometries of a Riemannian manifold is always a (finite-dimensional) Lie group. Moreover, Riemannian manifolds with large symmetry groups are very special, and a generic Riemannian manifold has no nontrivial symmetries.
Quantizations
Representations of finite-dimensional subgroups of the group of symplectomorphisms (after ħ-deformations, in general) on Hilbert spaces are called quantizations.
When the Lie group is the one defined by a Hamiltonian, it is called a "quantization by energy".
The corresponding operator from the Lie algebra to the Lie algebra of continuous linear operators is also sometimes called the quantization; this is a more common way of looking at it in physics.
Arnold conjecture
A celebrated conjecture of Vladimir Arnold relates the minimum number of fixed points for a Hamiltonian symplectomorphism , in case is a compact symplectic manifold, to Morse theory (see ). More precisely, the conjecture states that has at least as many fixed points as the number of critical points that a smooth function on must have. A weaker version of this conjecture has been proved: when is "nondegenerate", the number of fixed points is bounded from below by the sum of the Betti numbers of (see ). The most important development in symplectic geometry triggered by this famous conjecture is the birth of Floer homology (see ), named after Andreas Floer.
In popular culture
"Symplectomorphism" is a word in a crossword puzzle in episode 1 of the anime Spy × Family.
See also
References
General
.
. See section 3.2.
Symplectomorphism groups
.
.
Symplectic topology
Hamiltonian mechanics | Symplectomorphism | [
"Physics",
"Mathematics"
] | 1,240 | [
"Hamiltonian mechanics",
"Theoretical physics",
"Classical mechanics",
"Dynamical systems"
] |
312,308 | https://en.wikipedia.org/wiki/Dirac%20sea | The Dirac sea is a theoretical model of the electron vacuum as an infinite sea of electrons with negative energy, now called positrons. It was first postulated by the British physicist Paul Dirac in 1930 to explain the anomalous negative-energy quantum states predicted by the relativistically-correct Dirac equation for electrons. The positron, the antimatter counterpart of the electron, was originally conceived of as a hole in the Dirac sea, before its experimental discovery in 1932.
In hole theory, the solutions with negative time evolution factors are reinterpreted as representing the positron, discovered by Carl Anderson. The interpretation of this result requires a Dirac sea, showing that the Dirac equation is not merely a combination of special relativity and quantum mechanics, but it also implies that the number of particles cannot be conserved.
Dirac sea theory has been displaced by quantum field theory, though they are mathematically compatible.
Origins
Similar ideas on holes in crystals had been developed by Soviet physicist Yakov Frenkel in 1926, but there is no indication the concept was discussed with Dirac when the two met in a Soviet physics congress in the summer of 1928.
The origins of the Dirac sea lie in the energy spectrum of the Dirac equation, an extension of the Schrödinger equation consistent with special relativity, an equation that Dirac had formulated in 1928. Although this equation was extremely successful in describing electron dynamics, it possesses a rather peculiar feature: for each quantum state possessing a positive energy , there is a corresponding state with energy -. This is not a big difficulty when an isolated electron is considered, because its energy is conserved and negative-energy electrons may be left out. However, difficulties arise when effects of the electromagnetic field are considered, because a positive-energy electron would be able to shed energy by continuously emitting photons, a process that could continue without limit as the electron descends into ever lower energy states. However, real electrons clearly do not behave in this way.
Dirac's solution to this was to rely on the Pauli exclusion principle. Electrons are fermions, and obey the exclusion principle, which means that no two electrons can share a single energy state within an atom. Dirac hypothesized that what we think of as the "vacuum" is actually the state in which all the negative-energy states are filled, and none of the positive-energy states. Therefore, if we want to introduce a single electron, we would have to put it in a positive-energy state, as all the negative-energy states are occupied. Furthermore, even if the electron loses energy by emitting photons it would be forbidden from dropping below zero energy.
Dirac further pointed out that a situation might exist in which all the negative-energy states are occupied except one. This "hole" in the sea of negative-energy electrons would respond to electric fields as though it were a positively charged particle. Initially, Dirac identified this hole as a proton. However, Robert Oppenheimer pointed out that an electron and its hole would be able to annihilate each other, releasing energy on the order of the electron's rest energy in the form of energetic photons; if holes were protons, stable atoms would not exist. Hermann Weyl also noted that a hole should act as though it has the same mass as an electron, whereas the proton is about two thousand times heavier. The issue was finally resolved in 1932, when the positron was discovered by Carl Anderson, with all the physical properties predicted for the Dirac hole.
Inelegance of Dirac sea
Despite its success, the idea of the Dirac sea tends not to strike people as very elegant. The existence of the sea implies an infinite negative electric charge filling all of space. In order to make any sense out of this, one must assume that the "bare vacuum" must have an infinite positive charge density which is exactly cancelled by the Dirac sea. Since the absolute energy density is unobservable—the cosmological constant aside—the infinite energy density of the vacuum does not represent a problem. Only changes in the energy density are observable. Geoffrey Landis also notes that Pauli exclusion does not definitively mean that a filled Dirac sea cannot accept more electrons, since, as Hilbert elucidated, a sea of infinite extent can accept new particles even if it is filled. This happens when we have a chiral anomaly and a gauge instanton.
The development of quantum field theory (QFT) in the 1930s made it possible to reformulate the Dirac equation in a way that treats the positron as a "real" particle rather than the absence of a particle, and makes the vacuum the state in which no particles exist instead of an infinite sea of particles. This picture recaptures all the valid predictions of the Dirac sea, such as electron-positron annihilation. On the other hand, the field formulation does not eliminate all the difficulties raised by the Dirac sea; in particular the problem of the vacuum possessing infinite energy.
Mathematical expression
Upon solving the free Dirac equation,
one finds
where
for plane wave solutions with -momentum . This is a direct consequence of the relativistic energy-momentum relation
upon which the Dirac equation is built. The quantity is a constant column vector and is a normalization constant. The quantity is called the time evolution factor, and its interpretation in similar roles in, for example, the plane wave solutions of the Schrödinger equation, is the energy of the wave (particle). This interpretation is not immediately available here since it may acquire negative values. A similar situation prevails for the Klein–Gordon equation. In that case, the absolute value of can be interpreted as the energy of the wave since in the canonical formalism, waves with negative actually have positive energy . But this is not the case with the Dirac equation. The energy in the canonical formalism associated with negative is .
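The two branches of the spectrum can be made explicit numerically. The sketch below evaluates E = ±√((pc)² + (mc²)²), the two energies allowed by the relativistic energy–momentum relation, for an electron; the momentum grid is an arbitrary choice for illustration.

import numpy as np

c = 299792458.0                 # speed of light, m/s
m_e = 9.1093837015e-31          # electron mass, kg
p = np.linspace(0.0, 5e-22, 6)  # sample momenta in kg m/s (assumed values)

E_plus = np.sqrt((p * c)**2 + (m_e * c**2)**2)   # positive-energy branch
E_minus = -E_plus                                # negative-energy branch
for pi, ep, em in zip(p, E_plus, E_minus):
    print(f"p = {pi:.2e}   E+ = {ep:.3e} J   E- = {em:.3e} J")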
Modern interpretation
The Dirac sea interpretation and the modern QFT interpretation are related by what may be thought of as a very simple Bogoliubov transformation, an identification between the creation and annihilation operators of two different free field theories. In the modern interpretation, the field operator for a Dirac spinor is a sum of creation operators and annihilation operators, in a schematic notation:
An operator with negative frequency lowers the energy of any state by an amount proportional to the frequency, while operators with positive frequency raise the energy of any state.
In the modern interpretation, the positive frequency operators add a positive energy particle, adding to the energy, while the negative frequency operators annihilate a positive energy particle, and lower the energy. For a fermionic field, the creation operator gives zero when the state with momentum k is already filled, while the annihilation operator gives zero when the state with momentum k is empty.
But then it is possible to reinterpret the annihilation operator as a creation operator for a negative energy particle. It still lowers the energy of the vacuum, but in this point of view it does so by creating a negative energy object. This reinterpretation only affects the philosophy. To reproduce the rules for when annihilation in the vacuum gives zero, the notion of "empty" and "filled" must be reversed for the negative energy states. Instead of being states with no antiparticle, these are states that are already filled with a negative energy particle.
The price is that there is a nonuniformity in certain expressions, because replacing annihilation with creation adds a constant to the negative energy particle number. The number operator for a Fermi field is:
which means that if one replaces N by 1−N for negative energy states, there is a constant shift in quantities like the energy and the charge density, quantities that count the total number of particles. The infinite constant gives the Dirac sea an infinite energy and charge density. The vacuum charge density should be zero, since the vacuum is Lorentz invariant, but this is artificial to arrange in Dirac's picture. The way it is done is by passing to the modern interpretation.
Dirac's idea is more directly applicable to solid state physics, where the valence band in a solid can be regarded as a "sea" of electrons. Holes in this sea indeed occur, and are extremely important for understanding the effects of semiconductors, though they are never referred to as "positrons". Unlike in particle physics, there is an underlying positive charge—the charge of the ionic lattice—that cancels out the electric charge of the sea.
Revival in the theory of causal fermion systems
Dirac's original concept of a sea of particles was revived in the theory of causal fermion systems, a recent proposal for a unified physical theory. In this approach, the problems of the infinite vacuum energy and infinite charge density of the Dirac sea disappear because these divergences drop out of the physical equations formulated via the causal action principle. These equations do not require a preexisting space-time, making it possible to realize the concept that space-time and all structures therein arise as a result of the collective interaction of the sea states with each other and with the additional particles and "holes" in the sea.
See also
Fermi sea
Positronium
Vacuum polarization
Virtual particle
Remarks
Notes
References
(Chapter 12 is dedicated to hole theory.)
Quantum field theory
Vacuum
Sea | Dirac sea | [
"Physics"
] | 1,936 | [
"Quantum field theory",
"Quantum mechanics",
"Vacuum",
"Matter"
] |
312,335 | https://en.wikipedia.org/wiki/Tarantula%20Nebula | The Tarantula Nebula (also known as 30 Doradus) is a large H II region in the Large Magellanic Cloud (LMC), forming its south-east corner (from Earth's perspective).
Discovery
The Tarantula Nebula was observed by Nicolas-Louis de Lacaille during an expedition to the Cape of Good Hope between 1751 and 1753. He cataloged it as the second of the "Nebulae of the First Class", "Nebulosities not accompanied by any star visible in the telescope of two feet". It was described as a diffuse nebula 20' across.
Johann Bode included the Tarantula in his 1801 Uranographia star atlas and listed it in the accompanying Allgemeine Beschreibung und Nachweisung der Gestirne catalog as number 30 in the constellation "Xiphias or Dorado". Instead of being given a stellar magnitude, it was noted to be nebulous.
The name Tarantula Nebula arose in the mid-20th century from its appearance in deep photographic exposures.
30 Doradus has often been treated as the designation of a star, or of the central star cluster NGC 2070, but is now generally treated as referring to the whole nebula area of the Tarantula Nebula.
Properties
The Tarantula Nebula has an apparent magnitude of 8. Considering its distance of about 49 kpc (160,000 light-years), this is an extremely luminous non-stellar object. Its luminosity is so great that if it were as close to Earth as the Orion Nebula, the Tarantula Nebula would cast visible shadows. In fact, it is the most active starburst region known in the Local Group of galaxies.
It is also one of the largest H II regions in the Local Group, with an estimated diameter of around 200 to 570 pc (650 to 1,860 light-years); because of its very large size, it is sometimes described as the largest. However, other H II regions such as NGC 604, which is in the Triangulum Galaxy, could be larger. The nebula resides on the leading edge of the LMC where ram pressure stripping, and the compression of the interstellar medium likely resulting from this, is at a maximum.
NGC 2070
30 Doradus has at its centre the star cluster NGC 2070 which includes the compact concentration of stars known as R136 that produces most of the energy that makes the nebula visible. The estimated mass of the cluster is 450,000 solar masses, suggesting it will likely become a globular cluster in the future. In addition to NGC 2070, the Tarantula Nebula contains several other star clusters including the much older Hodge 301. The most massive stars of Hodge 301 have already exploded in supernovae.
Supernova 1987A
The closest supernova observed since the invention of the telescope, Supernova 1987A, occurred in the outskirts of the Tarantula Nebula. There is a prominent supernova remnant enclosing the open cluster NGC 2060. Still, the remnants of many other supernovae are difficult to detect in the complex nebulosity.
Black hole VFTS 243
An X-ray-quiet black hole was discovered in the Tarantula Nebula, the first found outside the Milky Way Galaxy that does not radiate strongly in X-rays. The black hole has a mass of at least 9 solar masses and is in a circular orbit with its 25-solar-mass blue giant companion VFTS 243.
See also
List of largest nebulae
NGC 604
References
External links
APOD Images: 2003 August 23 & 2010 May 18
SEDS Data: NGC 2070, The Tarantula Nebula
Hubble Space Telescope Images of: The Tarantula Nebula
European Southern Observatory Image of: The Tarantula Nebula
The Scale of the Universe (Astronomy Picture of the Day 2012 March 12)
H II regions
NGC objects
Large Magellanic Cloud
Dorado
103b
17511205
Articles containing video clips
Star-forming regions | Tarantula Nebula | [
"Astronomy"
] | 809 | [
"Dorado",
"Constellations"
] |
312,355 | https://en.wikipedia.org/wiki/Gametogenesis | Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations.
Gametogenesis is thus the process during which haploid or diploid precursor cells divide and differentiate to create gametes; whether this takes place through mitotic or meiotic division of gametocytes depends on the organism's biological life cycle. For instance, gametophytes in plants undergo mitosis to produce gametes. Males and females produce gametes through different forms of the process.
In animals
Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (testis in males and ovaries in females). In mammalian germ cell development, primordial germ cells differentiate from pluripotent cells during early development and later give rise to sexually dimorphic gametes. Males and females of a species that reproduce sexually have different forms of gametogenesis:
spermatogenesis (male): Immature germ cells are produced in the testes. To mature into sperm, these immature germ cells, or spermatogonia, go through spermatogenesis beginning at adolescence. Spermatogonia are diploid cells that enlarge as they divide through mitosis, becoming primary spermatocytes. These diploid cells undergo the first meiotic division to create secondary spermatocytes, which undergo a second meiotic division to produce immature sperm cells, or spermatids. The spermatids then undergo spermiogenesis in order to develop into sperm. Hormones such as LH, FSH, GnRH, and androgens help to promote spermatogenesis.
oogenesis (female)
Stages
However, up to the point at which the germ cells become gametogonia, the embryonic development of gametes is the same in males and females.
Common path
Gametogonia are usually seen as the initial stage of gametogenesis. However, gametogonia are themselves successors of primordial germ cells (PGCs), which migrate from the dorsal endoderm of the yolk sac along the hindgut to the genital ridge. They multiply by mitosis and, once they have reached the genital ridge in the late embryonic stage, are referred to as gametogonia. Once the germ cells have developed into gametogonia, they are no longer the same between males and females.
Individual path
From gametogonia, male and female gametes develop differently - males by spermatogenesis and females by oogenesis. However, by convention, the following pattern is common for both:
Differences between spermatogenesis and oogenesis
In vitro gametogenesis
In vitro gametogenesis (IVG) is the technique of developing in vitro generated gametes, i.e., "the generation of eggs and sperm from pluripotent stem cells in a culture dish." This technique is currently feasible in mice and will likely have future success in humans and nonhuman primates. It allows scientists to create sperm and egg cells by reprogramming adult cells, so that embryos could be grown in a laboratory. Even though it is a promising technique for fighting disease, it raises several ethical problems.
In gametangia
Fungi, algae, and primitive plants form specialized haploid structures called gametangia, where gametes are produced through mitosis. In some fungi, such as the Zygomycota, the gametangia are single cells, situated on the ends of hyphae, which act as gametes by fusing into a zygote. More typically, gametangia are multicellular structures that differentiate into male and female organs:
antheridium (male)
archegonium (female)
In angiosperms
In angiosperms, the male gametes (always two) are produced inside the pollen tube (in 70% of the species) or inside the pollen grain (in 30% of the species) through the division of a generative cell into two sperm nuclei. Depending on the species, this can occur while the pollen forms in the anther (pollen tricellular) or after pollination and growth of the pollen tube (pollen bicellular in the anther and in the stigma). The female gamete is produced inside the embryo sac of the ovule.
Meiosis
Meiosis is a central feature of gametogenesis, but the adaptive function of meiosis is currently a matter of debate. A key event during meiosis is the pairing of homologous chromosomes and recombination (exchange of genetic information) between homologous chromosomes. This process promotes the production of increased genetic diversity among progeny and the recombinational repair of damage in the DNA to be passed on to progeny. To explain the adaptive function of meiosis (as well as of gametogenesis and the sexual cycle), some authors emphasize diversity, and others emphasize DNA repair.
Although meiosis is a crucial component of gametogenesis, its adaptive function remains a matter of debate. In sexually reproducing organisms, it is the type of cell division that halves the chromosome number, so that gametes carry half as many chromosomes as the cells they derive from.
Homology effects
There are two key differences between mammalian and plant gametogenesis. First, there is no predetermined germline in plants: male or female gametophyte-producing cells diverge from the reproductive meristem, a totipotent clump of developing cells in the adult plant that creates all the flower's features (both sexual and asexual structures). Second, meiosis is followed by mitotic divisions and differentiation to create the gametes. In plants, the female gametes (the egg cell and the central cell) are accompanied by sister, non-gametic cells (the synergids and the antipodal cells). During male gametogenesis, the haploid microspore passes through a mitosis to create a vegetative cell and a generative cell; the generative cell then undergoes a second mitotic division, resulting in the creation of two sperm cells.
If imprints are created during male and female gametogenesis, they could arise from premeiotic, postmeiotic, premitotic, or postmitotic events. However, if only one of the daughter cells receives parental imprints following mitosis, this would result in two functionally different female gametes or two functionally different sperm cells. As discussed in the previous section, demethylation is seen in the pollen grain following the second meiosis and before the generative cell's mitosis. Along with pollen differentiation, various structural and compositional DNA alterations also occur. These modifications are potential steps for the genome-wide erasure and/or reprogramming of imprinting of the kind that happens in animals. During the development of sperm cells, the male DNA is extensively demethylated in plants, whereas the converse is true in animals.
See also
In vitro spermatogenesis
Microgametogenesis
Notes
References
Developmental biology
Reproductive system
Germ cells | Gametogenesis | [
"Biology"
] | 1,676 | [
"Behavior",
"Developmental biology",
"Reproductive system",
"Sex",
"Reproduction",
"Organ systems"
] |
312,383 | https://en.wikipedia.org/wiki/Law%20of%20total%20probability | In probability theory, the law (or formula) of total probability is a fundamental rule relating marginal probabilities to conditional probabilities. It expresses the total probability of an outcome which can be realized via several distinct events, hence the name.
Statement
The law of total probability is a theorem that states, in its discrete case, if is a finite or countably infinite set of mutually exclusive and collectively exhaustive events, then for any event
or, alternatively,
where, for any , if , then these terms are simply omitted from the summation since is finite.
The summation can be interpreted as a weighted average, and consequently the marginal probability, , is sometimes called "average probability"; "overall probability" is sometimes used in less formal writings.
The law of total probability can also be stated for conditional probabilities:
Taking the as above, and assuming is an event independent of any of the :
Continuous case
The law of total probability extends to the case of conditioning on events generated by continuous random variables. Let be a probability space. Suppose is a random variable with distribution function , and an event on . Then the law of total probability states
If admits a density function , then the result is
Moreover, for the specific case where , where is a Borel set, then this yields
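As a concrete illustration of the continuous case, the following Monte Carlo sketch uses an assumed model (X uniform on (0, 1) and P(A | X = x) = x) for which the law gives P(A) = ∫₀¹ x dx = 1/2; the sample size and random seed are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
# Conditioning variable X ~ Uniform(0, 1); given X = x, the event A occurs
# with probability x (an assumed model for illustration).
x = rng.uniform(0.0, 1.0, n)
a = rng.uniform(0.0, 1.0, n) < x     # simulate A given X = x
# Law of total probability: P(A) is the integral of P(A | X = x) f_X(x) dx,
# which here equals 1/2.
print(a.mean())                      # ~ 0.5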
Example
Suppose that two factories supply light bulbs to the market. Factory X's bulbs work for over 5000 hours in 99% of cases, whereas factory Y's bulbs work for over 5000 hours in 95% of cases. It is known that factory X supplies 60% of the total bulbs available and Y supplies 40% of the total bulbs available. What is the chance that a purchased bulb will work for longer than 5000 hours?
Applying the law of total probability, we have:
where
is the probability that the purchased bulb was manufactured by factory X;
is the probability that the purchased bulb was manufactured by factory Y;
is the probability that a bulb manufactured by X will work for over 5000 hours;
is the probability that a bulb manufactured by Y will work for over 5000 hours.
Thus each purchased light bulb has a 97.4% chance to work for more than 5000 hours.
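The calculation can be reproduced directly from the figures given above; the short sketch below also adds a Monte Carlo cross-check (the sample size and random seed are arbitrary choices).

import numpy as np

p_x, p_y = 0.60, 0.40                 # market shares of factories X and Y
p_work_x, p_work_y = 0.99, 0.95       # P(works > 5000 h | factory)

# Law of total probability.
print(p_x * p_work_x + p_y * p_work_y)     # 0.974

# Monte Carlo cross-check.
rng = np.random.default_rng(1)
n = 1_000_000
from_x = rng.uniform(size=n) < p_x
works = np.where(from_x,
                 rng.uniform(size=n) < p_work_x,
                 rng.uniform(size=n) < p_work_y)
print(works.mean())                        # ~ 0.974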
Other names
The term law of total probability is sometimes taken to mean the law of alternatives, which is a special case of the law of total probability applying to discrete random variables. One author uses the terminology of the "Rule of Average Conditional Probabilities", while another refers to it as the "continuous law of alternatives" in the continuous case. This result is given by Grimmett and Welsh as the partition theorem, a name that they also give to the related law of total expectation.
See also
Law of large numbers
Law of total expectation
Law of total variance
Law of total covariance
Law of total cumulance
Marginal distribution
Notes
References
Introduction to Probability and Statistics by Robert J. Beaver, Barbara M. Beaver, Thomson Brooks/Cole, 2005, page 159.
Theory of Statistics, by Mark J. Schervish, Springer, 1995.
Schaum's Outline of Probability, Second Edition, by John J. Schiller, Seymour Lipschutz, McGraw–Hill Professional, 2010, page 89.
A First Course in Stochastic Models, by H. C. Tijms, John Wiley and Sons, 2003, pages 431–432.
An Intermediate Course in Probability, by Alan Gut, Springer, 1995, pages 5–6.
Probability theorems
Statistical laws | Law of total probability | [
"Mathematics"
] | 696 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
312,392 | https://en.wikipedia.org/wiki/International%20Birdman | The International Birdman was a series of English competitions held in the West Sussex towns of Bognor Regis, Selsey and Worthing. The competition involved human 'birdmen' attempting to fly off the end of a pier into the sea for prize money. The event began in 1971 and was held on piers in West Sussex, on the south coast of England. First held in Selsey, the event moved to Bognor Regis in 1978. In 2008 and 2009 the competition relocated to Worthing Pier due to renovations of Bognor Regis Pier. From 2010 Bognor Regis and Worthing have both held Birdman competition, forming the International Birdman Series, which ended in 2016. It was the longest running Birdman Rally in the world.
Format
The competition involves running off an elevated ramp, 20 to 35 feet high, at the end of a pier and attempting to 'fly' the furthest distance. There was an initial prize of £1,000 for anyone who could travel beyond . Since starting, the prize money and qualifying distance have increased, and in 2009 at Worthing the prize stood at £30,000 for reaching . The competition is divided between serious aviators mainly flying hang-gliders (Condor Class), inventors with home-designed and home-built machines (Leonardo da Vinci Class), and people in fancy dress with little or no actual flying ability (Kingfisher Class), raising money for charity.
History
The event started in 1971 as the International Bird-Man Rally in Selsey on the coast of Sussex. The event was initiated by George Abel, as part of a fund-raising activity for the Selsey branch of the Royal Air Forces Association (RAFA) Club. Abel, a former RAF photographer, emigrated to Australia shortly afterwards, where he also helped to organise Birdman events.
In 1978 organisers were informed they could no longer use the pier at Selsey and the event was moved to Bognor Regis. By 1983 the competition had attracted European teams and the attention of the BBC. Because of the demolition of an 18-metre (60 ft) length of the end of Bognor pier, the 2008 Birdman event was not staged in Bognor, owing to safety concerns over the water depth at high tide at the new end of the pier. The 2008 and 2009 competitions were held in Worthing as a result of the safety concerns. After safety checks in 2009, the water depth was cleared by the Health and Safety Executive as safe for competition. Events have subsequently been staged in both Worthing and Bognor Regis, creating the International Birdman series. Bognor's 2014 event was cancelled, but the event returned there in 2015.
Birdman competitions have also been held in Eastbourne, East Sussex.
Recent events
In February 2016 the organisers of the Worthing event announced its cancellation for the foreseeable future. Bognor Birdman took place on 3 and 4 September 2016, although the second day's flying was curtailed because of safety concerns over high winds and choppy sea. International Bognor Birdman 2021 was cancelled due to the COVID-19 pandemic, but was expected to return in 2022.
Winners
In 1984 Harold Zimmer from West Germany flew 57.8 metres to claim the top prize, which then stood at £10,000. By 1990 the record was 71 metres, the prize distance had been increased to 100 metres and the prize money was £25,000. The prize money later stood at £30,000 for successfully reaching 100m, and for three consecutive years – 2013, 2014 and 2015 – that record was broken at Worthing.
Reviews
From 1994 until 2001, in Brighton, Eastbourne and Bognor Regis, Dod Miller immortalised these human birds with his Rolleiflex camera. Supermen, witches, dinosaurs, butterflies, ostriches, penguins and winged species of all kinds posed for Miller ready for their take off, armed with goggles and swimming flippers. Julie Bonzon in Dod Miller's Birdmen.
Photographer Dodik "Dod" Miller (born 1960) has said that a photograph of a birdman dressed as a knight in armour jumping off a pier was his best shot. He commented:
At Bognor and Worthing ... people would launch themselves off piers strapped into homemade contraptions – often in fancy dress – and try to fly. ... I don't know who the guy in this image is. There was metalwork involved in the horse, or dragon, or whatever it was, and it was on wheels. He started out riding on top of it like a knight in shining armour ... What I like about these pictures is that these are English eccentrics hoping to fly – with all the connotations of Icarus – and the joke is that they only go down, not out ... There was an array of bird costumes, of course. You'd see hats with propellers attached, while one guy tried to get a lift by holding massive bunches of helium balloons. It wasn't all men: there were some very brave women too. Many contenders are raising money for charity – and it's quite a height they jump from. I've seen people seriously winded. There's a boat to collect them from the water. Dod Miller in The Guardian, 6 July 2022
2009 Worthing distance controversy
In 2009, Steve Elkins flew the 100m course entering the water at the finishing markers. A £30,000 prize was offered to any competitor completing the distance. Organisers said that he had fallen at 99.8m, 20 cm short of the 100m marker. However, Elkins claimed that video footage showed he had exceeded the distance. Elkins took the event organisers to court, but in February 2014 a judge ruled against him, saying that he was ‘not satisfied’ that the competitor had crossed the mark.
See also
Red Bull Flugtag
Birdman Rally
References
External links
Worthing Birdman official site
Infographic displaying Birdman statistics and history
Video of Worthing Birdman 2009
Video of Worthing Birdman 2009
Unofficial Bognor Birdman site
Bognor Birdman photos
Bognor Regis, where Birdman is currently based
Destination Selsey: Selsey Birdman competition 1975 (video)
Recurring events established in 1971
Worthing
Mechanical engineering competitions
Bognor Regis | International Birdman | [
"Engineering"
] | 1,282 | [
"Mechanical engineering competitions",
"Mechanical engineering"
] |
312,398 | https://en.wikipedia.org/wiki/Autonomous%20system%20%28mathematics%29 | In mathematics, an autonomous system or autonomous differential equation is a system of ordinary differential equations which does not explicitly depend on the independent variable. When the variable is time, they are also called time-invariant systems.
Many laws in physics, where the independent variable is usually assumed to be time, are expressed as autonomous systems because it is assumed the laws of nature which hold now are identical to those for any point in the past or future.
Definition
An autonomous system is a system of ordinary differential equations of the form
where takes values in -dimensional Euclidean space; is often interpreted as time.
It is distinguished from systems of differential equations of the form
in which the law governing the evolution of the system does not depend solely on the system's current state but also the parameter , again often interpreted as time; such systems are by definition not autonomous.
Properties
Solutions are invariant under horizontal translations:
Let be a unique solution of the initial value problem for an autonomous system
Then solves
Denoting gets and , thus
For the initial condition, the verification is trivial,
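This translation property is easy to verify numerically. The sketch below (an added illustration) integrates the autonomous equation y' = (2 − y)y from the example that follows, once starting at t = 0 and once at t = 2 with the same initial value, and confirms that the second solution is a time-shifted copy of the first.

import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, y: (2 - y) * y          # autonomous: no explicit t on the right side
t_shift = 2.0
sol_a = solve_ivp(f, (0.0, 6.0), [1.0], dense_output=True, rtol=1e-9, atol=1e-12)
sol_b = solve_ivp(f, (t_shift, 6.0 + t_shift), [1.0], dense_output=True,
                  rtol=1e-9, atol=1e-12)

ts = np.linspace(0.0, 6.0, 7)
# The solution started at t_shift equals the original one shifted by t_shift.
print(np.allclose(sol_a.sol(ts)[0], sol_b.sol(ts + t_shift)[0], atol=1e-6))  # True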
Example
The equation is autonomous, since the independent variable () does not explicitly appear in the equation.
To plot the slope field and isocline for this equation, one can use the following code in GNU Octave/MATLAB
Ffun = @(X, Y)(2 - Y) .* Y; % function f(x,y)=(2-y)y
[X, Y] = meshgrid(0:.2:6, -1:.2:3); % choose the plot sizes
DY = Ffun(X, Y); DX = ones(size(DY)); % generate the plot values
quiver(X, Y, DX, DY, 'k'); % plot the direction field in black
hold on;
contour(X, Y, DY, [0 1 2], 'g'); % add the isoclines(0 1 2) in green
title('Slope field and isoclines for f(x,y)=(2-y)y')
One can observe from the plot that the function is -invariant, and so is the shape of the solution, i.e. for any shift .
Solving the equation symbolically in MATLAB, by running
syms y(x);
equation = (diff(y) == (2 - y) * y);
% solve the equation for a general solution symbolically
y_general = dsolve(equation);
obtains two equilibrium solutions, and , and a third solution involving an unknown constant ,
-2 / (exp(C3 - 2 * x) - 1).
Picking up some specific values for the initial condition, one can add the plot of several solutions
% solve the initial value problem symbolically
% for different initial conditions
y1 = dsolve(equation, y(1) == 1); y2 = dsolve(equation, y(2) == 1);
y3 = dsolve(equation, y(3) == 1); y4 = dsolve(equation, y(1) == 3);
y5 = dsolve(equation, y(2) == 3); y6 = dsolve(equation, y(3) == 3);
% plot the solutions
ezplot(y1, [0 6]); ezplot(y2, [0 6]); ezplot(y3, [0 6]);
ezplot(y4, [0 6]); ezplot(y5, [0 6]); ezplot(y6, [0 6]);
title('Slope field, isoclines and solutions for f(x,y)=(2-y)y')
legend('Slope field', 'Isoclines', 'Solutions y_{1..6}');
text([1 2 3], [1 1 1], strcat('\leftarrow', {'y_1', 'y_2', 'y_3'}));
text([1 2 3], [3 3 3], strcat('\leftarrow', {'y_4', 'y_5', 'y_6'}));
grid on;
Qualitative analysis
Autonomous systems can be analyzed qualitatively using the phase space; in the one-variable case, this is the phase line.
Solution techniques
The following techniques apply to one-dimensional autonomous differential equations. Any one-dimensional equation of order is equivalent to an -dimensional first-order system (as described in reduction to a first-order system), but not necessarily vice versa.
First order
The first-order autonomous equation
is separable, so it can be solved by rearranging it into the integral form
Second order
The second-order autonomous equation
is more difficult, but it can be solved by introducing the new variable
and expressing the second derivative of via the chain rule as
so that the original equation becomes
which is a first order equation containing no reference to the independent variable . Solving provides as a function of . Then, recalling the definition of :
which is an implicit solution.
Special case:
The special case where is independent of
benefits from separate treatment. These types of equations are very common in classical mechanics because they are always Hamiltonian systems.
The idea is to make use of the identity
which follows from the chain rule, barring any issues due to division by zero.
By inverting both sides of a first order autonomous system, one can immediately integrate with respect to :
which is another way to view the separation of variables technique. The second derivative must be expressed as a derivative with respect to instead of :
To reemphasize: what's been accomplished is that the second derivative with respect to has been expressed as a derivative of . The original second order equation can now be integrated:
This is an implicit solution. The greatest potential problem is inability to simplify the integrals, which implies difficulty or impossibility in evaluating the integration constants.
Special case:
Using the above approach, the technique can extend to the more general equation
where is some parameter not equal to two. This will work since the second derivative can be written in a form involving a power of . Rewriting the second derivative, rearranging, and expressing the left side as a derivative:
The right will carry +/− if is even. The treatment must be different if :
Higher orders
There is no analogous method for solving third- or higher-order autonomous equations. Such equations can only be solved exactly if they happen to have some other simplifying property, for instance linearity or dependence of the right side of the equation on the dependent variable only (i.e., not its derivatives). This should not be surprising, considering that nonlinear autonomous systems in three dimensions can produce truly chaotic behavior such as the Lorenz attractor and the Rössler attractor.
Likewise, general non-autonomous equations of second order are unsolvable explicitly, since these can also be chaotic, as in a periodically forced pendulum.
Multivariate case
In , where is an -dimensional column vector dependent on .
The solution is where is a constant vector.
Finite durations
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that, from its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytical functions on the whole real line, and because they are non-Lipschitz functions at the ending time, they are not covered by the uniqueness guarantees that hold for solutions of Lipschitz differential equations.
As example, the equation:
Admits the finite duration solution:
See also
Non-autonomous system (mathematics)
References
Dynamical systems
Ordinary differential equations | Autonomous system (mathematics) | [
"Physics",
"Mathematics"
] | 1,651 | [
"Mechanics",
"Dynamical systems"
] |
312,399 | https://en.wikipedia.org/wiki/Autonomous%20system%20%28Internet%29 | An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain, that presents a common and clearly defined routing policy to the Internet. Each AS is assigned an autonomous system number (ASN), for use in Border Gateway Protocol (BGP) routing. Autonomous System Numbers are assigned to Local Internet Registries (LIRs) and end-user organizations by their respective Regional Internet Registries (RIRs), which in turn receive blocks of ASNs for reassignment from the Internet Assigned Numbers Authority (IANA). The IANA also maintains a registry of ASNs which are reserved for private use (and should therefore not be announced to the global Internet).
Originally, the definition required control by a single entity, typically an Internet service provider (ISP) or a very large organization with independent connections to multiple networks, that adhered to a single and clearly defined routing policy. In March 1996, the newer definition came into use because multiple organizations can run BGP using private AS numbers to an ISP that connects all those organizations to the Internet. Even though there may be multiple autonomous systems supported by the ISP, the Internet only sees the routing policy of the ISP. That ISP must have an officially registered ASN.
Until 2007, AS numbers were defined as 16-bit integers, which allowed for a maximum of 65,536 assignments. Since then, the IANA has begun to also assign 32-bit AS numbers to regional Internet registries (RIRs). These numbers are preferably written as simple integers, in a notation referred to as "asplain", ranging from 0 to 4,294,967,295 (hexadecimal 0xFFFF FFFF); alternatively, they can be written in the form called "asdot+", which looks like x.y, where x and y are 16-bit numbers. Numbers of the form 0.y are exactly the old 16-bit AS numbers. The special 16-bit ASN 23456 ("AS_TRANS") was assigned by IANA as a placeholder for 32-bit ASN values for the case when 32-bit-ASN capable routers ("new BGP speakers") send BGP messages to routers with older BGP software ("old BGP speakers") which do not understand the new 32-bit ASNs.
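The relationship between the two notations is simple: an "asdot+" value x.y corresponds to the asplain integer x·65536 + y. A small conversion sketch follows (the function names are illustrative, not from any standard library).

def to_asdot_plus(asn: int) -> str:
    """Render an AS number in "asdot+" notation (two 16-bit halves, x.y)."""
    if not 0 <= asn <= 0xFFFFFFFF:
        raise ValueError("ASN must fit in 32 bits")
    return f"{asn >> 16}.{asn & 0xFFFF}"

def to_asplain(asdot: str) -> int:
    """Parse an "asdot+" string back into the plain 32-bit integer form."""
    high, low = (int(part) for part in asdot.split("."))
    return (high << 16) | low

print(to_asdot_plus(23456))         # "0.23456"  (AS_TRANS, an old 16-bit ASN)
print(to_asdot_plus(65536))         # "1.0"      (the first ASN beyond the 16-bit range)
print(to_asdot_plus(4294967295))    # "65535.65535"
print(to_asplain("1.0"))            # 65536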
The first and last ASNs of the original 16-bit integers (0 and 65,535) and the last ASN of the 32-bit numbers (4,294,967,295) are reserved and should not be used by operators; AS0 is used by all five RIRs to invalidate unallocated space. ASNs 64,496 to 64,511 of the original 16-bit range and 65,536 to 65,551 of the 32-bit range are reserved for use in documentation. ASNs 64,512 to 65,534 of the original 16-bit AS range, and 4,200,000,000 to 4,294,967,294 of the 32-bit range are reserved for Private Use.
The number of unique autonomous networks in the routing system of the Internet exceeded 5,000 in 1999, 30,000 in late 2008, 35,000 in mid-2010, 42,000 in late 2012, 54,000 in mid-2016 and 60,000 in early 2018.
The number of allocated ASNs exceeded 100,000 as of March 2021.
Assignment
AS numbers are assigned in blocks by Internet Assigned Numbers Authority (IANA) to regional Internet registries (RIRs). The appropriate RIR then assigns ASNs to entities within its designated area from the block assigned by IANA. Entities wishing to receive an ASN must complete the application process of their RIR, LIR or upstream service provider and be approved before being assigned an ASN. Current IANA ASN assignments to RIRs can be found on the IANA website. RIRs, as part of NRO, can revoke AS numbers as part of their Internet governance abilities.
There are other sources for more specific data:
APNIC: https://ftp.apnic.net/stats/apnic/
RIPE NCC: https://ftp.ripe.net/ripe/stats/
AFRINIC: https://ftp.afrinic.net/pub/stats/afrinic/
ARIN: https://ftp.arin.net/pub/stats/arin/
LACNIC: https://ftp.lacnic.net/pub/stats/lacnic/
ASN table
A complete table of available 16-bit and 32-bit ASN:
Types
Autonomous systems (AS) can be grouped into four categories, depending on their connectivity and operating policy.
multihomed: An AS that maintains connections to more than one other AS. This allows the AS to remain connected to the Internet in the event of a complete failure of one of their connections. However, unlike a transit AS, this type of AS would not allow traffic from one AS to pass through on its way to another AS.
stub: An AS that is connected to only one other AS. This may be an apparent waste of an AS number if the network's routing policy is the same as its upstream AS's. However, the stub AS may have peering with other autonomous systems that is not reflected in public route-view servers. Specific examples include private interconnections in the financial and transportation sectors.
transit: An AS that acts as a router between two ASes is called a transit. Since not all ASes are directly connected with every other AS, a transit AS carries data traffic between one AS to another AS to which it has links.
Internet Exchange Point (IX or IXP): A physical infrastructure through which ISPs or content delivery networks (CDNs) exchange Internet traffic between their networks (autonomous systems). These are often groups of local ISPs that band together to exchange data by splitting the costs of a local networking hub, avoiding the higher costs (and bandwidth charges) of a Transit AS. IXP ASNs are usually transparent. By having presence in an IXP, ASes shorten the transit path to other participating ASes, thereby reducing network latency and improving round-trip delay.
AS-SET
Autonomous systems can be included in one or more AS-SETs, for example AS-SET of RIPE NCC "AS-12655" has AS1, AS2 and AS3 as its members, but AS1 is also included in other sets in ARIN (AS-INCAPSULA) and APNIC (AS-IMCL). Another AS-SET sources can be RADB, LEVEL3 (tier 1 network now called Lumen Technologies) and also ARIN has ARIN-NONAUTH source of AS-SETs. AS-SETs are created by network operators in an Internet Routing Registry (IRR), like other route objects, and can be included in other AS-SETs and even form cycles.
AS-SET names usually start with "AS-", but can also have a hierarchical name. For example, the administrator of AS 64500 may create an AS-SET called "AS64500:AS-UPSTREAMS", to avoid conflict with other similarly named AS-SETs.
AS-SETs are often used to simplify management of published routing policies. A routing policy is published in the IRR using "import" and "export" (or the newer "mp-import" and "mp-export") attributes, which each contain the source or destination AS number and the AS number imported or exported. Instead of single AS numbers, AS-SETs can be referenced in these attributes, which simplifies management of complex routing policies.
See also
Administrative distance
INOC-DBA – a hotline communications system between the network operations centers of major Autonomous Systems
Internet Routing Registry
PeeringDB – a freely available web-based database of networks that are interested in peering
Routing Assets Database (RADB)
References
External links
RIPEstat – Internet Measurements and Analysis
Merit RADb
Hurricane Electric BGP Toolkit
PeeringDB https://www.peeringdb.com/
Robtex: Various kinds of research of IP numbers, Domain names, ASN, etc
astraceroute, an AS traceroute utility (part of netsniff-ng)
ASN FAQ
CIDR and ASN assignment report
Partial List of Autonomous system numbers
Lookin'STAT Graph: number of Autonomous systems online
Internet architecture | Autonomous system (Internet) | [
"Technology"
] | 1,780 | [
"Internet architecture",
"IT infrastructure"
] |
312,408 | https://en.wikipedia.org/wiki/Law%20of%20total%20variance | In probability theory, the law of total variance or variance decomposition formula or conditional variance formulas or law of iterated variances also known as Eve's law, states that if and are random variables on the same probability space, and the variance of is finite, then
In language perhaps better known to statisticians than to probability theorists, the two terms are the "unexplained" and the "explained" components of the variance respectively (cf. fraction of variance unexplained, explained variation). In actuarial science, specifically credibility theory, the first component is called the expected value of the process variance (EVPV) and the second is called the variance of the hypothetical means (VHM). These two components are also the source of the term "Eve's law", from the initials EV VE for "expectation of variance" and "variance of expectation".
Explanation
To understand the formula above, we need to comprehend the random variables E[Y | X] and Var(Y | X). These variables depend on the value of X: for a given value X = x, E[Y | X = x] and Var(Y | X = x) are constant numbers. Essentially, we use the possible values of X to group the outcomes and then compute the expected values and variances for each group.
The "unexplained" component is simply the average of all the variances of Y within each group.
The "explained" component is the variance of the expected values, i.e., it represents the part of the variance that is explained by the variation of the average value of Y for each group.
For an illustration, consider the example of a dog show (a selected excerpt of Analysis_of_variance#Example). Let the random variable Y correspond to the dog weight and X correspond to the breed. In this situation, it is reasonable to expect that the breed explains a major portion of the variance in weight, since there is a big variance in the breeds' average weights. Of course, there is still some variance in weight for each breed, which is taken into account in the "unexplained" term.
Note that the "explained" term actually means "explained by the averages." If the variances for each fixed value of X (e.g., for each breed in the example above) are very distinct, those variances are still combined in the "unexplained" term.
Examples
Example 1
Five graduate students take an exam that is graded from 0 to 100. Let Y denote the student's grade and X indicate whether the student is international or domestic. The data is summarized as follows:
Among international students, the mean is and the variance is .
Among domestic students, the mean is and the variance is .
The part of the variance of Y "unexplained" by X is the mean of the variances for each group. In this case, it is . The part of the variance of Y "explained" by X is the variance of the means of Y inside each group defined by the values of X. In this case, it is zero, since the mean is the same for each group. So the total variation is
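Because the concrete numbers are not reproduced above, the following sketch uses made-up grades (chosen so that both groups share the same mean, matching the statement that the "explained" part is zero) to show how the two components are computed for grouped data.

```python
import numpy as np

# Hypothetical grades, chosen so both groups have the same mean (80):
groups = {
    "international": np.array([70.0, 80.0, 90.0]),
    "domestic":      np.array([75.0, 85.0]),
}

grades  = np.concatenate(list(groups.values()))
weights = np.array([g.size for g in groups.values()]) / grades.size

# "Unexplained": weighted average of the within-group variances, E[Var(Y|X)]
unexplained = sum(w * g.var() for w, g in zip(weights, groups.values()))

# "Explained": variance of the group means, Var(E[Y|X])
means     = np.array([g.mean() for g in groups.values()])
explained = sum(w * (m - grades.mean()) ** 2 for w, m in zip(weights, means))

print(unexplained, explained, grades.var())
assert np.isclose(unexplained + explained, grades.var())  # law of total variance
```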
Example 2
Suppose X is a coin flip with the probability of heads being h. Suppose that when X = heads, Y is drawn from a normal distribution with mean μ1 and standard deviation σ1, and that when X = tails, Y is drawn from a normal distribution with mean μ2 and standard deviation σ2. Then the first, "unexplained" term on the right-hand side of the above formula is the weighted average of the variances, hσ1² + (1 − h)σ2², and the second, "explained" term is the variance of the distribution that gives μ1 with probability h and μ2 with probability 1 − h.
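As a sanity check of the decomposition on such a mixture, the sketch below simulates the two-component normal mixture and compares the empirical variance with the "unexplained plus explained" sum; the particular values chosen for h, the means and the standard deviations are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
h, mu1, sigma1, mu2, sigma2 = 0.3, 10.0, 2.0, 4.0, 1.0   # assumed parameters
n = 1_000_000

x = rng.random(n) < h                                      # coin flip: heads with probability h
y = np.where(x, rng.normal(mu1, sigma1, n), rng.normal(mu2, sigma2, n))

unexplained = h * sigma1**2 + (1 - h) * sigma2**2          # E[Var(Y | X)]
explained   = h * (1 - h) * (mu1 - mu2)**2                 # Var(E[Y | X])

print(y.var(), unexplained + explained)                    # the two numbers nearly agree
```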
Formulation
There is a general variance decomposition formula for n ≥ 2 components (see below). For example, with two conditioning random variables:
Var(Y) = E[Var(Y | X1, X2)] + E[Var(E[Y | X1, X2] | X1)] + Var(E[Y | X1]),
which follows from the law of total conditional variance:
Var(Y | X1) = E[Var(Y | X1, X2) | X1] + Var(E[Y | X1, X2] | X1).
Note that the conditional expected value E(Y | X) is a random variable in its own right, whose value depends on the value of X. Notice that the conditional expected value of Y given the event X = x is a function of x (this is where adherence to the conventional and rigidly case-sensitive notation of probability theory becomes important!). If we write E(Y | X = x) = g(x), then the random variable E(Y | X) is just g(X). Similar comments apply to the conditional variance.
One special case (similar to the law of total expectation) states that if A1, …, An is a partition of the whole outcome space, that is, these events are mutually exclusive and exhaustive, then
Var(Y) = Σi Var(Y | Ai) P(Ai) + Σi E[Y | Ai]² P(Ai) − (Σi E[Y | Ai] P(Ai))².
In this formula, the first component is the expectation of the conditional variance; the other two components are the variance of the conditional expectation.
Proof
Finite Case
Let be observed values of , with repetitions.
Set and, for each possible value of , set .
Note that
Summing these over all the groups, the last term becomes
Hence,
General Case
The law of total variance can be proved using the law of total expectation. First,
from the definition of variance. Again, from the definition of variance, and applying the law of total expectation, we have
Now we rewrite the conditional second moment of in terms of its variance and first moment, and apply the law of total expectation on the right hand side:
Since the expectation of a sum is the sum of expectations, the terms can now be regrouped:
Finally, we recognize the terms in the second set of parentheses as the variance of the conditional expectation :
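Since the displayed equations of this proof are not reproduced above, the chain of equalities the text describes can be written out as follows; this is a reconstruction from the surrounding steps, with E denoting expectation and Var variance.

```latex
\begin{align*}
\operatorname{Var}(Y) &= \operatorname{E}\left[Y^2\right] - \operatorname{E}[Y]^2 \\
 &= \operatorname{E}\!\left[\operatorname{E}\!\left[Y^2 \mid X\right]\right] - \operatorname{E}\!\left[\operatorname{E}[Y \mid X]\right]^2 \\
 &= \operatorname{E}\!\left[\operatorname{Var}(Y \mid X) + \operatorname{E}[Y \mid X]^2\right] - \operatorname{E}\!\left[\operatorname{E}[Y \mid X]\right]^2 \\
 &= \operatorname{E}\!\left[\operatorname{Var}(Y \mid X)\right] + \left(\operatorname{E}\!\left[\operatorname{E}[Y \mid X]^2\right] - \operatorname{E}\!\left[\operatorname{E}[Y \mid X]\right]^2\right) \\
 &= \operatorname{E}\!\left[\operatorname{Var}(Y \mid X)\right] + \operatorname{Var}\!\left(\operatorname{E}[Y \mid X]\right).
\end{align*}
```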
General variance decomposition applicable to dynamic systems
The following formula shows how to apply the general, measure theoretic variance decomposition formula to stochastic dynamic systems. Let be the value of a system variable at time Suppose we have the internal histories (natural filtrations) , each one corresponding to the history (trajectory) of a different collection of system variables. The collections need not be disjoint. The variance of can be decomposed, for all times into components as follows:
The decomposition is not unique. It depends on the order of the conditioning in the sequential decomposition.
The square of the correlation and explained (or informational) variation
In cases where (X, Y) are such that the conditional expected value is linear; that is, in cases where
E(Y | X) = a + bX,
it follows from the bilinearity of covariance that
b = Cov(Y, X) / Var(X) and a = E(Y) − b E(X),
and the explained component of the variance divided by the total variance is just the square of the correlation between X and Y; that is, in such cases,
Var(E(Y | X)) / Var(Y) = Corr(X, Y)².
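A minimal reconstruction of the algebra this passage refers to: with E[Y | X] = a + bX as above, the tower property gives Cov(X, E[Y | X]) = Cov(X, Y), and the explained share of the variance reduces to the squared correlation.

```latex
\operatorname{Cov}\!\left(X, \operatorname{E}[Y \mid X]\right) = b\,\operatorname{Var}(X) = \operatorname{Cov}(X, Y)
\quad\Longrightarrow\quad
\frac{\operatorname{Var}\!\left(\operatorname{E}[Y \mid X]\right)}{\operatorname{Var}(Y)}
= \frac{b^2 \operatorname{Var}(X)}{\operatorname{Var}(Y)}
= \frac{\operatorname{Cov}(X, Y)^2}{\operatorname{Var}(X)\,\operatorname{Var}(Y)}
= \operatorname{Corr}(X, Y)^2 .
```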
One example of this situation is when have a bivariate normal (Gaussian) distribution.
More generally, when the conditional expectation E(Y | X) is a non-linear function of X, the explained fraction of variance Var(E(Y | X)) / Var(Y) is at least as large as Corr(X, Y)²,
which can be estimated as the R-squared from a non-linear regression of Y on X, using data drawn from the joint distribution of (X, Y). When E(Y | X) has a Gaussian distribution (and is an invertible function of X), or Y itself has a (marginal) Gaussian distribution, this explained component of variation sets a lower bound on the mutual information:
Higher moments
A similar law for the third central moment says
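The displayed formula is not reproduced above; the identity usually stated here, with μ3 denoting the third central moment, is the following (a reconstruction, to be checked against a reference):

```latex
\mu_3(Y) = \operatorname{E}\!\left[\mu_3(Y \mid X)\right]
 + \mu_3\!\left(\operatorname{E}[Y \mid X]\right)
 + 3\,\operatorname{Cov}\!\left(\operatorname{E}[Y \mid X],\ \operatorname{Var}(Y \mid X)\right).
```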
For higher cumulants, a generalization exists. See law of total cumulance.
See also
− a generalization
References
(Problem 34.10(b))
Algebra of random variables
Statistical deviation and dispersion
Articles containing proofs
Theory of probability distributions
Theorems in statistics
Statistical laws | Law of total variance | [
"Mathematics"
] | 1,402 | [
"Articles containing proofs",
"Mathematical theorems",
"Mathematical problems",
"Theorems in statistics"
] |
312,430 | https://en.wikipedia.org/wiki/Background%20%28astronomy%29 | In astronomy, background commonly refers to the incoming light from an apparently empty part of the night sky.
Even if no visible astronomical objects are present in a given part of the sky, some low level of luminosity is always present, due mostly to light diffusion in the atmosphere (diffusion both of incoming light from nearby sources and of light from man-made Earth sources such as cities). In the visible band, the luminosity level is around the 22nd magnitude per square arcsecond: a very low level, but well within the limits of the current generation of telescopes. The Hubble Space Telescope does not suffer from this problem.
In infrared astronomy, the problem can be much worse: due to the longer wavelengths involved, the sky and the telescope themselves are sources of light. To work around this problem, infrared telescopes often use a technique called chopping, where a mirror rapidly oscillates between the object of interest and the nearby, empty sky. The two images can then be subtracted, ideally leaving only the incoming light from the source.
There are several sources that contribute to the brightness of the (night) sky. Some of these are instrumental, or due to the presence of the atmosphere (like the airglow) in the case of ground-based instruments. Even if we are able to minimize the effect of instrumental and atmospheric components (e.g. using a spacecraft), there are still several astrophysical components contributing to the sky background: these could be sets of point sources like faint asteroids, Galactic stars and faraway galaxies, as well as diffuse sources like dust in the Solar System, in the Milky Way, and in intergalactic space. The actual importance of a specific component depends mostly on the wavelength of the measurement. The uncertainty (or noise) of the measurements caused by the astrophysical components of the sky background is called confusion noise.
In astronomical CCD technology, background usually refers to the overall optical "noise" of the system, that is, the incoming light on the CCD sensor in the absence of light sources. This background can originate from electronic noise in the CCD, from poorly masked lights near the telescope, and so on. An exposure on an empty patch of the sky is also called a background, and is the sum of the system background level and the sky background.
A background frame is often the first exposure in an astronomical observation with a CCD: the frame will then be subtracted from the actual observation result, leaving in theory only the incoming light from the astronomical object being observed.
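As a toy illustration of that subtraction step, the sketch below works on synthetic arrays; the background level, frame sizes and injected source are placeholders, and real pipelines also handle bias, dark current and flat-fielding, which are ignored here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 64x64 frames: a flat sky/instrument background plus a faint source
background_level = 200.0
background_frame = rng.poisson(background_level, (64, 64)).astype(float)

science_frame = rng.poisson(background_level, (64, 64)).astype(float)
science_frame[30:34, 30:34] += 50.0        # the astronomical source

# Subtract the background exposure from the science exposure
calibrated = science_frame - background_frame

print(calibrated.mean())                   # close to 0 away from the source
print(calibrated[30:34, 30:34].mean())     # close to the injected signal of 50
```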
References
Observational astronomy | Background (astronomy) | [
"Astronomy"
] | 521 | [
"Observational astronomy",
"Astronomical sub-disciplines"
] |
312,439 | https://en.wikipedia.org/wiki/Aspergillus%20niger | Aspergillus niger is a mold classified within the Nigri section of the Aspergillus genus. The Aspergillus genus consists of common molds found throughout the environment within soil and water, on vegetation, in fecal matter, on decomposing matter, and suspended in the air. Species within this genus often grow quickly and can sporulate within a few days of germination. A combination of characteristics unique to A. niger makes the microbe invaluable to the production of many acids, proteins and bioactive compounds. Characteristics including extensive metabolic diversity, high production yield, secretion capability, and the ability to conduct post-translational modifications are responsible for A. niger's robust production of secondary metabolites. A. niger's capability to withstand extremely acidic conditions makes it especially important to the industrial production of citric acid.
A. niger causes a disease known as "black mold" on certain fruits and vegetables such as grapes, apricots, onions, and peanuts, and is a common contaminant of food. It is ubiquitous in soil and is commonly found in indoor environments, where its black colonies can be confused with those of Stachybotrys (species of which have also been called "black mold"). A. niger is classified as generally recognized as safe (GRAS) by the US Food and Drug Administration for use in food production, although the microbe is capable of producing toxins that affect human health.
Taxonomy
Aspergillus niger is included in Aspergillus subgenus Circumdati, section Nigri. The section Nigri includes 15 related black-spored species that may be confused with A. niger, including A. tubingensis, A. foetidus, A. carbonarius, and A. awamori. In 2004, a number of morphologically similar species were described by Samson et al.
In 2007, the strain of ATCC 16404 Aspergillus niger was reclassified as Aspergillus brasiliensis (refer to publication by Varga et al.). This required an update to the U.S. Pharmacopoeia and the European Pharmacopoeia, which commonly use this strain throughout the pharmaceutical industry.
Cultivation
A. niger is a strict aerobe; therefore, it requires oxygen to grow. The fungus can grow in a range of environmental conditions; it can grow at temperatures ranging from 6 to 47 °C. As a mesophile, its optimal temperature range is 35-37 °C. It can tolerate pH ranging from 1.5 to 9.8. A. niger is xerophilic, meaning it can grow and reproduce in environments with very little water. It can also grow in humid conditions even tolerating environments with 90-100% relative humidity. The fungus is most commonly grown on potato dextrose agar (PDA), but it can grow on many different types of growth media including Czapek-Dox agar, lignocellulose agar, and several others.
Genome
Aspergillus niger has a genome consisting of roughly 34 megabases (Mb) organized into eight chromosomes. The DNA contains 10,785 genes which are transcribed and translated into 10,593 proteins.
Two strains of A. niger have been sequenced. Strain CBS 513.88 produces enzymes used in industrial applications while strain ATCC 1015 is the wildtype strain of ATCC 11414 used to produce industrial citric acid (CA). The A. niger ATCC 1015 genome was sequenced by the Joint Genome Institute in a collaboration with other institutions. Completed sequences have been used to uncover orthologous genes and pathways involved in fungal metabolism, specifically the catabolism of monosaccharides. The ability of A. niger to change its metabolism depending on the carbon sources and other nutrients present in its environment has enabled the microorganism to survive and be found in almost all ecosystems. Further research is being done to study these mechanisms for all fungi using the complete sequenced genome of A. niger.
Industrial uses
There are two ways in which Aspergillus niger can be grown for industrial purposes: solid state fermentation (SSF) and submerged fermentation (SmF). SSF uses a solid substrate with nutrients and minimal moisture to grow microorganisms. Nutrients such as nitrogen and carbon come from agricultural byproducts such as wheat bran, sugar pulp, rice husks, and corn flour. SSF gives better yield of microbe products and is more cost effective than SmF due to using agricultural byproducts. SSF is predominantly used over SmF. In SmF, microbes are grown in a liquid medium inside large aseptic fermentation vessels. These vessels are expensive pieces of equipment that provide more water for growth and allow for tight control of environmental factors, such as temperature and pH, that affects microbial growth.
Aspergillus niger is cultured to facilitate the industrial production of many substances. Various strains of A. niger are used in the industrial preparation of citric acid (E330) and gluconic acid (E574); therefore, they have been deemed acceptable for daily intake by the World Health Organization. A. niger fermentation is "generally recognized as safe" (GRAS) by the United States Food and Drug Administration under the Federal Food, Drug, and Cosmetic Act. A. niger is also being considered as a potential new source of natural food grade pigments.
The production of citric acid (CA) is achieved by growing strains of A. niger in a nutrient rich medium that includes high concentrations of sugar and mineral salts and an acidic pH of 2.5-3.5. Many microorganisms produce CA, but Aspergillus niger produces more than 1 million metric tons of CA annually via a fungal fermentation process. CA is in high demand for applications such as the control of microorganism growth, food and beverage flavor enhancement, acidity manipulation, pharmaceuticals, etc.
A. niger produces many useful enzymes for the catabolism of biopolymers in order to obtain nutrients from its environment. The production of specific enzymes can be increased for industrial purposes. For example, A. niger glucoamylase is used in the production of high-fructose corn syrup, and pectinases (GH28) are used in cider and wine clarification. Alpha-galactosidase (GH27), an enzyme that breaks down certain complex sugars, is a component of Beano and several other products that decrease flatulence. Another use for A. niger within the biotechnology industry is in the production of magnetic isotope-containing variants of biological macromolecules for NMR analysis. Aspergillus niger is also cultured for the extraction of the enzyme glucose oxidase, used in the design of glucose biosensors, due to its high affinity for β-D-glucose.
In the food industry, A. niger is also cultured to isolate the enzyme fructosyltransferase to produce fructooligosaccharides (FOS). FOS are used to manufacture low-calorie and functional foods due to FOS characteristic ability to slow growth of pathogenic microorganisms in the intestines. These foods have prebiotic fiber among other health promoting properties. A. niger is not the only organism to produce the enzyme fructosyltransferase, but it has been found to produce the enzyme at rates conducive to industrial production. A specific use of A. niger within the food industry is its capability to produce enzymes like carbohydrase and cellulase, which are commonly used in the seafood industry for removing the bellies of clams during processing and removing the tough external skin of shrimp from their edible internal tissue.
Aspergillus niger can grow in gold-mining solutions containing cyano-metal complexes with gold, silver, copper, iron, and zinc. The fungus also plays a role in the solubilization of heavy-metal sulfides. A. niger has also been shown to remediate acid mine drainage through biosorption of copper and manganese.
Toxicity
A. niger produces a wide variety of secondary metabolites, some of which are mycotoxins called ochratoxins, such as ochratoxin A. Contamination by filamentous fungi, such as A. niger, occurs frequently in grapes and grape-based products, resulting in contamination by ochratoxin A (OTA). OTA, a clinically relevant mycotoxin, can accumulate in human tissue and cause a variety of serious health conditions. Potential consequences of OTA poisoning include kidney damage, kidney failure, and cancer, but the United States Food and Drug Administration (FDA) has not set maximum permissible levels of OTA in food, unlike the EU, which has set maximum permissible levels in a variety of food products.
Pathogenicity
Plant pathogen
Aspergillus niger can cause black mold infections in certain legumes, fruits, and vegetables such as peanuts, grapes, and onions, leading to the fungus being a common food contaminant. This filamentous ascomycete has a tolerance to changes in pH, humidity, and heat, thriving in a temperature range from . These characteristics make infections of A. niger a common cause of post-harvest decay in fruits and vegetables, which can lead to significant economic loss in the food industry. A. niger infection in plants can cause a reduction in seed germination, seedling emergence, root elongation, and shoot elongation, causing the plant to perish before maturation. Specifically, Aspergillus niger causes sooty mold on onions and ornamental plants.
Human pathogen
A. niger is pathogenic. Aspergillosis is a fungal infection caused by spores of indoor and outdoor Aspergillus mold species. Due to the ubiquitous nature of A. niger, its spores are commonly inhaled by humans from their surrounding environment. Aspergillosis infection customarily occurs in people with compromised immune systems or pre-existing lung conditions like asthma and cystic fibrosis. Types of aspergillosis include allergic bronchopulmonary aspergillosis (ABPA), allergic aspergillus sinusitis, azole-resistant aspergillus fumigatus, cutaneous (skin) aspergillosis, and chronic pulmonary aspergillosis. Out of the approximately 180 species of aspergillus molds, roughly 40 species have been found to cause health concerns in immunocompromised humans. Aspergillosis is particularly frequent among horticultural workers who often inhale peat dust, which can be rich in Aspergillus niger spores. The fungus has also been found in ancient Egyptian mummies and can be inhaled when they are disturbed. Otomycosis, which is a superficial fungal infection of the ear canal, is another disorder that can be caused by overgrowth of Aspergillus molds like A. niger. Otomycosis caused by A. niger is frequently associated with mechanical damage of the ear canal's external skin barrier and often presents itself in patients living in tropical climates. A. niger is rarely reported to cause pneumonia compared to other Aspergillus species, such as Aspergillus flavus, Aspergillus fumigatus, and Aspergillus terreus.
Gallery
See also
Contamination control
References
External links
Aspergillosis information, Centers for Disease Control and Prevention, US Department of Health and Human Services
A. niger ATCC 1015 genome
Aspergillus website (Manchester University, UK)
niger
Biotechnology
Fungal grape diseases
Fungal plant pathogens and diseases
Molds used in food production
Fungi described in 1867
Fungi in cultivation
Fungus species | Aspergillus niger | [
"Biology"
] | 2,457 | [
"nan",
"Fungi",
"Fungus species",
"Biotechnology"
] |
312,562 | https://en.wikipedia.org/wiki/Scarecrow%20%28DC%20Comics%29 | The Scarecrow is a supervillain appearing in American comic books published by DC Comics. Created by writer Bill Finger and artist Bob Kane, the character first appeared in World's Finest Comics #3 (September 1941), and has become one of the superhero Batman's most enduring enemies belonging to the collective of adversaries that make up his rogues gallery.
In the DC Universe, the Scarecrow is the alias of Jonathan Crane, a professor of psychology turned criminal mastermind. Abused and bullied in his youth, he becomes obsessed with fear and develops a hallucinogenic drug—dubbed "fear toxin"—to terrorize Gotham City and exploit the phobias of its protector, Batman. As the self-proclaimed "Master of Fear", the Scarecrow's crimes do not stem from a common desire for wealth or power, but from a sadistic pleasure in subjecting others to his experiments on the manipulation of fear. An outfit symbolic of his namesake with a stitched burlap mask serves as the Scarecrow's visual motif.
The character has been adapted in various media incarnations, having been portrayed in film by Cillian Murphy in The Dark Knight Trilogy, and in television by Charlie Tahan and David W. Thompson in the Fox series Gotham, and Vincent Kartheiser in the HBO Max streaming series Titans. Henry Polic II, Jeffrey Combs, Dino Andrade, John Noble, and Robert Englund, among others, have provided the Scarecrow's voice in animation and video games.
Publication history
Batman creators Bill Finger and Bob Kane introduced the Scarecrow as a new villain in World's Finest Comics #3 (September 1941) during the Golden Age of Comic Books, in which he made only two appearances. Ichabod Crane, the protagonist of Washington Irving's The Legend of Sleepy Hollow, was used as an inspiration for the character's lanky appearance as well as his alter ego, Jonathan Crane.
Scarecrow was revived during the Silver Age of Comic Books by writer Gardner Fox and artist Sheldon Moldoff in Batman #189 (February 1967), which featured the debut of the character's signature fear-inducing hallucinogen or "fear toxin". The character remained relatively unchanged throughout the Bronze Age of Comic Books.
Following the 1986 multi-title event Crisis on Infinite Earths reboot, the character's origin story is expanded on in Batman Annual #19 and the miniseries Batman/Scarecrow: Year One, with this narrative also revealing that Crane has a fear of bats. In 2011, as a result of The New 52 reboot, Scarecrow's origin (as well as that of various other DC characters) is once again altered, incorporating several elements that differ from the original.
Fictional character biography
Backstory
Born in Georgia, Jonathan Crane is abused by his great-grandmother, and is bullied at school for his resemblance to Ichabod Crane from Washington Irving's "The Legend of Sleepy Hollow", sparking his lifelong obsession with fear and using it as a weapon against others. In his senior year, Crane is humiliated by school bully Bo Griggs and rejected by cheerleader Sherry Squires. He takes revenge during the senior prom by donning his trademark scarecrow costume and brandishing a gun in the school parking lot; in the ensuing chaos, Griggs gets into a car accident, paralyzing himself and killing Squires.
Crane's obsession with fear leads him to become a psychologist, taking a position at Arkham Asylum and performing fear-inducing experiments on his patients. He is also a professor of psychology at Gotham University, specializing in the study of phobias. He loses his job after he fires a gun inside a packed classroom, accidentally wounding a student; he takes revenge by killing the professors responsible for his termination and becomes a career criminal.
As a college professor, Crane mentors a young Thomas Elliot. The character also has a cameo in Sandman (vol. 2) #5. In stories by Jeph Loeb and Tim Sale, the Scarecrow is depicted as one of the more deranged criminals in Batman's rogues gallery, with a habit of speaking in nursery rhymes. These stories further revise his history, explaining that he was raised by his abusive, fanatically religious great-grandfather, whom he murdered as a teenager.
Criminal career
Scarecrow plays a prominent role in Doug Moench's "Terror" storyline, set in Batman's early years, where Professor Hugo Strange breaks him out of Arkham and gives him "therapy" to train him to defeat Batman. Strange's therapy proves effective enough to turn the Scarecrow against his "benefactor", impaling him on a weather vane and throwing him in the cellar of his own mansion. The Scarecrow then uses Strange's mansion to lure Batman to Crime Alley, and decapitates one of his former classmates in the alley in front of Batman. With the help of Catwoman, whom Scarecrow had attempted to blackmail into helping him by capturing her and photographing her unmasked, Batman catches Scarecrow, but loses sight of Strange, with it being unclear whether Strange had actually survived the fall onto the weather vane, or if Scarecrow and Batman are hallucinating from exposure to Scarecrow's fear toxin.
Scarecrow appears in Batman: The Long Halloween, first seen escaping from Arkham on Mother's Day with help from Carmine Falcone, who also helps the Mad Hatter escape. The Scarecrow gases Batman with fear toxin as he escapes, causing Batman to flee to his parents' grave as Bruce Wayne, where he is arrested by Commissioner Jim Gordon due to Wayne's suspected ties to Falcone. Scarecrow robs a bank with the Mad Hatter on Independence Day for Falcone, but is stopped by Batman and Catwoman. He later appears in Falcone's office on Halloween with Batman's future rogue's gallery, but is defeated by Batman. Scarecrow returns in Batman: Dark Victory as part of Two-Face's gang, and is first seen putting fear gas in children's dolls on Christmas Eve. He is eventually defeated by Batman. He later appears as one of the villains present at Calendar Man's trial. It is revealed he and Calendar Man had been manipulating Falcone's son Alberto; Scarecrow had determined that Alberto feared his father, and poisoned his cigarettes with the fear toxin to bring out the fear; Calendar Man, meanwhile, had been talking to Alberto, with the fear toxin making Alberto hear his father's voice. Together, they manipulate Alberto into making an unsuccessful assassination attempt on his sister, Sofia Gigante. After Two-Face's hideout is attacked, Batman captures Scarecrow, who tells him where Two-Face is heading. In Catwoman: When in Rome, Scarecrow supplies the Riddler with fear gas to manipulate Catwoman, and later aids Riddler when he fights Catwoman in Rome. Scarecrow accidentally attacks Cheetah with his scythe before Catwoman knocks him out.
The Scarecrow appears in such story arcs as Knightfall and Shadow of the Bat, first teaming with the Joker to ransom off the mayor of Gotham City. Batman foils their plan and forces them to retreat. Scarecrow betrays Joker by spraying him with fear gas, but it has no effect; Joker then beats Scarecrow senseless with a chair. Scarecrow later tries to take over Gotham with an army of hypnotized college students, commanding them to spread his fear toxin all over the city. His lieutenant is the son of the first man he killed. He is confronted by both Batman-Azrael and Anarky and tries to escape by forcing his lieutenant to jump off of a building. Batman-Azrael knocks him out, and Anarky manages to save the boy. Despite his criminal history, he is still recognized as a skilled psychologist. When Aquaman needs insight into a serial killer operating in his new city of Sub Diego—San Diego having been sunk and the inhabitants turned into water-breathers by a secret organization—he consults with Scarecrow for insight into the pattern of the killer's crimes. Scarecrow determined that killer chose his victims by the initials of their first and last names to spell out the message 'I can't take it any more', allowing Aquaman to determine both the true identity and final target of the real killer.
In DC vs. Marvel, the Scarecrow temporarily allies with the Marvel Universe Scarecrow to capture Lois Lane before they are both defeated by Ben Reilly.
In the 2004 story arc As the Crow Flies, Scarecrow is hired by the Penguin under false pretenses. Dr. Linda Friitawa then secretly mutates Scarecrow into a murderous creature known as the "Scarebeast", who Penguin uses to kill off his disloyal minions. The character's later appearances all show him as an unmutated Crane again, except for an appearance during the War Games story arc. Scarecrow appears in the third issue of War Games saving Black Mask from Batman and acting as the crime lord's ally, until Black Mask uses him to disable a security measure in the Clock Tower by literally throwing Scarecrow at it. Scarecrow wakes up, transforms into Scarebeast, and wreaks havoc outside the building trying to find and kill Black Mask. The police are unable to take it down, and allow Catwoman, Robin, Tarantula II, and Onyx to fight Scarebeast, as Commissioner Michael Akins had told all officers to capture or kill any vigilantes, costumed criminals or "masks" they find. Even they cannot defeat the Scarebeast, though he appears to have been defeated after the Clock Tower explodes.
The Scarecrow reappears alongside other Batman villains in Gotham Underground; first among the villains meeting at the Iceberg Lounge to be captured by the Suicide Squad. Scarecrow escapes by gassing Bronze Tiger with fear toxin. He later appears warning the Ventriloquist II, Firefly, Killer Moth and Lock-Up, who are planning to attack the Penguin that Penguin is allied with the Suicide Squad. The villains wave off his warnings and mock him. He later leads the same four into a trap orchestrated by Tobias Whale. Killer Moth, Firefly and Lock-Up all survive, but are injured and unconscious to varying degrees, the Scarface puppet is "killed", and Peyton Reily, the new Ventriloquist, is unharmed, though after the attack she is taken away by Whale's men. Whale then betrays Scarecrow simply for touching his shoulder (it is revealed Whale has a pathological hatred of "masks" because his grandfather was one of the first citizens of Gotham killed by a masked criminal). The story arc ends with Whale beating Scarecrow up and leaving him bound and gagged, as a sign to all "masks" that they are not welcome in Whale's new vision of Gotham.
Scarecrow appears in Batman: Hush, working for the Riddler and Hush. He composes profiles on the various villains of Gotham so Riddler and Hush can manipulate them to their own ends. He later gases Huntress with his fear gas, making her attack Catwoman. He attacks Batman in a graveyard, only to learn his fear gas is ineffective (due to Hush's bug), but before he can reveal this he is knocked out by Jason Todd. Scarecrow also appears in Batman: Heart of Hush, kidnapping a child to distract Batman so Hush can attack Catwoman. When Batman goes to rescue the child, Scarecrow activates a Venom implant, causing the boy to attack Batman. He is defeated when Batman ties the boy's teddy bear to Scarecrow, causing the child to attack Scarecrow. After capturing Scarecrow, Batman forces him to reveal Hush's location. In the Battle for the Cowl storyline, Scarecrow is recruited by a new Black Mask to be a part of a group of villains who are aiming to take over Gotham in the wake of Batman's apparent death. He later assists the crime lord in manufacturing a recreational drug called "Thrill," which draws the attention of Oracle and Batgirl. He is later defeated by Batgirl and once again arrested.
Blackest Night
Scarecrow briefly appears in the fourth issue of the Blackest Night storyline. His immunity to fear (brought about by frequent exposure to his own fear toxin) renders him practically invisible to the invading Black Lanterns. The drug has taken a further toll on his sanity, exacerbated by Batman's disappearance in the Batman R.I.P. storyline; he develops a literal addiction to fear, exposing himself deliberately to the revenant army, but knowing that only Batman could scare him again. Using a duplicate of Sinestro's power ring, he is temporarily deputized into the Sinestro Corps to combat the Black Lanterns. Overjoyed at finally being able to feel fear again, Scarecrow gleefully and without question follows Sinestro's commands. His celebration is cut short when Lex Luthor, overwhelmed by the orange light of Avarice, steals his ring.
Brightest Day
In Brightest Day, Scarecrow begins kidnapping and murdering college interns working for LexCorp as a way of getting back at Lex Luthor for stealing his ring. When Robin and Supergirl attempt to stop him, Scarecrow unleashes a new fear toxin that is powerful enough to affect a Kryptonian. The toxin forces Supergirl to see visions of a Black Lantern Reactron, but she is able to snap out of the illusion and help Robin defeat Scarecrow. He is eventually freed from Arkham when Deathstroke and the Titans break into the asylum to capture one of the inmates.
The New 52
In 2011, The New 52 rebooted the DC universe. Scarecrow is a central villain in the Batman family of books and first appeared in the New 52 in Batman: The Dark Knight #4 (February 2012), written by David Finch and Paul Jenkins. His origin story is also altered; in this continuity, his father Gerald Crane used him as a test subject in his fear-based experiments. During one of these experiments, Crane's father locked him inside a little dark room, but suffered a fatal heart attack before he could let Jonathan out. Jonathan was trapped in the test chamber for days until being freed by some employers of the university. As a result of this event, he was irreparably traumatized and developed an obsession with fear. He became a psychologist, specializing in phobias. Eventually, Crane began using patients as test subjects for his fear toxin. His turn to criminality is also markedly different in this version; the New 52 Scarecrow is fired from his professorship for covering an arachnophobic student with spiders, and becomes a criminal after stabbing a patient to death.
The Scarecrow kidnaps Poison Ivy, and works with Bane to create and distribute to various Arkham inmates a new form of Venom infused with the Scarecrow's fear toxin. With the help of Superman and the Flash, Batman defeats the villains. The Scarecrow surfaces again in Batman: The Dark Knight #10, penned by Gregg Hurwitz, for a six-issue arc. The Scarecrow kidnaps Commissioner James Gordon and several children, and eventually releases his fear toxin into the atmosphere. Scarecrow is also used as a pawn by the Joker in the "Death of the Family" arc; he is referred to as Batman's physician. Scarecrow appears in Swamp Thing (vol. 5) #19 (June 2013), clipping flowers for his toxins at the Metropolis Botanical Garden. Swamp Thing attempts to save Scarecrow from cutting a poisonous flower, not realizing who the villain is. Scarecrow attempts to use his fear toxin on Swamp Thing. The toxin causes Swamp Thing to lose control of his powers until Superman intervenes. He is later approached by the Outsider of the Secret Society of Super Villains to join up with the group. Scarecrow accepts the offer.
As part of "Villains Month", Detective Comics (vol. 2) #23.3 (Sept. 2013) was titled The Scarecrow #1. Scarecrow goes to see Killer Croc, Mr. Freeze, Poison Ivy, and Riddler and informs them of a war at Blackgate Penitentiary is coming and learns where each of the alliances lives. Through his conversations with each, Scarecrow learns that Bane may be the cause of the Blackgate uprising and will be their leader in the impending war. It was also stated that Talons from the Court of Owls were stored at Blackgate on ice. Later, looking over the divided city, Scarecrow claims that once the war is over and the last obstacle has fallen, Gotham City would be his. Scarecrow approaches Professor Pyg at Gotham Memorial Hospital to see if he will give his supplies and Dollotrons to Scarecrow's followers. Scarecrow goes to Penguin next, who has already planned for the impending war, by blowing up the bridges giving access to Gotham City. Scarecrow and Man-Bat attempt to steal the frozen Talons from Blackgate while Penguin is having a meeting with Bane. Killer Croc rescues Scarecrow and Man-Bat from Blackgate and brings Scarecrow to Wayne Tower, where he gives Killer Croc control of Wayne Tower, as it no longer suits him. Scarecrow begins waking the Talons in his possession, having doused them with his fear gas and using Mad Hatter's mind-control technology in their helmets to control them. At Arkham Asylum, Scarecrow senses that he has lost the Talons after Bane freed them from Mad Hatter's mind-control technology. Scarecrow then turns to his next plan, giving the other inmates a small dose of Bane's Venom to temporarily transform them. Upon Bane declaring that Gotham City is finally his, he has Scarecrow hanged between two buildings.
In Batman and Robin Eternal, flashbacks reveal that Scarecrow was the first villain faced by Dick Grayson as Robin in the New 52 universe when his and Batman's investigations into Scarecrow's crimes lead Batman to Mother, a woman who believes that tragedy and trauma serve as 'positive' influences to help people become stronger. To this end, Mother has Scarecrow develop a new style of fear toxin that makes the brain suffer the same experience as witnessing a massive trauma, but Scarecrow turns against Mother as the victims of this plan would become incapable of feeling anything. Recognizing that Mother will kill him once he has outlived his usefulness, Scarecrow attempts to turn himself over to Batman, but Batman uses this opportunity to have him deliver a fake psychological profile of him to Mother, claiming that Batman is a scarred child terrified of losing the people he cares for to make Mother think she understands him. In the present day, as Mother unleashes a new hypnotic signal to take control of the world's children, the Bat-Family abduct Scarecrow to brew up a new batch of his trauma toxin after determining that it nullifies the controlling influence of Mother's signal until they can shut down her main base.
DC Rebirth
In DC Rebirth, Scarecrow works with the Haunter to release a low dose of fear toxin around Gotham on Christmas and sets up a small stand for her to pick up the toxin. Both he and Haunter are paralyzed by the toxin's effects, allowing Batman to apprehend them. The Scarecrow later emerges using a Sinestro Corps power ring to induce fear and rage against Batman in random citizens throughout Gotham, to the point where he provokes Alfred Pennyworth into threatening to shoot Simon Baz as part of his final assault. In Doomsday Clock, Scarecrow is among the villains who meet with the Riddler to discuss the Superman Theory. Wanting to take on villains outside his rogues gallery, Shazam flies to Gotham City where he hears about a hostage situation caused by Scarecrow. Shazam starts to fight him when he begins to get affected by the fear gas. Batman shows up and regains control of the situation by defeating Scarecrow and administering the antidote. As Scarecrow is arrested, Batman states to Shazam that Scarecrow is too dangerous for him to fight.
Infinite Frontier
During Infinite Frontier, a re-designed Crane is the main foe of the crossover Fear State.
Characterization
Skills and equipment
A master strategist and manipulator, his genius labels him as one of the most cunning criminal masterminds. Crane is a walking textbook on anxiety disorders and psychoactive drugs; he is able to recite the name and description of nearly every known phobia. He is even known to have a frightening ability to tamper with anyone's mind with just words, once managing to drive two men to suicide, and uses this insight to find people's mental pressure points and exploit them. Despite his scrawny build, Crane is a skilled martial artist who uses his long arms and legs in his personal combat style known as "violent dancing", developed during his training in the Kung Fu style of the White Crane, for which Scarecrow sometimes wields a sickle or scythe.
Scarecrow also has proficiency in both biochemistry and toxicology, both important to the invention of his fear toxin, which he atomized with mixed chemicals, including powerful synthetic adrenocortical secretions and other potent hallucinogens that can be inhaled or injected into the bloodstream to amplify the victim's darkest fear into a terrifying hallucination. Its potency has upgraded to an extreme level over the years; in some stories in which it appears, fear toxin is depicted as capable of prompting almost instantaneous, terror-induced heart attacks, leaving the victim in a permanent psychosis of chronic fear. Other versions of the toxin are powerful enough that even Superman can be affected; in one story, he mixes the toxin with kryptonite to simultaneously weaken and terrify the Man of Steel. To instill his toxin, he often uses a hand-held sprayer in the shape of a human skull and special straws which can be snapped in half to release it. In one story, Scarecrow concocts a chemical containing wildfowl pheromones from his childhood that causes nearby birds to attack his opponents.
Powers and abilities
In the story arc As the Crow Flies, after being secretly mutated by Dr. Linda Friitawa, Scarecrow gains the ability to turn into a large, monstrous creature called the Scarebeast. As Scarebeast, he has greatly enhanced strength, endurance, and emits a powerful fear toxin from his body. However, he has to be under physical strain or duress to transform. During the Blackest Night mini-series, Scarecrow is temporarily deputized into the Sinestro Corps by a duplicate of Sinestro's Power ring. He proves to be very capable in manipulating the light of fear to create constructs until his ring is stolen by Lex Luthor.
Personality
Crane, in almost all of his incarnations, is cruel, sadistic, deranged, and manipulative above all else. Crane is obsessed with fear, and takes sadistic pleasure in frightening his victims, often literally to death, with his fear toxin. Crane also suffers from brain damage from prolonged exposure to his own toxin that renders him nearly incapable of being afraid of anything - except Batman. This is problematic for him, as he is addicted to fear and compulsively seeks out confrontations with Batman to feed his addiction. He is also known to have a warped sense of humor, though not to the level of Black Mask or the Joker, as he has been known to frequently make taunts and quips related to his using his fear toxin or his love of terrifying others. During Alan Grant's "The God of Fear" storyline, Scarecrow develops a god complex; he creates an enormous hologram of himself that he projects against the sky, so he will be recognized and worshipped by the citizens of Gotham as a literal god of fear.
Other characters named Scarecrow
Madame Crow
Abigail O'Shay is a Gotham University student who writes her doctoral thesis on vigilantes like the Bat-Family, whom she calls the "cape and cowl crowd". She is fascinated by the kind of trauma a person would have to go through to fight criminals while in costume. She learns about such trauma first hand when Jonathan Crane, then uses her as the test subject in experiments using his fear toxin, intending to test its readiness for use on Batman. She spends more than a year in Arkham Asylum recuperating from Scarecrow's experiments. Blaming Batman for her trauma, O'Shay adopted the identity of Madame Crow with the intention of making sure no one would feel the kind of fear she did ever again as she becomes a member of the Victim Syndicate. In a reversal to Scarecrow's fear toxin, Madame Crow has a set of gauntlets that fire needles filled with "anti-fear" toxin, which removes fear in the hope of keeping people from fighting to avoid their own trauma.
Alternative versions
As one of Batman's most recognizable and popular opponents, the Scarecrow appears in numerous comics that are not considered part of the regular DC continuity, including:
The Scarecrow appears in Batman/Daredevil: King of New York, in which he attempts to use the Kingpin's criminal empire to disperse his fear gas over New York City. He is defeated when Daredevil, the "Man Without Fear", proves immune to the gas.
The Scarecrow is featured in part two of the four-part JSA: The Liberty Files. This version of Scarecrow is portrayed as a German agent who kills a contact working for the Bat (Batman), the Clock (Hourman), and the Owl (Doctor Mid-Nite). In a struggle with Scarecrow, the fiancée of the agent Terry Sloane is killed. This causes Sloane to return to the field as Mister Terrific and kill Scarecrow.
A stand-in for Jonathan Crane named Jenna Clarke / Scarecrone appears in the Elseworlds original graphic novel Batman: Dark Knight Dynasty as a henchwoman/consort under the employ of Vandal Savage. Scarecrone also acts as a stand-in for Two-Face. She has the power to invade a person's psyche and make their deepest fears appear as illusions simply by touching them. "Scarecrone" is actually her alternate personality. Vandal Savage requires Clarke to switch to her Scarecrone persona through a special formula that he has made Clarke dependent on. The two personalities are antagonistic towards each other. It is revealed that when the formula brings out Scarecrone, the right side of her face becomes heavily scarred. This scarring is healed once the formula wears off and the Jenna Clarke personality becomes dominant again.
The Scarecrow is one of the main characters in Alex Ross' maxi-series Justice as part of the Legion of Doom. He is first seen out of costume in a hospital, injecting a girl in a wheelchair with a serum allowing her to walk. Scarecrow is later seen in costume during Lex Luthor's speech alongside Clayface inside the home of Black Canary and Green Arrow. Scarecrow gases Canary while Clayface attacks Green Arrow, but the attack fails when Black Canary finds her husband attacked by Clayface. Green Arrow defeats Clayface by electrocuting him with a lamp, and the duo flee soon after Canary unleashes her Canary Cry. Scarecrow is later seen with Clayface and Parasite, having captured Commissioner James Gordon, Batgirl, and Supergirl. When the Justice League storms the Hall of Doom, Scarecrow does not appear to face any particular target and duels the League as a whole. He is one of the few villains to escape the League's initial attack. The Justice League follows Scarecrow to his city, whereupon he sends his city's population to attack the League, knowing that they would not hurt civilians. However, John Stewart's ring frees the city from Scarecrow's control, subsequently freeing Scarecrow from Brainiac's control. Scarecrow does not seem bothered by this realization, admitting he would have done it anyway. He causes a diversion by releasing his fear gas into his entire city, driving his citizens into a homicidal frenzy, and manages to escape capture, but he is ambushed and nearly killed by the Joker in retaliation for not having been invited to the Legion of Doom. Scarecrow's city is again saved by the Justice League.
The Scarecrow appears in the third and final chapter of Batman & Dracula: Red Rain, in which he has adorned his Scarecrow costume with laces of the severed fingers of the bullies who tormented him in school. He is about to kill a former football player when vampire Batman appears, noting that Scarecrow is worse than him; as a vampire, he is driven to kill by forces beyond his control, while Scarecrow chooses to be a murderer. Batman then grabs Scarecrow's vial of fear gas, crushing it along with the supervillain's hand, and cuts Scarecrow's head off with his own sickle, declaring that Scarecrow has no idea what fear really is.
In the New 52 Batman Beyond books that takes place after Futures End, the future Batman/Terry McGinnis fights a new, female version of the Scarecrow named Adalyn Stern. As a child, Adalyn was traumatized when she witnessed Batman brutally beat up her father (who was a notorious gang leader). She was placed in institutional care until she was assigned to one of Jonathan Crane's disciples who attempted to treat her with technology derived from Crane's work, which only amplified her fear of Batman. She grows up and becomes a co-anchor to Jack Ryder on the New 52. She uses A.I. cubes placed in everyone's homes to brainwash the population into believing that the new Batman is a demon that needs to be put down. She is eventually defeated by the combined efforts of the original and new Batman as well as Jack Ryder and is institutionalized in Arkham Asylum afterward when she views herself as nothing but the Scarecrow.
In the alternate timeline of Flashpoint, Scarecrow is one of the many villains subsequently killed by Thomas Wayne, who is that universe's Batman.
In the graphic novel Batman: Earth One, Dr. Jonathan Crane is mentioned as the head of the Crane Institute for the Criminally Insane, and one of its escapees is one Ray Salinger, also known as the "Birthday Boy", used by Mayor Cobblepot to his advantages.
In the Batman/Teenage Mutant Ninja Turtles crossover, the Scarecrow appears mutated into a raven as one of the various other Arkham inmates mutated by Shredder and the Foot Clan to attack Batman and Robin. Batman is captured, but Robin manages to escape. The Teenage Mutant Ninja Turtles and Splinter then arrive, where Splinter defeats the mutated villains, while Batman uses his new Intimidator Armor to defeat Shredder, and the Turtles defeat Ra's al Ghul. Later, Jim Gordon tells Batman that the police scientists have managed to turn all of the inmates at Arkham back to normal, and that they are currently in A.R.G.U.S. custody.
Scarecrow makes a minor appearance in the 2017 series Batman: White Knight. Crane, along with several other Batman villains, is tricked by Jack Napier (who in this reality was a Joker who had been force-fed an overdose of pills by Batman which temporarily cured him of his insanity) into drinking drinks that had been laced with particles from Clayface's body. This was done so that Napier, who was using Mad Hatter's technology to control Clayface, could control them by way of Clayface's ability to control parts of his body that had been separated from him. Scarecrow and the other villains are then used to attack a library which Napier himself was instrumental in building in one of Gotham City's poorer districts. Later on in the story, the control hat is stolen by Neo-Joker (the second Harley Quinn, who felt that Jack Napier was a pathetic abnormality while Joker was the true, beautiful personality), in an effort to get Napier into releasing the Joker persona. Scarecrow also appears in the sequel storyline Batman: Curse of the White Knight, being among the villains murdered by Azrael.
The Scarecrow makes a cameo appearance in Arkham Asylum: A Serious House on Serious Earth.
Dr. Jonathan Crane/Scarecrow is one of the main antagonists in the Batman '89 series Echoes.
In other media
See also
List of Batman family enemies
References
External links
Scarecrow at DC CONTINUITY PROJECT
Scarecrow at DC Database
Scarecrow at Comic Vine
Action film villains
Batman characters
Characters created by Bill Finger
Characters created by Bob Kane
Comics characters introduced in 1941
DC Comics film characters
Fictional victims of child abuse
DC Comics male supervillains
DC Comics scientists
DC Comics television characters
Fictional bibliophiles
Fictional biochemists
Fictional inventors in comics
Fictional mad scientists
Fictional mass murderers
Fictional monsters
Fictional psychologists
Fictional scarecrows
Fictional terrorists
Fictional toxicologists
Film supervillains
Golden Age supervillains
Male film villains
Video game bosses
Villains in animated television series | Scarecrow (DC Comics) | [
"Chemistry"
] | 6,778 | [
"Fictional biochemists",
"Biochemists"
] |
312,623 | https://en.wikipedia.org/wiki/Schindler%27s%20Ark | Schindler's Ark is a historical fiction novel published in 1982 by the Australian novelist Thomas Keneally. It is based on the fictionalized story of the historical figure Oskar Schindler. The United States edition of the book was titled Schindler's List; it was later reissued in Commonwealth countries under that name as well. The novel won the Booker Prize, a literary award conferred each year for the best single work of sustained fiction written in the English language, and was awarded the Los Angeles Times Book Prize for Fiction in 1983.
The book tells the story of Oskar Schindler, a member of the Nazi Party who becomes an unlikely hero by saving the lives of 1,200 Jews during the Holocaust. It follows actual people and events, with fictional dialogue and scenes added by the author where exact details are unknown. Keneally wrote a number of well-received novels before and after Schindler's Ark; however, in the wake of its highly successful 1993 film adaptation directed by Steven Spielberg, it has since become his most well-known and celebrated work.
In 2022, the novel was included on the "Big Jubilee Read" list of 70 books by Commonwealth authors, selected to celebrate the Platinum Jubilee of Elizabeth II.
Background
Poldek Pfefferberg, a Holocaust survivor and Schindlerjude, inspired Keneally to write Schindler's Ark. After the war, Pfefferberg had tried on a number of occasions to interest the screenwriters and filmmakers he met through his business in making a film based on the story of Schindler and his efforts to save Polish Jews from the Nazis, as well as arranging several interviews with Schindler for American television.
Keneally's meetings with Pfefferberg and his research and interviews of Schindler's acquaintances are detailed in his 2007 book Searching for Schindler: A Memoir. In October 1980, Keneally went into Pfefferberg's shop in Beverly Hills to ask about the price of briefcases. Learning that Keneally was a novelist, Pfefferberg showed him his extensive files on Schindler, kept in two cabinets in his back room. After 50 minutes of entreaties, Pfefferberg was able to convince Keneally to write the book. Pfefferberg became an advisor, accompanying Keneally to Poland, where they visited Kraków and other sites associated with the Schindler story. Keneally dedicated Schindler's Ark to Pfefferberg: "who by zeal and persistence caused this book to be written."
After the publication of Schindler's Ark in 1982, Pfefferberg worked to persuade Steven Spielberg to film Keneally's book, using his acquaintance with Spielberg's mother to gain access.
A carbon copy of Schindler's original 13-page list, initially thought to be lost, was discovered in 2009 in a library in Sydney, Australia.
Plot summary
This novel tells the story of Oskar Schindler, self-made entrepreneur and bon viveur who finds himself saving Polish Jews from the Nazi death machine. Based on numerous eyewitness accounts, Keneally's story takes place within Hitler's attempts to make Europe judenfrei (free of Jews). Schindler is presented as a flawed hero – a drinker, a womaniser and, at first, a profiteer. After the war, he was commemorated as Righteous Among the Nations by the Yad Vashem Holocaust Museum in Jerusalem, but was never seen as a conventionally virtuous character. The story is not only Schindler's, it is the story of Kraków's Ghetto and the forced labour camp outside town, Płaszów, and of Amon Göth, Płaszów's commandant.
His wife Emilie Schindler later remarked in a German TV interview that Schindler did nothing remarkable before the war and nothing after it. "He was fortunate therefore that in the short fierce era between 1939 and 1945 he had met people who had summoned forth his deeper talents." After the war, his business ventures failed and he separated from his wife. He ended up living a sparse life in a small flat in Frankfurt. Eventually he arranged to live part of the year in Israel, supported by his Jewish friends, and part of the year in Frankfurt, where he was often hissed at in the streets as a traitor to his "race". After 29 unexceptional postwar years, he died in 1974. He was buried in Jerusalem, as he wished, with the help of his old friend Pfefferberg.
See also
Jurek Becker: Jacob the Liar (1969)
Louis Begley: Wartime Lies (1991)
References
External links
Thomas Keneally discusses Schindler's Ark on the BBC World Book Club
1982 Australian novels
Australian novels adapted into films
Booker Prize–winning works
Oskar Schindler
Biographical novels
Historical novels
Non-fiction novels
Novels about the Holocaust
Books about the Holocaust
Rescue of Jews during the Holocaust
Novels by Thomas Keneally
Novels set in Czechoslovakia
Novels set in Poland
Censored books
Hodder & Stoughton books | Schindler's Ark | [
"Biology"
] | 1,061 | [
"Rescue of Jews during the Holocaust",
"Behavior",
"Altruism"
] |
312,630 | https://en.wikipedia.org/wiki/WinNuke | In computer security, WinNuke is an example of a Nuke remote denial-of-service attack (DoS) that affected the Microsoft Windows 95, Microsoft Windows NT, Microsoft Windows 3.1x computer operating systems and Windows 7. The exploit sent a string of out-of-band data (OOB data) to the target computer on TCP port 139 (NetBIOS), causing it to lock up and display a Blue Screen of Death. This does not damage or change the data on the computer's hard disk, but any unsaved data would be lost.
Details
The so-called OOB simply means that the malicious TCP packet contained an Urgent pointer (URG). The "Urgent pointer" is a rarely used field in the TCP header, used to indicate that some of the data in the TCP stream should be processed quickly by the recipient. Affected operating systems did not handle the Urgent pointer field correctly.
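For illustration of the mechanism only: at the Berkeley sockets level, "out-of-band" data is sent with the MSG_OOB flag, which makes the kernel set the URG flag and urgent pointer in the outgoing TCP segment. The sketch below (the target address is a placeholder from the TEST-NET range) merely shows that API; the historical crash it triggered affected only unpatched Windows systems of the 1990s, and modern systems simply ignore such a packet.

```python
import socket

# Historical illustration of TCP "urgent" (out-of-band) data at the socket API level.
# Sending with MSG_OOB sets the URG flag and urgent pointer in the TCP segment,
# which is the field the 1997 WinNuke packets relied on.
HOST, PORT = "192.0.2.1", 139          # placeholder address, NetBIOS session port

with socket.create_connection((HOST, PORT), timeout=5) as s:
    s.send(b"x", socket.MSG_OOB)       # one byte of urgent data
```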
A person under the screen name "_eci" published C source code for the exploit on May 9, 1997. As the source code was widely used and distributed, Microsoft was forced to create security patches, which were released a few weeks later. For a time, numerous flavors of the exploit appeared under such names as fedup, gimp, killme, killwin, knewkem, liquidnuke, mnuke, netnuke, muerte, nuke, nukeattack, nuker102, pnewq, project1, simportnuke, sprite, sprite32, vconnect, vzmnuker, wingenocide, winnukeit, winnuker02, winnukev95, wnuke3269, wnuke4, and wnuke95.
A company called SemiSoft Solutions from New Zealand created a small program, called AntiNuke, that blocks WinNuke without having to install the official patch.
Years later, a second incarnation of WinNuke that uses another, similar exploit was found.
See also
Ping of death
References
Attacks against TCP
Denial-of-service attacks | WinNuke | [
"Technology"
] | 444 | [
"Denial-of-service attacks",
"Computer security exploits"
] |
312,648 | https://en.wikipedia.org/wiki/Mutual%20exclusivity | In logic and probability theory, two events (or propositions) are mutually exclusive or disjoint if they cannot both occur at the same time. A clear example is the set of outcomes of a single coin toss, which can result in either heads or tails, but not both.
In the coin-tossing example, both outcomes are, in theory, collectively exhaustive, which means that at least one of the outcomes must happen, so these two possibilities together exhaust all the possibilities. However, not all mutually exclusive events are collectively exhaustive. For example, the outcomes 1 and 4 of a single roll of a six-sided die are mutually exclusive (both cannot happen at the same time) but not collectively exhaustive (there are other possible outcomes: 2, 3, 5, 6).
Logic
In logic, two propositions P and Q are mutually exclusive if it is not logically possible for them to be true at the same time; that is, ¬(P ∧ Q) is a tautology. To say that more than two propositions P1, ..., Pn are mutually exclusive, depending on the context, means either 1. "¬(Pi ∧ Pj) is a tautology for every pair i ≠ j" (it is not logically possible for more than one proposition to be true) or 2. "¬(P1 ∧ ... ∧ Pn) is a tautology" (it is not logically possible for all propositions to be true at the same time). The term pairwise mutually exclusive always means the former.
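A brief sketch, using only the definitions above, that enumerates truth assignments for three propositions to show how the two readings differ: every assignment in which at most one proposition is true also fails to make all of them true, but not conversely.

```python
from itertools import product

def at_most_one_true(assignment):
    # Reading 1 (pairwise): no two propositions are true together.
    return sum(assignment) <= 1

def not_all_true(assignment):
    # Reading 2: the conjunction of all the propositions is false.
    return not all(assignment)

# Reading 1 implies reading 2 on every assignment, but the assignment
# (True, True, False) satisfies reading 2 while violating reading 1.
for assignment in product([False, True], repeat=3):
    assert not at_most_one_true(assignment) or not_all_true(assignment)
print(not_all_true((True, True, False)), at_most_one_true((True, True, False)))  # True False
```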
Probability
In probability theory, events E1, E2, ..., En are said to be mutually exclusive if the occurrence of any one of them implies the non-occurrence of the remaining n − 1 events. Therefore, two mutually exclusive events cannot both occur. Formally said, E1, E2, ..., En is a set of mutually exclusive events if and only if, given any i and j, if i ≠ j then Ei ∩ Ej = ∅. As a consequence, mutually exclusive events have the property: P(Ei ∩ Ej) = 0 for i ≠ j.
For example, in a standard 52-card deck with two colors it is impossible to draw a card that is both red and a club because clubs are always black. If just one card is drawn from the deck, either a red card (heart or diamond) or a black card (club or spade) will be drawn. When A and B are mutually exclusive, P(A ∪ B) = P(A) + P(B). To find the probability of drawing a red card or a club, for example, add together the probability of drawing a red card and the probability of drawing a club. In a standard 52-card deck, there are twenty-six red cards and thirteen clubs: 26/52 + 13/52 = 39/52 or 3/4.
One would have to draw at least two cards in order to draw both a red card and a club. The probability of doing so in two draws depends on whether the first card drawn was replaced before the second drawing since without replacement there is one fewer card after the first card was drawn. The probabilities of the individual events (red, and club) are multiplied rather than added. The probability of drawing a red and a club in two drawings without replacement is then (26/52 × 13/51) + (13/52 × 26/51), or 13/51. With replacement, the probability would be (26/52 × 13/52) + (13/52 × 26/52), or 13/52.
In probability theory, the word or allows for the possibility of both events happening. The probability of one or both events occurring is denoted P(A ∪ B) and in general, it equals P(A) + P(B) – P(A ∩ B). Therefore, in the case of drawing a red card or a king, drawing any of a red king, a red non-king, or a black king is considered a success. In a standard 52-card deck, there are twenty-six red cards and four kings, two of which are red, so the probability of drawing a red or a king is 26/52 + 4/52 – 2/52 = 28/52.
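A small sketch that enumerates a standard 52-card deck to check the single-draw calculations above: for the mutually exclusive events "red" and "club" the probabilities simply add, while for the overlapping events "red" and "king" the intersection must be subtracted. The code is illustrative only.

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))  # the 52 cards

def prob(event):
    # Probability of an event as the fraction of cards that satisfy it.
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

red = lambda card: card[1] in ("hearts", "diamonds")
club = lambda card: card[1] == "clubs"
king = lambda card: card[0] == "K"

# Mutually exclusive: P(red or club) = P(red) + P(club) = 3/4.
assert prob(lambda c: red(c) or club(c)) == prob(red) + prob(club) == Fraction(3, 4)

# Not mutually exclusive: P(red or king) = 26/52 + 4/52 - 2/52 = 28/52.
assert prob(lambda c: red(c) or king(c)) == Fraction(28, 52)
```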
Events are collectively exhaustive if all the possibilities for outcomes are exhausted by those possible events, so at least one of those outcomes must occur. The probability that at least one of the events will occur is equal to one. For example, there are theoretically only two possibilities for flipping a coin. Flipping a head and flipping a tail are collectively exhaustive events, and there is a probability of one of flipping either a head or a tail. Events can be both mutually exclusive and collectively exhaustive. In the case of flipping a coin, flipping a head and flipping a tail are also mutually exclusive events. Both outcomes cannot occur for a single trial (i.e., when a coin is flipped only once). The probability of flipping a head and the probability of flipping a tail can be added to yield a probability of 1: 1/2 + 1/2 =1.
Statistics
In statistics and regression analysis, an independent variable that can take on only two possible values is called a dummy variable. For example, it may take on the value 0 if an observation is of a white subject or 1 if the observation is of a black subject. The two possible categories associated with the two possible values are mutually exclusive, so that no observation falls into more than one category, and the categories are exhaustive, so that every observation falls into some category. Sometimes there are three or more possible categories, which are pairwise mutually exclusive and are collectively exhaustive — for example, under 18 years of age, 18 to 64 years of age, and age 65 or above. In this case a set of dummy variables is constructed, each dummy variable having two mutually exclusive and jointly exhaustive categories — in this example, one dummy variable (called D1) would equal 1 if age is less than 18, and would equal 0 otherwise; a second dummy variable (called D2) would equal 1 if age is in the range 18–64, and 0 otherwise. In this set-up, the dummy variable pairs (D1, D2) can have the values (1,0) (under 18), (0,1) (between 18 and 64), or (0,0) (65 or older) (but not (1,1), which would nonsensically imply that an observed subject is both under 18 and between 18 and 64). Then the dummy variables can be included as independent (explanatory) variables in a regression. The number of dummy variables is always one less than the number of categories: with the two categories black and white there is a single dummy variable to distinguish them, while with the three age categories two dummy variables are needed to distinguish them.
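A minimal sketch of the dummy-variable construction described above, in plain Python; the ages are made-up illustrative data, and the cut-offs follow the example in the text.

```python
ages = [12, 25, 70, 64, 17, 80]  # illustrative observations

def age_dummies(age):
    # D1 = 1 if under 18, D2 = 1 if 18-64; (0, 0) encodes 65 or older.
    d1 = 1 if age < 18 else 0
    d2 = 1 if 18 <= age <= 64 else 0
    return d1, d2

# Three mutually exclusive, collectively exhaustive categories need only
# two dummies; the combination (1, 1) can never occur.
for age in ages:
    print(age, age_dummies(age))
```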
Such qualitative data can also be used for dependent variables. For example, a researcher might want to predict whether someone gets arrested or not, using family income or race as explanatory variables. Here the variable to be explained is a dummy variable that equals 0 if the observed subject does not get arrested and equals 1 if the subject does get arrested. In such a situation, ordinary least squares (the basic regression technique) is widely seen as inadequate; instead probit regression or logistic regression is used. Further, sometimes there are three or more categories for the dependent variable — for example, no charges, charges, and death sentences. In this case, the multinomial probit or multinomial logit technique is used.
See also
Contrariety
Dichotomy
Disjoint sets
Double bind
Event structure
Oxymoron
Synchronicity
MECE principle (mutually exclusive and collectively exhaustive)
Notes
References
Philosophy of mathematics
Logic
Abstraction
Dichotomies | Mutual exclusivity | [
"Mathematics"
] | 1,506 | [
"nan"
] |
312,671 | https://en.wikipedia.org/wiki/Mechanosynthesis | Mechanosynthesis is a term for hypothetical chemical syntheses in which reaction outcomes are determined by the use of mechanical constraints to direct reactive molecules to specific molecular sites. There are presently no non-biological chemical syntheses which achieve this aim. Some atomic placement has been achieved with scanning tunnelling microscopes.
Introduction
In conventional chemical synthesis or chemosynthesis, reactive molecules encounter one another through random thermal motion in a liquid or vapor. In a hypothesized process of mechanosynthesis, reactive molecules would be attached to molecular mechanical systems, and their encounters would result from mechanical motions bringing them together in planned sequences, positions, and orientations. It is envisioned that mechanosynthesis would avoid unwanted reactions by keeping potential reactants apart, and would strongly favor desired reactions by holding reactants together in optimal orientations for many molecular vibration cycles. In biology, the ribosome provides an example of a programmable mechanosynthetic device.
A non-biological form of mechanochemistry has been performed at cryogenic temperatures using scanning tunneling microscopes. So far, such devices provide the closest approach to fabrication tools for molecular engineering. Broader exploitation of mechanosynthesis awaits more advanced technology for constructing molecular machine systems, with ribosome-like systems as an attractive early objective.
Much of the excitement regarding advanced mechanosynthesis regards its potential use in assembly of molecular-scale devices. Such techniques appear to have many applications in medicine, aviation, resource extraction, manufacturing and warfare.
Most theoretical explorations of advanced machines of this kind have focused on using carbon, because of the many strong bonds it can form, the many types of chemistry these bonds permit, and utility of these bonds in medical and mechanical applications. Carbon forms diamond, for example, which if cheaply available, would be an excellent material for many machines.
It has been suggested, notably by K. Eric Drexler, that mechanosynthesis will be fundamental to molecular manufacturing based on nanofactories capable of building macroscopic objects with atomic precision. The potential for these has been disputed, notably by Nobel Laureate Richard Smalley (who proposed and then critiqued an unworkable approach based on "Smalley fingers").
The Nanofactory Collaboration, founded by Robert Freitas and Ralph Merkle in 2000, is a focused ongoing effort involving 23 researchers from 10 organizations and 4 countries that is developing a practical research agenda specifically aimed at positionally controlled diamond mechanosynthesis and diamondoid nanofactory development.
In practice, getting exactly one molecule to a known place on the microscope's tip is possible, but has proven difficult to automate. Since practical products require at least several hundred million atoms, this technique has not yet proven practical in forming a real product.
The goal of one line of mechanoassembly research focuses on overcoming these problems by calibration, and selection of appropriate synthesis reactions. Some suggest attempting to develop a specialized, very small (roughly 1,000 nanometers on a side) machine tool that can build copies of itself using mechanochemical means, under the control of an external computer. In the literature, such a tool is called an assembler or molecular assembler. Once assemblers exist, geometric growth (directing copies to make copies) could reduce the cost of assemblers rapidly. Control by an external computer should then permit large groups of assemblers to construct large, useful projects to atomic precision. One such project would combine molecular-level conveyor belts with permanently mounted assemblers to produce a factory.
In part to resolve this and related questions about the dangers of industrial accidents, popular fears of runaway events equivalent to the Chernobyl and Bhopal disasters, and the more remote issues of ecophagy, grey goo and green goo (various potential disasters arising from runaway replicators, which could be built using mechanosynthesis), the UK Royal Society and UK Royal Academy of Engineering in 2003 commissioned a study, led by mechanical engineering professor Ann Dowling, to deal with these issues and their larger social and ecological implications. Some anticipated that the study would take a strong position on these problems and potentials, and suggest a development path towards a general theory of so-called mechanosynthesis. However, the Royal Society's nanotech report did not address molecular manufacturing at all, except to dismiss it along with grey goo.
Current technical proposals for nanofactories do not include self-replicating nanorobots, and recent ethical guidelines would prohibit development of unconstrained self-replication capabilities in nanomachines.
Diamond mechanosynthesis
There is a growing body of peer-reviewed theoretical work on synthesizing diamond by mechanically removing/adding hydrogen atoms and depositing carbon atoms (a process known as diamond mechanosynthesis or DMS).
For example, the 2006 paper in this continuing research effort by Freitas, Merkle and their collaborators reports that the most-studied mechanosynthesis tooltip motif (DCB6Ge) successfully places a C2 carbon dimer on a C(110) diamond surface at both 300 K (room temperature) and 80 K (liquid nitrogen temperature), and that the silicon variant (DCB6Si) also works at 80 K but not at 300 K. These tooltips are intended to be used only in carefully controlled environments (e.g., vacuum). Maximum acceptable limits for tooltip translational and rotational misplacement errors are reported in paper III—tooltips must be positioned with great accuracy to avoid bonding the dimer incorrectly. Over 100,000 CPU hours were invested in this study.
The DCB6Ge tooltip motif, initially described at a Foresight Conference in 2002, was the first complete tooltip ever proposed for diamond mechanosynthesis and remains the only tooltip motif that has been successfully simulated for its intended function on a full 200-atom diamond surface. Although an early paper gives a predicted placement speed of 1 dimer per second for this tooltip, this limit was imposed by the slow speed of recharging the tool using an inefficient recharging method and is not based on any inherent limitation in the speed of use of a charged tooltip. Additionally, no sensing means was proposed for discriminating among the three possible outcomes of an attempted dimer placement—deposition at the correct location, deposition at the wrong location, and failure to place the dimer at all—because the initial proposal was to position the tooltip by dead reckoning, with the proper reaction assured by designing appropriate chemical energetics and relative bond strengths for the tooltip-surface interaction.
More recent theoretical work analyzes a complete set of nine molecular tools made from hydrogen, carbon and germanium able to (a) synthesize all tools in the set (b) recharge all tools in the set from appropriate feedstock molecules and (c) synthesize a wide range of stiff hydrocarbons (diamond, graphite, fullerenes, and the like). All required reactions are analyzed using standard ab initio quantum chemistry methods.
Further research to consider alternate tips will require time-consuming computational chemistry and difficult laboratory work.
In the early 2000s, a typical experimental arrangement was to attach a molecule to the tip of an atomic force microscope, and then use the microscope's precise positioning abilities to push the molecule on the tip into another on a substrate. Since the angles and distances can be precisely controlled, and the reaction occurs in a vacuum, novel chemical compounds and arrangements are possible.
History
The technique of moving single atoms mechanically was proposed by Eric Drexler in his 1986 book The Engines of Creation.
In 1989, researchers at IBM's Almaden Research Center successfully spelled the letters "IBM" in xenon atoms on a cryogenic nickel surface, broadly validating the approach. Since then, a number of research projects have sought to use similar techniques to store computer data in a compact fashion. More recently the technique has been used to explore novel physical chemistries, sometimes using lasers to excite the tips to particular energy states, or to examine the quantum chemistry of particular chemical bonds.
In 1999, an experimentally proved methodology called feature-oriented scanning (FOS) was suggested. The feature-oriented scanning methodology allows precisely controlling the position of the probe of a scanning probe microscope (SPM) on an atomic surface at room temperature. The suggested methodology supports fully automatic control of single- and multiprobe instruments in solving tasks of mechanosynthesis and bottom-up nanofabrication.
In 2003, Oyabu et al. reported the first instance of purely mechanical-based covalent bond-making and bond-breaking, i.e., the first experimental demonstration of true mechanosynthesis—albeit with silicon rather than carbon atoms.
In 2005, the first patent application on diamond mechanosynthesis was filed.
In 2008, a $3.1 million grant was proposed to fund the development of a proof-of-principle mechanosynthesis system.
In 2013, IBM made A Boy and His Atom, a short animated film using atoms.
See also molecular nanotechnology, a more general explanation of the possible products, and discussion of other assembly techniques.
References
External links
Bibliography updated here by Robert Freitas
The Foresight Institute remains active.
2004 proposed practical method for enabling diamond mechanosynthesis, by Robert Freitas
Nanotechnology
Chemical synthesis | Mechanosynthesis | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,933 | [
"Nanotechnology",
"nan",
"Materials science",
"Chemical synthesis"
] |
312,678 | https://en.wikipedia.org/wiki/Footpath | A footpath (also pedestrian way, walking trail, nature trail) is a type of thoroughfare that is intended for use only by pedestrians and not other forms of traffic such as motorized vehicles, bicycles and horses. They can be found in a wide variety of places, from the centre of cities, to farmland, to mountain ridges. Urban footpaths are usually paved, may have steps, and can be called alleys, lanes, steps, etc.
National parks, nature preserves, conservation areas and other protected wilderness areas may have footpaths (trails) that are restricted to pedestrians. The term footpath can also describe a pavement/sidewalk in some English-speaking countries (such as Australia, New Zealand, and Ireland).
A footpath can also take the form of a footbridge, linking two places across a river.
Origins and history
Public footpaths are rights of way originally created by people walking across the land to work, market, the next village, church, and school. This includes mass paths and corpse roads. Some footpaths were also created by those undertaking a pilgrimage. Examples of the latter are the Pilgrim's Way in England and Pilgrim's Route (St. Olav's Way or the Old Kings' Road) in Norway. Some landowners allow access over their land without dedicating a right of way. These permissive paths are often indistinguishable from normal paths, but they are usually subject to restrictions. Such paths are often closed at least once a year, so that a permanent right of way cannot be established in law.
A mass path is a pedestrian track or road connecting destinations frequently used by rural communities, most usually the destination of Sunday Mass. They were most common during the centuries that preceded motorised transportation in Western Europe, and in particular the British Isles and the Netherlands (where such a path is called "kerkenpad", lit. church path). Mass paths typically included stretches crossing fields of neighboring farmers and were likely to contain stiles, when crossing fences or other boundaries, or plank footbridges to cross ditches. Some mass paths are still used today in the Republic of Ireland, but are usually subject to Ireland's complicated rights of way law.
Corpse roads provided a practical means for transporting corpses, often from remote communities, to cemeteries that had burial rights, such as parish churches and chapels of ease. In Great Britain, such routes can also be known by a number of other names: bier road, burial road, coffin road, coffin line, lyke or lych way, funeral road, procession way, corpse way, etc.
Nowadays footpaths are mainly used for recreation and have been frequently linked together, along with bridle paths and newly created footpaths, to create long-distance trails. Also, organizations have been formed in various countries to protect the right to use public footpaths, including the Ramblers Association and the Open Spaces Society in England. Footpaths are now also found in botanic gardens, arboretums, regional parks, conservation areas, wildlife gardens, and open-air museums. There are also educational trails, themed walks, sculpture trails and historic interpretive trails.
Rights of way
In England and Wales, public footpaths are rights of way on which pedestrians have a legally protected right to travel. Other public rights of way in England and Wales, such as bridleways, byways, towpaths, and green lanes are also used by pedestrians. In Scotland there is no legal distinction between a footpath and a bridleway and it is generally accepted that cyclists and horse riders may follow any right of way with a suitable surface. The law is different in both Northern Ireland and the Republic of Ireland and there are far fewer rights of way in Ireland as a whole (see Keep Ireland Open).
Definitive path maps
Footpaths and other rights of way in England and Wales are shown on definitive maps. A definitive map is a record of public rights of way in England and Wales. In law it is the definitive record of where a right of way is located. The highway authority (normally the county council, or unitary authority in areas with a one-tier system) has a statutory duty to maintain a definitive map, though in national parks the national park authority usually maintains the map. The Inner London boroughs are exempt from the statutory duty though they have the powers to maintain a map: currently none does so.
Currently, the number of footpaths in the UK totals 427,301 (around 81% of all rights of way) with a net combined route length of 105,125 miles.
In Scotland different legislation applies and there is no legally recognised record of rights of way. However, there is a National Catalogue of Rights of Way (CROW), compiled by the Scottish Rights of Way and Access Society (Scotways), in partnership with Scottish Natural Heritage, and the help of local authorities.
Open Spaces Society
The Open Spaces Society is a charitable British organisation that works to protect public rights of way and open spaces in the United Kingdom, such as common land and village greens. It is Britain's oldest national conservation body. The society was founded as the Commons Preservation Society, merged with the National Footpaths Society in 1899, and adopted its present name.
Much of the Open Spaces Society's work is concerned with the preservation and creation of public paths. Before the introduction of definitive maps of public paths in the early 1950s, the public did not know where paths were, and the Open Spaces Society helped the successful campaign for paths to be shown on Ordnance Survey maps. It advises the Department for Environment, Food and Rural Affairs and National Assembly for Wales on applications for works on common land. Local authorities are legally required to consult the society whenever there is a proposal to alter the route of a public right of way.
The Ramblers are another British organisation concerned with the protection of footpaths.
Urban footpaths
There are a variety of footpaths in urban settings, including paths along streams and rivers, through parks and across commons. Another type is the alley, normally providing access to the rear of properties or connecting built-up roads not easily reached by vehicles. Towpaths are another kind of urban footpath, but they are often shared with cyclists. A typical footpath in a park is found along the seawall in Stanley Park, Vancouver, British Columbia, Canada. This is a segregated path, with one lane for skaters and cyclists and the other for pedestrians.
In the US and Canada, where urban sprawl has begun to strike even the most rural communities, developers and local leaders are currently striving to make their communities more conducive to non-motorized transportation through the use of less traditional paths. The Robert Wood Johnson Foundation has established the Active Living by Design program to improve the livability of communities in part through developing trails, The Upper Valley Trails Alliance has done similar work on traditional trails, while the Somerville Community Path and related paths, are examples of urban initiatives. In St. John's, Newfoundland, Canada The Grand Concourse, is an integrated walkway system that has over of footpaths which link every major park, river, pond, and green space in six municipalities.
In London, England, there are several long-distance walking routes which combine footpaths and roads to link green spaces. These include the Capital Ring, London Outer Orbital Path and the Jubilee Walkway, the use of which have been endorsed by Transport for London.
Alley and steps
An alley is a narrow, usually paved, pedestrian path, often between the walls of buildings in towns and cities. This type is usually short and straight, and on steep ground can consist partially or entirely of steps. In older cities and towns in Europe, alleys are often what is left of a medieval street network, or a right of way or ancient footpath. Similar paths also exist in some older North American towns and cities. In some older urban development in North America lanes at the rear of houses, to allow for deliveries and garbage collection, are called alleys. Alleys may be paved, or unpaved, and a blind alley is a cul-de-sac. Some alleys are roofed because they are within buildings, such as the traboules of Lyon, or when they are a pedestrian passage through railway embankments in Britain. The latter follow the line of rights-of way that existed before the railway was built.
Because of topography, steps (stairs) are the predominant form of alley in hilly cities and towns. This includes Pittsburgh (see Steps of Pittsburgh), Cincinnati (see Steps of Cincinnati), Portland, Oregon, Seattle, and San Francisco in the United States, as well as Hong Kong, and Rome.
Long-distance paths
Footpaths (and other rights of way) have been combined, and new paths created, so as to produce long-distance walking routes in a number of countries. These can be rural in nature, such as the Essex Way, in southern England, which crosses farmland, or urban as with various routes in London, England, or along a coastline like the South West Coast Path in the West of England, or in the high mountains, like the Pacific Crest Trail in the US, which reaches at Forester Pass in the Sierra Nevada.
Maintenance
Many footpaths require some maintenance. Most rural paths have an earth or grass surface with stiles, and or gates, including kissing gates. A few will have stepping stones, fords, or bridges.
Urban footpaths may be constructed of masonry, brick, concrete, asphalt, cut stone or wood boardwalk. Crushed rock, decomposed granite, fine wood chips are also used. The construction materials can vary over the length of the footpath and may start with a well constructed hard surface in an urban area, and end with an inexpensive soft or loose surface in the countryside.
Stairs or steps are sometimes found in urban alleys, or cliff paths to beaches.
Issues
The main issues in urban areas include maintenance, litter, crime, and lighting after dark. In the countryside there are issues relating to conflicts between walkers and livestock, and these occasionally result in people being injured or even killed. Dogs often contribute to such conflicts – see in England and Wales The Dogs (Protection of Livestock) Act 1953. Also footpaths in remote locations can be difficult to maintain and a route along a country path can be impeded by ploughing, crops, overgrown vegetation, illegal barriers (including barbed wire), damaged stiles, etc.
Confrontation with landowners in the UK
There have been numerous problems over the years in England and Wales with landowners.
One notable example was with the millionaire property tycoon Nicholas Van Hoogstraten who had a long-standing dislike of and dispute with ramblers, describing them as "scum of the earth". In 1999 Hoogstraten erected a large fence across a footpath on his country estate in East Sussex. Local ramblers staged a protest against the erection of the fence outside the boundary of Van Hoogstraten's estate. On 10 February 2003 and after a 13-year battle and numerous legal proceedings, the path was finally re-opened.
Isle of Man
Another conflict involved Jeremy Clarkson, a TV presenter and Top Gear host who lives on the Isle of Man. He became frustrated at the lack of privacy at his home when ramblers deviated from a pathway to take photographs of his dwelling. Clarkson's property bordered a small 250-metre strip of land that had no definitive status as a public right of way but was used by walkers regardless. Clarkson aimed to close access to this small strip of his land, thereby forcing ramblers to take a small diversion to stick to the official public right of way and therefore protecting his claimed right to privacy on his own property. In May 2010 the former transport minister, Hon. David Anderson MHK, accepted the conclusions of a public inquiry that all except five of the paths claimed at the inquiry as public rights of way have been dedicated as public rights of way and should be added to the definitive map.
See also
Ancient trackway
Desire path
Drovers road
Footpaths of Gibraltar
Ginnel
Hiking
Pedestrian village
Pedestrian zone
Rail trail
Sunken road
Walkability
References
External links
Footpath Map — a map of footpaths in the UK
Garden features
Hiking
Trails
Urban planning
Walking | Footpath | [
"Engineering"
] | 2,511 | [
"Urban planning",
"Architecture"
] |
312,757 | https://en.wikipedia.org/wiki/Tron | Tron (stylized as TRON) is a 1982 American science fiction action adventure film written and directed by Steven Lisberger from a story by Lisberger and Bonnie MacBird. The film stars Jeff Bridges as Kevin Flynn, a computer programmer and video game developer who is transported inside the software world of a mainframe computer where he interacts with programs in his attempt to escape. It also stars Bruce Boxleitner, David Warner, Cindy Morgan, and Barnard Hughes. Tron, along with The Last Starfighter, was one of cinema's earliest films to use extensive computer-generated imagery (CGI).
The inspiration for Tron dates back to 1976, when Lisberger became intrigued with video games after seeing Pong. He and producer Donald Kushner set up an animation studio to develop Tron with the intention of making it an animated film. To promote the studio itself, Lisberger and his team created a 30-second animation featuring the first appearance of the title character. Eventually, Lisberger decided to include live-action elements with both backlit and computer animation for the actual feature-length film. Various studios had rejected the storyboards for the film before Walt Disney Productions agreed to finance and distribute Tron. There, backlit animation was finally combined with the 2D computer animation and the live action.
Tron was released on July 9, 1982. The film was a moderate success at the box office and received positive reviews from critics, who praised its groundbreaking visuals and acting but criticized its storyline as incoherent. Tron received nominations for Best Costume Design and Best Sound at the 55th Academy Awards. It was, however, disqualified from the Best Visual Effects category because, at the time, the Academy felt that using computer animation was "cheating". Tron spawned multiple video games (including an arcade tie-in released shortly after the film) and, as it became a cult film, a wider multimedia franchise including comic books. A sequel titled Tron: Legacy, directed by Joseph Kosinski, was released in 2010, with Bridges and Boxleitner reprising their roles and Lisberger acting as producer. A commercial success, it was followed by the Disney XD animated series Tron: Uprising in 2012, set between the two films. A third installment, Tron: Ares, is scheduled to be released on October 10, 2025.
Plot
Kevin Flynn is a leading software engineer, formerly employed by large technology corporation ENCOM. He now runs a video game arcade, and attempts to hack into ENCOM's system with a program called CLU. However, ENCOM's Master Control Program (MCP) halts his progress and CLU is deleted. Within ENCOM, programmer Alan Bradley and his girlfriend, engineer Lora Baines, discover that the MCP has closed off their access to projects. When Alan confronts the senior executive vice president, Ed Dillinger, he asserts the security measures are an effort to stop outside hacking attempts. However, when Dillinger privately questions the MCP through his computerized desk, he realizes the MCP has expanded into a powerful virtual intelligence and has been illegally appropriating personal, business, and government programs to increase its own capabilities. As Dillinger rose to the top of ENCOM by presenting Flynn's games as his own, the MCP blackmails Dillinger by threatening to expose his plagiarism if he does not comply with its directives.
Lora deduces that Flynn is the hacker, and she and Alan go to his arcade to warn him. Flynn reveals that he has been trying to locate evidence proving Dillinger's plagiarism. Together, the three form a plan to break into ENCOM and unlock Alan's "Tron" program, a self-governing security measure designed to protect the system and counter the functions of the MCP. Once inside ENCOM, the three split up, and Flynn comes into direct conflict with the MCP through a laboratory terminal. Before Flynn can get the information he needs, the MCP uses an experimental laser to digitize and upload him into the ENCOM gaming grid. There, computer programs are living entities appearing in the likeness of the human "Users" (programmers) who created them. The space is ruled by the MCP and its second-in-command, Sark, who coerce programs to renounce their belief in the Users and force those who resist to compete in deadly games.
Flynn is put into the games and plays well; between matches, he befriends two other captured programs, Ram and Tron. The three escape into the system during a round of Light Cycle (an arcade game Flynn created and is skilled at), but Flynn and Ram become separated from Tron by an MCP pursuit party. While attempting to help a badly injured Ram, Flynn learns that he can manipulate portions of the system by accessing his programmer knowledge. Just before Ram "derezzes" (dies), he recognizes Flynn as a User, and encourages him to find Tron and free the system. Using his newfound ability, Flynn rebuilds a broken vehicle and disguises himself as one of Sark's soldiers.
Tron enlists help from Yori, a sympathetic program, and at an I/O tower receives information from Alan necessary to destroy the MCP. Flynn rejoins them, and the three board a hijacked solar sailer to reach the MCP's core. However, Sark's command ship destroys the sailer, capturing Flynn and Yori and presumably killing Tron. Sark leaves the command ship and orders its deresolution, but Flynn keeps it intact by manipulating the system again.
Sark reaches the MCP's core on a shuttle carrying captured programs deemed powerful or useful. While the MCP attempts to absorb these programs, Tron, who is still alive, confronts Sark and critically injures him, prompting the MCP to give Sark all its functions. Realizing that his ability to manipulate the system might give Tron an opening, Flynn leaps into the beam of the MCP, distracting it. Seeing a break in the MCP's shield, Tron attacks through the gap and destroys the MCP and Sark, ending the MCP's control over the system and allowing the captured programs to communicate with users again.
Flynn reappears in the real world, rematerialized at the terminal. Tron's victory in the system has released all lockouts on computer access, and a nearby printer proves that Dillinger had plagiarized Flynn's creations. The next morning, Dillinger enters his office to find the MCP deactivated and the proof of his theft publicized. Flynn is subsequently promoted to CEO of ENCOM and is happily greeted by Alan and Lora as their new boss.
Cast
Jeff Bridges as Kevin Flynn, a former ENCOM programmer and video game developer who runs an arcade following his termination from the company. He is beamed into the mainframe via a digitizing laser by the Master Control Program.
Bridges also portrays Clu (Codified Likeness Utility), a hacking program developed by Flynn to find evidence of Dillinger's theft in the mainframe.
Bruce Boxleitner as Alan Bradley, Flynn's work partner and fellow ENCOM programmer.
Boxleitner also portrays Tron, a security program developed by Alan to self-monitor communications between the MCP and the real world.
David Warner as Ed Dillinger, the senior executive vice president of ENCOM. He was once a coworker of Flynn who used the Master Control Program to steal the latter's work and pass it off as his own, earning himself a series of undeserved promotions.
Warner also portrays Sark, a command program developed by Dillinger to serve as the MCP's second-in-command.
Warner additionally provided the uncredited voice of the Master Control Program (MCP), a rogue artificial intelligence operating system that originated as a chess program created by Dr. Walter Gibbs but annexed by Dillinger for his own use. The MCP monitors and controls ENCOM's mainframe.
Cindy Morgan as Dr. Lora Baines, Alan's coworker and girlfriend. She and Gibbs collaborate on ENCOM's digitization experiment.
Morgan also portrays Yori, an input/output program developed by Lora and an ally of Tron.
Barnard Hughes as Dr. Walter Gibbs, a co-founder of ENCOM who runs the company's science division. He creates the SHV 20905 digitizing laser with Lora's assistance.
Hughes also plays Dumont, a guardian program developed by Gibbs to protect input/output junctions in the mainframe.
Dan Shor as Ram, an actuarial program who is a close ally of Tron and Flynn.
Shor also briefly appears as an ENCOM programmer credited as "Popcorn Co-Worker".
Peter Jurasik as Crom, a compound interest program matched against Flynn on the Game Grid.
Tony Stephano as Peter, Dillinger's assistant. Stephano additionally played Sark's Lieutenant.
Production
Origins
The inspiration for Tron occurred in 1976 when Steven Lisberger, then an animator of drawings with his own studio, looked at a sample reel from a computer firm called MAGI and saw Pong for the first time. He was immediately fascinated by video games and wanted to do a film incorporating them. According to Lisberger, "I realized that there were these techniques that would be very suitable for bringing video games and computer visuals to the screen. And that was the moment that the whole concept flashed across my mind". The film's concept of entering a parallel game world was also inspired by the classic tale Alice in Wonderland.
Lisberger had already created an early version of the character 'Tron' for a 30 second long animation which was used to promote both Lisberger Studios and a series of various rock radio stations. This backlit cel animation depicted Tron as a character who glowed yellow; the same shade that Lisberger had originally intended for all the heroic characters developed for the feature-length Tron. This was later changed to blue for the finished film (see Pre-production below). The prototype Tron was bearded and resembled the Cylon Centurions from the 1978 TV series Battlestar Galactica. Also, Tron was armed with two "exploding discs", as Lisberger described them on the 2-Disc DVD edition (see Rinzler).
Lisberger elaborates: "Everybody was doing backlit animation in the 70s, you know. It was that disco look. And we thought, what if we had this character that was a neon line, and that was our Tron warrior – Tron for electronic. And what happened was, I saw Pong, and I said, well, that's the arena for him. And at the same time I was interested in the early phases of computer generated animation, which I got into at MIT in Boston, and when I got into that I met a bunch of programmers who were into all that. And they really inspired me, by how much they believed in this new realm."
He was frustrated by the clique-like nature of computers and video games and wanted to create a film that would open this world up to everyone. Lisberger and his business partner Donald Kushner moved to the West Coast in 1977 and set up an animation studio to develop Tron. They borrowed against the anticipated profits of their 90-minute animated television special Animalympics to develop storyboards for Tron with the notion of making an animated film. But after Variety mentioned the project briefly during its early phase, it caught the attention of computer scientist Alan Kay. He contacted Lisberger and convinced him to use him as an adviser on the movie, then persuaded him to use real CGI instead of just hand-animation.
Bonnie MacBird wrote the first drafts of Tron with extensive input from Lisberger, basing the original personality of Alan on Alan Kay. As a result of working together, Kay and MacBird became close and later married. She also created Tron as a character (rather than a visual demo) and Flynn. Originally, MacBird envisioned Flynn more comedically, suggesting the then 30-year-old Robin Williams for the role. Besides many story changes after the script went to Disney, including giving it "a more serious tone with quasi religious overtones", and removing most of the scientific elements, none of her dialogue remains in the final film, and there was a "rather bitter credits dispute."
The film was eventually conceived as an animated film bracketed with live-action sequences. The rest involved a combination of computer-generated visuals and back-lit animation. Lisberger planned to finance the movie independently by approaching several computer companies but had little success. However, one company, Information International Inc., was receptive. He met with Richard Taylor, a representative, and they began talking about using live-action photography with back-lit animation in such a way that it could be integrated with computer graphics. At this point, there was a script and the film was entirely storyboarded, with some computer animation tests completed. He had spent approximately $300,000 developing Tron and had also secured $4–5 million in private backing before reaching a standstill. Lisberger and Kushner took their storyboards and samples of computer-generated films to Warner Bros., Metro-Goldwyn-Mayer, and Columbia Pictures – all of which turned them down.
In 1980, they decided to take the idea to Walt Disney Productions, which was interested in producing more daring productions at the time. Tom Wilhite, Disney's vice president for creative development, watched Lisberger's test footage and convinced Ron Miller to give the movie a chance. However, Disney executives were uncertain about giving $10–12 million to a first-time producer and director using techniques which, in most cases, had never been attempted. The studio agreed to finance a test reel which involved a flying disc champion throwing a rough prototype of the discs used in the film. It was a chance to mix live-action footage with back-lit animation and computer-generated visuals. It impressed the executives at Disney and they agreed to back the film. MacBird and Lisberger's script was subsequently re-written and re-storyboarded with the studio's input. At the time, Disney rarely hired outsiders to make films for them, and Kushner found that he and his group were given a chilly reception because they "tackled the nerve center – the animation department. They saw us as the germ from outside. We tried to enlist several Disney animators, but none came. Disney is a closed group." As a result, they hired Wang Film Productions for the animation.
Production
Because of the many special effects, Disney decided in 1981 to film Tron completely in 65-mm Super Panavision (except for the computer-generated layers, which were shot in VistaVision; and both anamorphic 35mm and Super 35, which were used for some scenes in the "real" world, and subsequently "blown up" to 65 mm). Three designers were brought in to create the look of the computer world. French comic book artist Jean Giraud (also known as Moebius) was the main set and costume designer for the film. Most of the vehicle designs (including Sark's aircraft carrier, the light cycles, the tank, and the solar sailer) were created by industrial designer Syd Mead. Peter Lloyd, a high-tech commercial artist, designed the environments. Nevertheless, these jobs often overlapped, leaving Giraud working on the solar sailer and Mead designing terrain, sets and the film's logo. The original 'Program' character design was inspired by Lisberger Studios' logo of a glowing bodybuilder hurling two discs.
To create the computer animation sequences of Tron, Disney turned to the four leading computer graphics firms of the day: Information International, Inc. of Culver City, California, who owned the Super Foonly F-1 (the fastest PDP-10 ever made and the only one of its kind); MAGI of Elmsford, New York; Robert Abel and Associates of California; and Digital Effects of New York City. Bill Kovacs worked on the film while working for Robert Abel before going on to found Wavefront Technologies. The work was not a collaboration, resulting in very different styles used by the firms.
Tron was one of the first films to make extensive use of any form of computer animation, and it is celebrated as a milestone in the industry, although only fifteen to twenty minutes of such animation were used (mostly scenes that show digital "terrain" or patterns, or include vehicles such as light-cycles, tanks and ships). Because the technology to combine computer animation and live action did not exist at the time, these sequences were interspersed with the filmed characters. One of the computers used had only 2 MB of memory and no more than 330 MB of storage. This put a limit on detail of background; and at a certain distance, they had a procedure of mixing in black to fade things out, a process called "depth cueing". The film's Computer Effects Supervisor Richard Taylor told them "When in doubt, black it out!", which became their motto. Originally the film was meant to use white backgrounds like in THX 1138 inside the Grid, but it would require such huge amounts of lights that it was decided to use black backgrounds instead.
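A rough sketch of the "depth cueing" idea described above (mixing a colour toward black as distance increases), written in Python purely to illustrate the concept; it is not taken from the film's actual software, and the colour and distances are made up.

```python
def depth_cue(color, distance, max_distance):
    # Linearly blend an RGB colour toward black with distance:
    # "When in doubt, black it out!"
    fade = min(max(distance / max_distance, 0.0), 1.0)
    return tuple(round(channel * (1.0 - fade)) for channel in color)

# A bright cyan grid line fades to black as it recedes into the distance.
for distance in (0, 50, 100, 150, 200):
    print(distance, depth_cue((0, 255, 255), distance, 200))
```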
The computers used at the time could not perform animation, so the frames had to be produced one by one. In some of the more complex sequences, like the Solar Sailer moving through metal canyons, each frame could take up to six hours to produce. There was no way to digitally print them on film, either; rather, a motion picture camera was placed in front of a computer screen to capture each individual frame.
Most of the scenes, backgrounds, and visual effects in the film were created using more traditional techniques and a unique process known as "backlit animation". In this process, live-action scenes inside the computer world were filmed in black-and-white on an entirely black set, placed in an enlarger for blow-ups and transferred to large format Kodalith high-contrast film. These negatives were then used to make Kodalith sheets with a reverse (positive) image. Clear cels were laid over each sheet and all portions of the figure except the areas that were exposed for the later camera passes were manually blacked out. Next the Kodalith sheets and cel overlays were placed over a light box while a VistaVision camera mounted above it made separate passes and different color filters. A typical shot normally required 12 passes, but some sequences, like the interior of the electronic tank, could need as many as 50 passes. About 300 matte paintings were made for the film, each photographed onto a large piece of Ektachrome film before colors were added by gelatin filters in a similar procedure as in the Kodaliths. The mattes, rotoscopic and CGI were then combined and composed together to give them a "technological" appearance. With multiple layers of high-contrast, large format positives and negatives, this process required truckloads of sheet film and a workload even greater than that of a conventional cel-animated feature. The Kodalith was specially produced as large sheets by Kodak for the film and came in numbered boxes so that each batch of the film could be used in order of manufacture for a consistent image. However, this was not understood by the filmmakers and, as a result, glowing outlines and circuit traces occasionally flicker as the film speed varied between batches. After the reason was discovered, this was no longer a problem as the batches were used in order and "zinger" sounds were used during the flickering parts to represent the computer world malfunctioning as Lisberger described it. Lisberger later had these flickers and sounds digitally corrected for the 2011 restored Blu-ray release as they were not included in his original vision of the film. Due to its difficulty and cost, this process of back-lit animation was not repeated for another feature film.
Sound design and creation for the film was assigned to Frank Serafine, who was responsible for the sound design on Star Trek: The Motion Picture in 1979. "There were over 750 units [separate tape segments] in the picture," said Serafine. He created all the sound effects in the movie exclusively with synthesizers and similar electronic devices.
At one point in the film, a small entity called "Bit" advises Flynn with only the words "yes" and "no" created by a Votrax speech synthesizer.
BYTE wrote: "Although this film is very much the personal expression of Steven Lisberger's vision, nevertheless [it] has certainly been a group effort". More than 569 people were involved in the post-production work, including 200 inkers and hand-painters, 85 of them from Taiwan's Cuckoo's Nest Studio. Unusual for an English-language production, in the end credits the Taiwanese personnel were listed with their names written in Chinese characters.
This film features parts of the Lawrence Livermore National Laboratory; the multi-story ENCOM laser bay was the target area for the SHIVA solid-state multi-beamed laser. Also, the stairway that Alan, Lora, and Flynn use to reach Alan's office is the stairway in Building 451 near the entrance to the main machine room. The cubicle scenes were shot in another room of the lab. At the time, Tron was the only film to have scenes filmed inside this lab.
The original script called for "good" programs to be colored yellow and "evil" programs (those loyal to Sark and the MCP) to be colored blue. Partway into production, this coloring scheme was changed to blue for good and red for evil, but some scenes were produced using the original coloring scheme: Clu, who drives a tank, has yellow circuit lines, and all of Sark's tank commanders are blue (but appear green in some presentations). Also, the light-cycle sequence shows the heroes driving yellow (Flynn), orange (Tron), and red (Ram) cycles, while Sark's troops drive blue cycles; similarly, Clu's tank is red, while tanks driven by crews loyal to Sark are blue.
Because of all the personal information about citizens which exist inside computer networks, such as social security number and driver's license, the idea was that each real world person has a digital counterpart inside the Grid based on information about them, which is why it was decided to use some of the same actors in both worlds.
Budgeting the production was difficult by reason of breaking new ground in response to additional challenges, including an impending Directors Guild of America strike and a fixed release date. Disney predicted at least $400 million in domestic sales of merchandise, including an arcade game by Bally Midway and three Mattel Intellivision home video games.
The producers also added Easter eggs: during the scene where Tron and Ram escape from the Light Cycle arena into the system, Pac-Man can be seen behind Sark (with the corresponding sounds from the Pac-Man arcade game being heard in the background), while a "Hidden Mickey" outline (located at time 01:12:29 on the re-release Blu-ray) can be seen below the solar sailer during the protagonists' journey. The film set also included the arcade games Space Invaders (1978), Asteroids (1979) and Pac-Man (1980).
Tron was originally meant to be released during the Christmas season of 1982, but when chairman of the Disney board Card Walker found out the release date of Don Bluth's film The Secret of NIMH was in early July, he rushed it into a summer release to be able to compete with Bluth, and it ended up competing with films like E.T. the Extra-Terrestrial, Star Trek II: The Wrath of Khan, Blade Runner and Poltergeist.
Music
The soundtrack for Tron was written by pioneer electronic musician Wendy Carlos, who is best known for her album Switched-On Bach and for the soundtracks to many films, including the Stanley Kubrick-directed films A Clockwork Orange and The Shining. The music, which was the first collaboration between Carlos and her partner Annemarie Franklin, featured a mix of an analog Moog synthesizer and Crumar's GDS digital synthesizer (complex additive and phase modulation synthesis), along with non-electronic pieces performed by the London Philharmonic Orchestra (hired at the insistence of Disney, which was concerned that Carlos might not be able to complete her score on time). Two additional musical tracks ("1990's Theme" and "Only Solutions") were provided by the American band Journey after British band Supertramp pulled out of the project. An album featuring dialogue, music and sound effects from the film was also released on LP by Disneyland Records in 1982.
Reception and legacy
Box office
Tron was released on July 9, 1982, in 1,091 theaters in the United States and Canada, grossing US$4 million on its opening weekend. It went on to gross $33 million in the United States and Canada and $17 million overseas, for a worldwide gross of approximately $50 million, making it Disney's highest-grossing live-action film for five years.
In addition, the film had $70 million in wholesale merchandise sales.
Despite the gross and merchandise sales, it was seen as a financial disappointment, and the studio wrote off some of its $17 million budget.
Critical response
The film was well received by critics. Roger Ebert of the Chicago Sun-Times gave the film four out of four stars and described it as "a dazzling movie from Disney in which computers have been used to make themselves romantic and glamorous. Here's a technological sound-and-light show that is sensational and brainy, stylish and fun". However, near the end of his review, he noted (in a positive tone), "This is an almost wholly technological movie. Although it's populated by actors who are engaging (Bridges, Cindy Morgan) or sinister (Warner), it's not really a movie about human nature. Like Star Wars or The Empire Strikes Back but much more so, this movie is a machine to dazzle and delight us". Ebert closed his first annual Overlooked Film Festival with a showing of Tron. Gene Siskel of the Chicago Tribune also awarded four out of four stars, calling it "a trip, and a terrifically entertaining one at that...It's a dazzler that opens up our minds to our new tools, all in a traditional film narrative." Each gave the film two thumbs up. Tron was also featured in Siskel and Ebert's video pick of the week in 1993.
InfoWorld's Deborah Wise was impressed, writing that "it's hard to believe the characters acted out the scenes on a darkened soundstage... We see characters throwing illuminated Frisbees, driving 'lightcycles' on a video-game grid, playing a dangerous version of jai alai and zapping numerous fluorescent tanks in arcade-game-type mazes. It's exciting, it's fun, and it's just what video-game fans and anyone with a spirit of adventure will love—despite plot weaknesses."
On the other hand, Variety disliked the film and said in its review, "Tron is loaded with visual delights but falls way short of the mark in story and viewer involvement. Screenwriter-director Steven Lisberger has adequately marshalled a huge force of technicians to deliver the dazzle, but even kids (and specifically computer game geeks) will have a difficult time getting hooked on the situations". In her review for The New York Times, Janet Maslin criticized the film's visual effects: "They're loud, bright and empty, and they're all this movie has to offer". The Washington Post's Gary Arnold wrote, "Fascinating as they are as discrete sequences, the computer-animated episodes don't build dramatically. They remain a miscellaneous form of abstract spectacle". In his review for The Globe and Mail, Jay Scott wrote, "It's got momentum and it's got marvels, but it's without heart; it's a visionary technological achievement without vision".
Colin Greenland reviewed the home video release of Tron for Imagine magazine, and stated that "three plucky young programmers descend into the micro-world to battle the Master Control Program with a sacred frisbee. Loses much of its excitement on the little screen."
On review aggregation website Rotten Tomatoes, the film holds a 73% rating based on the reviews of 71 critics, with an average rating of 6.4/10. The website's consensus states: "Though perhaps not as strong dramatically as it is technologically, TRON is an original and visually stunning piece of science fiction that represents a landmark work in the history of computer animation." Metacritic gave the film a score of 58 based on 13 reviews, indicating "mixed or average reviews".
In the year it was released, the Academy of Motion Picture Arts and Sciences refused to nominate Tron for a special-effects Academy Award because, as director Steven Lisberger puts it, "The Academy thought we cheated by using computers". The film did, however, earn Oscar nominations in the categories of Best Costume Design (Elois Jenssen and Rosanna Norton) and Best Sound (Michael Minkler, Bob Minkler, Lee Minkler, and James LaRue).
Cultural effect
In 1997, Ken Perlin of the Mathematical Applications Group, Inc. won an Academy Award for Technical Achievement for his invention of Perlin noise for Tron.
The film, considered groundbreaking, has inspired several individuals in numerous ways. John Lasseter, head of Pixar and Disney's animation group, described how the film helped him see the potential of computer-generated imagery in the production of animated films, stating "without Tron, there would be no Toy Story."
The two members of the French house music group Daft Punk, who scored the sequel and also had a cameo appearance in it, have held a joint, lifelong fascination with the film. Also, in Gorillaz' music video for the song "Feel Good Inc.", Russel, the fictional drummer of the band, can be seen wearing an Encom hat.
Tron developed into a cult film and was ranked as 13th in a 2010 list of the top 20 cult films published by The Boston Globe.
The film heavily inspired the music video for Danish pop/dance group Infernal's 2006 hit single "From Paris to Berlin". The music video for Australian rock band Regurgitator's 1997 song "Everyday Formula" was also heavily inspired by the film and recreates several scenes.
In 2008, the American Film Institute nominated this film for its Top 10 Science Fiction Films list.
Books
A novelization of Tron was released in 1982, written by American science fiction novelist Brian Daley. It included eight pages of color photographs from the movie. In the same year, Disney Senior Staff Publicist Michael Bonifer authored a book entitled The Art of Tron, which covered the pre-production and post-production aspects of Tron. A nonfiction book about the making of the original film, The Making of Tron: How Tron Changed Visual Effects and Disney Forever, was written by William Kallay and published in 2011.
Television
Tron made its television debut as part of the Disney Channel's first day of programming, on April 18, 1983, at 7:00PM (ET).
Home media
Tron was originally released on VHS, Betamax, LaserDisc, and CED Videodisc on December 1, 1982. As with most video releases from the 1980s, the film was cropped to the 4:3 pan and scan format. The film saw multiple re-releases throughout the 1990s, most notably an "Archive Collection" LaserDisc box set, which featured the first release of the film in its original widescreen 2.20:1 format. By 1993, Tron had grossed in video rentals. Tron saw its first DVD release on December 12, 2000. This bare-bones release utilized the same non-anamorphic video transfer used in the Archive Collection LaserDisc set, and it did not include any of the LD's special features. On January 15, 2002, the film received a 20th Anniversary Collector's Edition release in the forms of a VHS and a special 2-Disc DVD set. This set featured a new THX mastered anamorphic video transfer and included all of the special features from the LD Archive Collection release, plus an all-new 90 minute "Making of Tron" documentary.
To tie in with the home video release of Tron: Legacy, the movie was finally re-released by Walt Disney Studios Home Entertainment on Special Edition DVD and for the first time on Blu-ray Disc on April 5, 2011, with the subtitle "The Original Classic" to distinguish it from its sequel. Tron was also featured in a 5-Disc Blu-ray Combo with the 3D copy of Tron: Legacy. The film was re-released on Blu-ray and DVD in the UK on June 27, 2011.
Theme parks
In Disneyland, the PeopleMover attraction was updated in 1982 to include Tron film projections in the SuperSpeed Tunnel section of the ride, which was announced as the Game Grid of Tron by the on-board audio guide. After this addition, the attraction was advertised as the PeopleMover Thru the World of Tron.
In 2016, Shanghai Disneyland opened Tron Lightcycle Power Run, a semi-enclosed, launched roller coaster based on the original film and its sequel. Walt Disney World opened a nearly identical version in 2023, called Tron Lightcycle / Run. Both are in the Tomorrowland themed areas at each park.
Sequels
Tron: Legacy
On January 12, 2005, Disney announced it had hired screenwriters Brian Klugman and Lee Sternthal to write a sequel to Tron. In 2008, director Joseph Kosinski negotiated to develop and direct TRON, described as "the next chapter" of the 1982 film and based on a preliminary teaser trailer shown at that year's San Diego Comic-Con, with Lisberger co-producing. Filming began in Vancouver, British Columbia in April 2009. During the 2009 Comic-Con, the sequel's title was revealed to be Tron: Legacy. The second trailer (also with the Tron: Legacy logo) was released in 3D with Alice In Wonderland. A third trailer premiered at Comic-Con 2010 on July 22. At Disney's D23 Expo on September 10–13, 2009, Disney also debuted teaser trailers for Tron: Legacy and displayed a light cycle and other props from the film. The film was released on December 17, 2010, with Daft Punk composing the score.
Tron: Uprising (TV series)
Tron: Uprising is a 2012 animated series set between the events of the first two films. In the series, young program Beck becomes the leader of a revolution inside the computer world of the Grid, tasked with the mission of freeing his home and friends from the reign of Clu and his henchman, General Tesler. To prepare for the challenge, Beck is mentored by Tron – the greatest warrior The Grid has ever known – as he grows beyond his youthful nature into a courageous and powerful leader. Destined to become the system's new protector, Beck adopts Tron's persona to battle the forces of evil.
Tron: Ares
In October 2010, a third film was announced to be in development, with Kosinski returning as director with a script co-written by Adam Horowitz and Edward Kitsis. The concept and ideas for a third film continued behind the scenes, from August 2016 to March 2017, when Jared Leto was announced to have signed on to co-star as a new character named Ares. In March 2022, Leto confirmed that the film was still in development. By January 2023, Garth Davis had exited as director, with Joachim Rønning entering negotiations to replace him; while production was planned to begin in Vancouver by August 2023. Initially scheduled to begin on August 14, 2023, principal photography was delayed due to the 2023 Hollywood labor disputes. In June 2023, Evan Peters was set to join the cast. Following the conclusion of the strikes in early November 2023, filming was reportedly set to begin early 2024. In late November 2023 however, it was announced that production on the project would officially begin following the holiday season of the same year. The film is set to be released on October 10, 2025.
Further reading
See also
Tron (hacker)
Demoscene
Isekai
Golden age of arcade video games
Automan, a 1983 ABC television series inspired by the film
Superhuman Samurai Syber-Squad
Digimon Adventure
Code Lyoko
Zixx
ReBoot
References
External links
1982 films
1982 in computing
1980s science fiction action films
1980s science fiction adventure films
American films with live action and animation
American chase films
American science fiction action films
American science fiction adventure films
Films scored by Wendy Carlos
Films about artificial intelligence
Films about computer hacking
Films about computing
Films about video games
Films about virtual reality
Films adapted into comics
Films adapted into television shows
Films directed by Steven Lisberger
Films produced by Ron W. Miller
Films set in 1982
Films shot in Los Angeles
Religion in science fiction
Rotoscoped films
Tron films
Walt Disney Pictures films
1982 directorial debut films
Films produced by Donald Kushner
1980s English-language films
1980s American films
Films about death games
1982 science fiction films
English-language science fiction adventure films
English-language science fiction action films
Saturn Award–winning films | Tron | [
"Technology"
] | 7,812 | [
"Works about computing",
"Films about computing"
] |
312,833 | https://en.wikipedia.org/wiki/Dots%20per%20inch | Dots per inch (DPI, or dpi) is a measure of spatial printing, video or image scanner dot density, in particular the number of individual dots that can be placed in a line within the span of . Similarly, dots per centimetre (d/cm or dpcm) refers to the number of individual dots that can be placed within a line of .
DPI measurement in printing
DPI is used to describe the resolution number of dots per inch in a digital print and the printing resolution of a hard copy print dot gain, which is the increase in the size of the halftone dots during printing. This is caused by the spreading of ink on the surface of the media.
Up to a point, printers with higher DPI produce clearer and more detailed output. A printer does not necessarily have a single DPI measurement; it is dependent on print mode, which is usually influenced by driver settings. The range of DPI supported by a printer is most dependent on the print head technology it uses. A dot matrix printer, for example, applies ink via tiny rods striking an ink ribbon, and has a relatively low resolution, typically in the range of . An inkjet printer sprays ink through tiny nozzles, and is typically capable of 300–720 DPI. A laser printer applies toner through a controlled electrostatic charge, and may be in the range of 600 to 2,400 DPI.
The DPI measurement of a printer often needs to be considerably higher than the pixels per inch (PPI) measurement of a video display in order to produce similar-quality output. This is due to the limited range of colours for each dot typically available on a printer. At each dot position, the simplest type of color printer can either print no dot, or print a dot consisting of a fixed volume of ink in each of four color channels (typically CMYK with cyan, magenta, yellow and black ink) or 2⁴ = 16 colours on laser, wax and most inkjet printers, of which only 14 or 15 (or as few as 8 or 9) may be actually discernible depending on the strength of the black component, the strategy used for overlaying and combining it with the other colours, and whether it is in "color" mode.
Higher-end inkjet printers can offer 5, 6 or 7 ink colours giving 32, 64 or 128 possible tones per dot location (and again, it can be that not all combinations will produce a unique result). Contrast this to a standard sRGB monitor where each pixel produces 256 intensities of light in each of three channels (RGB).
While some color printers can produce variable drop volumes at each dot position, and may use additional ink-color channels, the number of colours is still typically less than on a monitor. Most printers must therefore produce additional colours through a halftone or dithering process, and rely on their base resolution being high enough to "fool" the human observer's eye into perceiving a patch of a single smooth colour.
The exception to this rule is dye-sublimation printers, which can apply a much more variable amount of dye—close to or exceeding the number of the 256 levels per channel available on a typical monitor—to each "pixel" on the page without dithering, but with other limitations:
lower spatial resolution (typically 200 to 300 dpi), which can make text and lines look somewhat rough
lower output speed (a single page requiring three or four complete passes, one for each dye colour, each of which may take more than fifteen seconds—generally quicker, however, than most inkjet printers' "photo" modes)
a wasteful (and, for confidential documents, insecure) dye-film roll cartridge system
occasional color registration errors (mainly along the long axis of the page), which necessitate recalibrating the printer to account for slippage and drift in the paper feed system.
These disadvantages mean that, despite their marked superiority in producing good photographic and non-linear diagrammatic output, dye-sublimation printers remain niche products, and thus other devices using higher resolution, lower color depth, and dither patterns remain the norm.
This dithered printing process could require a region of four to six dots (measured across each side) to accurately reproduce the color in a single pixel. An image that is 100 pixels wide may need to be 400 to 600 dots in width in the printed output; if a 100 × 100-pixel image is to be printed in a one-inch square, the printer must be capable of 400 to 600 dots per inch to reproduce the image. As such, 600 dpi (sometimes 720) is now the typical output resolution of entry-level laser printers and some utility inkjet printers, with 1,200–1,440 and 2,400–2,880 being common "high" resolutions. This contrasts with the 300–360 (or 240) dpi of early models, and the approximate 200 dpi of dot-matrix printers and fax machines, which gave faxed and computer-printed documents—especially those that made heavy use of graphics or coloured block text—a characteristic "digitized" appearance, because of their coarse, obvious dither patterns, inaccurate colours, loss of clarity in photographs, and jagged ("aliased") edges on some text and line art.
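The arithmetic in the preceding paragraph can be sketched in a few lines of Python; the function name and the example values are illustrative only, not part of any printer driver API:

```python
# Minimal sketch: if each image pixel is reproduced with an n-by-n cell of
# printer dots, the printer needs n times the image's pixels per inch in DPI.
def required_printer_dpi(image_ppi, dither_cell_dots):
    return image_ppi * dither_cell_dots

# A 100 x 100 pixel image printed in a one-inch square (100 PPI):
for cell in (4, 5, 6):
    print(cell, required_printer_dpi(100, cell))  # 400, 500, 600 DPI
```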
DPI or PPI in digital image files
In printing, DPI (dots per inch) refers to the output resolution of a printer or imagesetter, and PPI (pixels per inch) refers to the input resolution of a photograph or image.
DPI refers to the physical dot density of an image when it is reproduced as a real physical entity, for example printed onto paper. A digitally stored image has no inherent physical dimensions, measured in inches or centimetres. Some digital file formats record a DPI value, or more commonly a PPI (pixels per inch) value, which is to be used when printing the image. This number lets the printer or software know the intended size of the image, or in the case of scanned images, the size of the original scanned object. For example, a bitmap image may measure 1,000 × 1,000 pixels, a resolution of 1 megapixel. If it is labelled as 250 PPI, that is an instruction to the printer to print it at a size of 4 × 4 inches. Changing the PPI to 100 in an image editing program would tell the printer to print it at a size of 10 × 10 inches. However, changing the PPI value would not change the size of the image in pixels which would still be 1,000 × 1,000. An image may also be resampled to change the number of pixels and therefore the size or resolution of the image, but this is quite different from simply setting a new PPI for the file.
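As a rough illustration of how a stored PPI value maps pixel dimensions to an intended print size, here is a hypothetical helper (not part of any image-format library):

```python
# Hypothetical helper: intended print size in inches for given pixel
# dimensions and a stored PPI value.
def print_size_inches(width_px, height_px, ppi):
    return (width_px / ppi, height_px / ppi)

print(print_size_inches(1000, 1000, 250))  # (4.0, 4.0)   -> 4 x 4 inches
print(print_size_inches(1000, 1000, 100))  # (10.0, 10.0) -> 10 x 10 inches
# The pixel count is unchanged in both cases; only the intended size differs.
```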
For vector images, since the file is resolution independent, there is no need to resample the image before resizing it as it prints equally well at all sizes. However, there is still a target printing size. Some image formats, such as Photoshop format, can contain both bitmap and vector data in the same file. Adjusting the PPI in a Photoshop file will change the intended printing size of the bitmap portion of the data and also change the intended printing size of the vector data to match. This way the vector and bitmap data maintain a consistent size relationship when the target printing size is changed. Text stored as outline fonts in bitmap image formats is handled in the same way. Other formats, such as PDF, are primarily vector formats that can contain images, potentially at a mixture of resolutions. In these formats the target PPI of the bitmaps is adjusted to match when the target print size of the file is changed. This is the converse of how it works in a primarily bitmap format like Photoshop, but has exactly the same result of maintaining the relationship between the vector and bitmap portions of the data.
Computer monitor DPI standards
Since the 1980s, Macs have set the default display "DPI" to 72 PPI, while the Microsoft Windows operating system has used a default of 96 PPI. These default specifications arose out of the problems rendering standard fonts in the early display systems of the 1980s, including the IBM-based CGA, EGA, VGA and 8514 displays as well as the Macintosh displays featured in the 128K computer and its successors. The choice of 72 PPI by Macintosh for their displays arose from existing convention: the official 72 points per inch mirrored the 72 pixels per inch that appeared on their display screens. (Points are a physical unit of measure in typography, dating from the days of printing presses, where 1 point by the modern definition is 1⁄72 of the international inch (25.4 mm), which therefore makes 1 point approximately 0.0139 in or 352.8 μm). Thus, the 72 pixels per inch seen on the display had exactly the same physical dimensions as the 72 points per inch later seen on a printout, with 1 pt in printed text equal to 1 px on the display screen. As it is, the Macintosh 128K featured a screen measuring 512 pixels in width by 342 pixels in height, and this corresponded to the width of standard office paper (512 px ÷ 72 px/in ≈ 7.1 in, with a 0.7 in margin down each side when assuming 8.5 in × 11 in North American paper size; in the rest of the world, it is 210 mm × 297 mm – called A4. B5 is 176 mm × 250 mm).
A consequence of Apple's decision was that the widely used 10-point fonts from the typewriter era had to be allotted 10 display pixels in em height, and 5 display pixels in x-height. This is technically described as 10 pixels per em (PPEm). This made 10-point fonts be rendered crudely and made them difficult to read on the display screen, particularly the lowercase characters. Furthermore, there was the consideration that computer screens are typically viewed (at a desk) at a distance 30% greater than printed materials, causing a mismatch between the perceived sizes seen on the computer screen and those on the printouts.
Microsoft tried to solve both problems with a hack that has had long-term consequences for the understanding of what DPI and PPI mean. Microsoft began writing its software to treat the screen as though it provided a PPI characteristic that is 4⁄3 of what the screen actually displayed. Because most screens at the time provided around 72 PPI, Microsoft essentially wrote its software to assume that every screen provides 96 PPI (because 72 × 4⁄3 = 96). The short-term gain of this trickery was twofold:
It would seem to the software that one-third more pixels were available for rendering an image, thereby allowing for bitmap fonts to be created with greater detail.
On every screen that actually provided 72 PPI, each graphical element (such as a character of text) would be rendered at a size one third larger than it "should" be, thereby allowing a person to sit a comfortable distance from the screen. However, larger graphical elements meant less screen space was available for programs to draw. Indeed, the default 720-pixel wide mode of a Hercules mono graphics adaptor (the one-time gold standard for high resolution PC graphics) – or a "tweaked" VGA adaptor – provided an apparent -inch page width at this resolution. However, the more common and colour-capable display adaptors of the time all provided a 640-pixel wide image in their high resolution modes, enough for a bare inches at 100% zoom, with barely any greater visible page height – a maximum of 5 inches, versus . Consequently, the default margins in Microsoft Word were set, and still remain at 1 full inch on all sides of the page, keeping the "text width" for standard size printer paper within visible limits; despite most computer monitors now being both larger and finer-pitched, and printer paper transports having become more sophisticated, the Mac-standard half-inch borders remain listed in Word 2010's page layout presets as the "narrow" option (versus the 1-inch default).
Without using supplemental, software-provided zoom levels, the 1:1 relationship between display and print size was (deliberately) lost; the availability of different-sized, user-adjustable monitors and display adaptors with varying output resolutions exacerbated this, as it was not possible to rely on a properly-adjusted "standard" monitor and adaptor having a known PPI. For example, a 12-inch Hercules monitor and adaptor with a thick bezel and a little underscan may offer 90 "physical" PPI, with the displayed image appearing nearly identical to hardcopy (assuming the H-scan density was properly adjusted to give square pixels) but a thin-bezel 14-inch VGA monitor adjusted to give a borderless display may be closer to 60, with the same bitmap image thus appearing 50% larger; yet, someone with an 8514 ("XGA") adaptor and the same monitor could achieve 100 DPI using its 1024-pixel wide mode and adjusting the image to be underscanned. A user who wanted to directly compare on-screen elements against those on an existing printed page by holding it up against the monitor would therefore first need to determine the correct zoom level to use, largely by trial and error, and often not be able to obtain an exact match in programs that only allowed integer per cent settings, or even fixed pre-programmed zoom levels. For the examples above, they may need to use respectively 94% (precisely, 93.75) – or 15⁄16, 63% (62.5) – or 5⁄8; and 104% (104.167) – or 25⁄24, with the more commonly accessible 110% actually being a less precise match.
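The zoom matching described above amounts to a simple ratio; the sketch below uses the monitor figures from this section and assumes the software renders for a logical 96 PPI display:

```python
# Zoom level at which on-screen size matches print size, assuming the
# software assumes a logical 96 PPI display.
def matching_zoom_percent(physical_ppi, logical_ppi=96.0):
    return 100.0 * physical_ppi / logical_ppi

for ppi in (90, 60, 100):
    print(ppi, round(matching_zoom_percent(ppi), 3))
# 90 -> 93.75 %, 60 -> 62.5 %, 100 -> 104.167 %
```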
Thus, for example, a 10-point font on a Macintosh (at 72 PPI) was represented with 10 pixels (i.e., 10 PPEm), whereas a 10-point font on a Windows platform (at 96 PPI) at the same zoom level is represented with 13 pixels (i.e., Microsoft rounded to 13 pixels, or 13 PPEm) – and, on a typical consumer grade monitor, would have physically appeared around to inch high instead of . Likewise, a 12-point font was represented with 12 pixels on a Macintosh, and 16 pixels (or a physical display height of maybe inch) on a Windows platform at the same zoom, and so on. The negative consequence of this standard is that with 96 PPI displays, there is no longer a one-to-one relationship between the font size in pixels and the printout size in points. This difference is accentuated on more recent displays that feature higher pixel densities. This has been less of a problem with the advent of vector graphics and fonts being used in place of bitmap graphics and fonts. Moreover, many Windows software programs have been written since the 1980s which assume that the screen provides 96 PPI. Accordingly, these programs do not display properly at common alternative resolutions such as 72 PPI or 120 PPI. The solution has been to introduce two concepts:
logical PPI: The PPI that software claims a screen provides. This can be thought of as the PPI provided by a virtual screen created by the operating system.
physical PPI: The PPI that a physical screen actually provides.
Software programs render images to the virtual screen and then the operating system renders the virtual screen onto the physical screen. With a logical PPI of 96 PPI, older programs can still run properly regardless of the actual physical PPI of the display screen, although they may exhibit some visual distortion thanks to the effective 133.3% pixel zoom level (requiring either that every third pixel be doubled in width/height, or heavy-handed smoothing be employed).
How Microsoft Windows handles DPI scaling
Displays with high pixel densities were not common up to the Windows XP era. High DPI displays became mainstream around the time Windows 8 was released. Display scaling by entering a custom DPI irrespective of the display resolution has been a feature of Microsoft Windows since Windows 95. Windows XP introduced the GDI+ library which allows resolution-independent text scaling. In Microsoft Windows, DPI settings higher than 96 DPI are referred to as high DPI.
Windows Vista introduced support for programs to declare themselves to the OS that they are high-DPI aware via a manifest file or using an API. For programs that do not declare themselves as DPI-aware, Windows Vista supports a compatibility feature called DPI virtualization so system metrics and UI elements are presented to applications as if they are running at 96 DPI and the Desktop Window Manager then scales the resulting application window to match the DPI setting. Windows Vista retains the Windows XP style scaling option which when enabled turns off DPI virtualization for all applications globally. DPI virtualization is a compatibility option as application developers are all expected to update their apps to support high DPI without relying on DPI virtualization.
Windows Vista also introduces Windows Presentation Foundation. WPF .NET applications are vector-based, not pixel-based and are designed to be resolution-independent. Developers using the old GDI API and Windows Forms on .NET Framework runtime need to update their apps to be DPI aware and flag their applications as DPI-aware.
Windows 7 adds the ability to change the DPI by doing only a log off, not a full reboot and makes it a per-user setting. Additionally, Windows 7 reads the pixel density related information from the EDID and automatically sets the system DPI value to match the monitor's physical pixel density, unless the effective resolution is less than 1024 × 768. Windows 7 also adds DirectWrite, which is optimised for monitors larger than 1080p.
In Windows 8, only the DPI scaling percentage is shown in the DPI changing dialog and the display of the raw DPI value has been removed. In Windows 8.1, the global setting to disable DPI virtualization (only use XP-style scaling) is removed and a per-app setting added for the user to disable DPI virtualization from the Compatibility tab. When the DPI scaling setting is set to be higher than 120 PPI (125%), DPI virtualization is enabled for all applications unless the application opts out of it by specifying a DPI aware flag (manifest) as "true" inside the EXE. Windows 8.1 retains a per-application option to disable DPI virtualization of an app. Windows 8.1 also adds the ability for different displays to use independent DPI scaling factors, although it calculates this automatically for each display and turns on DPI virtualization for all monitors at any scaling level.
Windows 10 adds manual control over DPI scaling for individual monitors.
Proposed metrication
There are some ongoing efforts to abandon the DPI image resolution unit in favour of a metric unit, giving the inter-dot spacing in dots per centimetre (px/cm or dpcm), as used in CSS3 media queries or micrometres (μm) between dots. A resolution of 72 DPI, for example, equals a resolution of about 28 dpcm or an inter-dot spacing of about 353 μm.
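The conversions involved are straightforward (1 inch is exactly 25.4 mm); a brief sketch:

```python
# Convert a DPI figure to dots per centimetre and to the spacing between
# dot centres in micrometres (1 inch = 25.4 mm exactly).
def dpi_to_dpcm(dpi):
    return dpi / 2.54

def dot_spacing_um(dpi):
    return 25_400.0 / dpi

print(round(dpi_to_dpcm(72), 1))    # ~28.3 dots per centimetre
print(round(dot_spacing_um(72), 1)) # ~352.8 micrometres
```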
See also
Pixel density
Samples per inch – a related concept for image scanners
Lines per inch
Metric typographic units
Display resolution
Mouse DPI
Twip
Points Per Degree
References
External links
All About Digital Photos – The Myth of DPI
Monitor DPI detector
A Pixels to Inches Calculator based on DPI/PPI
Printing terminology
Units of density
Computer printing
Display technology | Dots per inch | [
"Physics",
"Mathematics",
"Engineering"
] | 4,071 | [
"Physical quantities",
"Units of density",
"Quantity",
"Electronic engineering",
"Density",
"Display technology",
"Units of measurement"
] |
312,853 | https://en.wikipedia.org/wiki/Totally%20real%20number%20field | In number theory, a number field F is called totally real if for each embedding of F into the complex numbers the image lies inside the real numbers. Equivalent conditions are that F is generated over Q by one root of an integer polynomial P, all of the roots of P being real; or that the tensor product algebra of F with the real field, over Q, is isomorphic to a tensor power of R.
For example, quadratic fields F of degree 2 over Q are either real (and then totally real), or complex, depending on whether the square root of a positive or negative number is adjoined to Q. In the case of cubic fields, a cubic integer polynomial P irreducible over Q will have at least one real root. If it has one real and two complex roots the corresponding cubic extension of Q defined by adjoining the real root will not be totally real, although it is a field of real numbers.
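As an informal numerical check of this criterion (a sketch only: the tolerance-based test is not a rigorous proof, and the two polynomials are illustrative choices):

```python
import numpy as np

# A field Q(a), with a a root of an irreducible integer polynomial P, is
# totally real exactly when every root of P is real.  Numerical check only.
def all_roots_real(coeffs, tol=1e-9):
    return bool(np.all(np.abs(np.roots(coeffs).imag) < tol))

print(all_roots_real([1, 0, -3, -1]))  # x^3 - 3x - 1: three real roots -> True
print(all_roots_real([1, 0, 0, -2]))   # x^3 - 2: one real, two complex -> False
```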
The totally real number fields play a significant special role in algebraic number theory. An abelian extension of Q is either totally real, or contains a totally real subfield over which it has degree two.
Any number field that is Galois over the rationals must be either totally real or totally imaginary.
See also
Totally imaginary number field
CM-field, a totally imaginary quadratic extension of a totally real field
References
Field (mathematics)
Algebraic number theory | Totally real number field | [
"Mathematics"
] | 281 | [
"Algebraic number theory",
"Number theory"
] |
312,867 | https://en.wikipedia.org/wiki/Local%20analysis | In algebraic geometry and related areas of mathematics, local analysis is the practice of looking at a problem relative to each prime number p first, and then later trying to integrate the information gained at each prime into a 'global' picture. These are forms of the localization approach.
Group theory
In group theory, local analysis was started by the Sylow theorems, which contain significant information about the structure of a finite group G for each prime number p dividing the order of G. This area of study was enormously developed in the quest for the classification of finite simple groups, starting with the Feit–Thompson theorem that groups of odd order are solvable.
Number theory
In number theory one may study a Diophantine equation, for example, modulo p for all primes p, looking for constraints on solutions. The next step is to look modulo prime powers, and then for solutions in the p-adic field. This kind of local analysis provides necessary conditions for the existence of a solution. In cases where local analysis (plus the condition that there are real solutions) also provides sufficient conditions, one says that the Hasse principle holds: this is the best possible situation. It does for quadratic forms, but certainly not in general (for example for elliptic curves). The point of view that one would like to understand what extra conditions are needed has been very influential, for example for cubic forms.
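A minimal sketch of the first step, reducing an equation modulo a small number and searching residues exhaustively (the equation, modulus and function name below are illustrative choices):

```python
from itertools import product

# Does f(x1, ..., xk) = 0 have a solution modulo m?  Brute-force residue search.
def solvable_mod(f, n_vars, m):
    return any(f(*xs) % m == 0 for xs in product(range(m), repeat=n_vars))

# x^2 + y^2 = 3 has no solutions modulo 4 (squares are 0 or 1 mod 4),
# so it can have no integer solutions either.
print(solvable_mod(lambda x, y: x * x + y * y - 3, 2, 4))  # False
```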
Some form of local analysis underlies both the standard applications of the Hardy–Littlewood circle method in analytic number theory, and the use of adele rings, making this one of the unifying principles across number theory.
See also
:Category:Localization (mathematics)
Localization of a category
Localization of a module
Localization of a ring
Localization of a topological space
Hasse principle
References
Number theory
Finite groups
Localization (mathematics) | Local analysis | [
"Mathematics"
] | 371 | [
"Discrete mathematics",
"Mathematical structures",
"Finite groups",
"Algebraic structures",
"Number theory"
] |
312,877 | https://en.wikipedia.org/wiki/Jordan%20normal%20form | In linear algebra, a Jordan normal form, also known as a Jordan canonical form,
is an upper triangular matrix of a particular form called a Jordan matrix representing a linear operator on a finite-dimensional vector space with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal (on the superdiagonal), and with identical diagonal entries to the left and below them.
Let V be a vector space over a field K. Then a basis with respect to which the matrix has the required form exists if and only if all eigenvalues of the matrix lie in K, or equivalently if the characteristic polynomial of the operator splits into linear factors over K. This condition is always satisfied if K is algebraically closed (for instance, if it is the field of complex numbers). The diagonal entries of the normal form are the eigenvalues (of the operator), and the number of times each eigenvalue occurs is called the algebraic multiplicity of the eigenvalue.
If the operator is originally given by a square matrix M, then its Jordan normal form is also called the Jordan normal form of M. Any square matrix has a Jordan normal form if the field of coefficients is extended to one containing all the eigenvalues of the matrix. In spite of its name, the normal form for a given M is not entirely unique, as it is a block diagonal matrix formed of Jordan blocks, the order of which is not fixed; it is conventional to group blocks for the same eigenvalue together, but no ordering is imposed among the eigenvalues, nor among the blocks for a given eigenvalue, although the latter could for instance be ordered by weakly decreasing size.
The Jordan–Chevalley decomposition is particularly simple with respect to a basis for which the operator takes its Jordan normal form. The diagonal form for diagonalizable matrices, for instance normal matrices, is a special case of the Jordan normal form.
The Jordan normal form is named after Camille Jordan, who first stated the Jordan decomposition theorem in 1870.
Overview
Notation
Some textbooks have the ones on the subdiagonal; that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal.
Motivation
An n × n matrix A is diagonalizable if and only if the sum of the dimensions of the eigenspaces is n. Or, equivalently, if and only if A has n linearly independent eigenvectors. Not all matrices are diagonalizable; matrices that are not diagonalizable are called defective matrices. Consider the following matrix:
Including multiplicity, the eigenvalues of A are λ = 1, 2, 4, 4. The dimension of the eigenspace corresponding to the eigenvalue 4 is 1 (and not 2), so A is not diagonalizable. However, there is an invertible matrix P such that J = P−1AP, where
The matrix is almost diagonal. This is the Jordan normal form of A. The section Example below fills in the details of the computation.
Complex matrices
In general, a square complex matrix A is similar to a block diagonal matrix
where each block Ji is a square matrix of the form
So there exists an invertible matrix P such that P−1AP = J is such that the only non-zero entries of J are on the diagonal and the superdiagonal. J is called the Jordan normal form of A. Each Ji is called a Jordan block of A. In a given Jordan block, every entry on the superdiagonal is 1.
Assuming this result, we can deduce the following properties:
Counting multiplicities, the eigenvalues of J, and therefore of A, are the diagonal entries.
Given an eigenvalue λi, its geometric multiplicity is the dimension of ker(A − λi I), where I is the identity matrix, and it is the number of Jordan blocks corresponding to λi.
The sum of the sizes of all Jordan blocks corresponding to an eigenvalue λi is its algebraic multiplicity.
A is diagonalizable if and only if, for every eigenvalue λ of A, its geometric and algebraic multiplicities coincide. In particular, the Jordan blocks in this case are 1 × 1 matrices; that is, scalars.
The Jordan block corresponding to λ is of the form λI + N, where N is a nilpotent matrix defined as Nij = δi,j−1 (where δ is the Kronecker delta). The nilpotency of N can be exploited when calculating f(A) where f is a complex analytic function. For example, in principle the Jordan form could give a closed-form expression for the exponential exp(A).
The number of Jordan blocks corresponding to λi of size at least j is dim ker(A − λiI)^j − dim ker(A − λiI)^(j−1). Thus, the number of Jordan blocks of size j is
Given an eigenvalue λi, its multiplicity in the minimal polynomial is the size of its largest Jordan block.
Example
Consider the matrix from the example in the previous section. The Jordan normal form is obtained by some similarity transformation:
that is,
Let have column vectors , , then
We see that
For we have , that is, is an eigenvector of corresponding to the eigenvalue . For , multiplying both sides by gives
But , so
Thus,
Vectors such as are called generalized eigenvectors of A.
Example: Obtaining the normal form
This example shows how to calculate the Jordan normal form of a given matrix.
Consider the matrix
which is mentioned in the beginning of the article.
The characteristic polynomial of A is
This shows that the eigenvalues are 1, 2, 4 and 4, according to algebraic multiplicity. The eigenspace corresponding to the eigenvalue 1 can be found by solving the equation Av = λv. It is spanned by the column vector v = (−1, 1, 0, 0)T. Similarly, the eigenspace corresponding to the eigenvalue 2 is spanned by w = (1, −1, 0, 1)T. Finally, the eigenspace corresponding to the eigenvalue 4 is also one-dimensional (even though this is a double eigenvalue) and is spanned by x = (1, 0, −1, 1)T. So, the geometric multiplicity (that is, the dimension of the eigenspace of the given eigenvalue) of each of the three eigenvalues is one. Therefore, the two eigenvalues equal to 4 correspond to a single Jordan block, and the Jordan normal form of the matrix A is the direct sum
There are three Jordan chains. Two have length one: {v} and {w}, corresponding to the eigenvalues 1 and 2, respectively. There is one chain of length two corresponding to the eigenvalue 4. To find this chain, calculate
where I is the 4 × 4 identity matrix. Pick a vector in the above span that is not in the kernel of A − 4I; for example, y = (1,0,0,0)T. Now, (A − 4I)y = x and (A − 4I)x = 0, so {y, x} is a chain of length two corresponding to the eigenvalue 4.
The transition matrix P such that P−1AP = J is formed by putting these vectors next to each other as follows
A computation shows that the equation P−1AP = J indeed holds.
If we had interchanged the order in which the chain vectors appeared, that is, changing the order of v, w and {x, y} together, the Jordan blocks would be interchanged. However, the resulting Jordan forms are equivalent.
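A Jordan form of this kind can be checked with a computer algebra system. In the hedged sketch below, the matrix A is built from a chosen transition matrix and a target Jordan matrix with eigenvalues 1, 2 and a 2 × 2 block for 4; it is an illustrative stand-in with the same block structure, not necessarily the example matrix used in this section:

```python
import sympy as sp

# Build A = P0 * J0 * P0^(-1) from a known Jordan matrix J0, then recover
# the Jordan form with SymPy and verify the similarity A = P * J * P^(-1).
J0 = sp.Matrix([[1, 0, 0, 0],
                [0, 2, 0, 0],
                [0, 0, 4, 1],
                [0, 0, 0, 4]])
P0 = sp.Matrix([[1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 1, 1, 0],
                [0, 1, 1, 1]])   # any invertible matrix works here
A = P0 * J0 * P0.inv()

P, J = A.jordan_form()           # J is block diagonal with blocks for 1, 2 and 4
print(J)
assert P * J * P.inv() == A      # exact, since all entries are rational
```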
Generalized eigenvectors
Given an eigenvalue λ, every corresponding Jordan block gives rise to a Jordan chain of linearly independent vectors pi, i = 1, ..., b, where b is the size of the Jordan block. The generator, or lead vector, pb of the chain is a generalized eigenvector such that (A − λI)^b pb = 0. The vector p1 = (A − λI)^(b−1) pb is an ordinary eigenvector corresponding to λ. In general, pi is a preimage of pi−1 under A − λI. So the lead vector generates the chain via multiplication by A − λI. Therefore, the statement that every square matrix A can be put in Jordan normal form is equivalent to the claim that the underlying vector space has a basis composed of Jordan chains.
A proof
We give a proof by induction that any complex-valued square matrix A may be put in Jordan normal form. Since the underlying vector space can be shown to be the direct sum of invariant subspaces associated with the eigenvalues, A can be assumed to have just one eigenvalue λ. The 1 × 1 case is trivial. Let A be an n × n matrix. The range of A − λI, denoted by Ran(A − λI), is an invariant subspace of A. Also, since λ is an eigenvalue of A, the dimension of Ran(A − λI), r, is strictly less than n, so, by the inductive hypothesis, Ran(A − λI) has a basis {p1, ..., pr} composed of Jordan chains.
Next consider the kernel, that is, the subspace ker(A−λI). If
the desired result follows immediately from the rank–nullity theorem. (This would be the case, for example, if A were Hermitian.)
Otherwise, if
let the dimension of Q be s ≤ r. Each vector in Q is an eigenvector, so Ran(A − λI) must contain s Jordan chains corresponding to s linearly independent eigenvectors. Therefore the basis {p1, ..., pr} must contain s vectors, say {p1, ..., ps}, that are lead vectors of these Jordan chains. We can "extend the chains" by taking the preimages of these lead vectors. (This is the key step.) Let qi be such that
Finally, we can pick any basis for
and then lift to vectors {z1, ..., zt} in ker(A−λI). Each zi forms a Jordan chain of length 1. We just need to show that the union of {p1, ..., pr}, {z1, ..., zt}, and {q1, ..., qs} forms a basis for the vector space.
By the rank-nullity theorem, dim(ker(A−λI)) = n − r, so t = n − r − s, and so the number of vectors in the potential basis is equal to n. To show linear independence, suppose some linear combination of the vectors is 0. Applying A − λI, we get some linear combination of pi, with the qi becoming lead vectors among the pi. From linear independence of pi, it follows that the coefficients of the vectors qi must be zero. Furthermore, no non-trivial linear combination of the zi can equal a linear combination of pi, because then it would belong to Ran(A − λI) and thus Q, which is impossible by the construction of zi. Therefore the coefficients of the zi will also be 0. This leaves just pi terms, which are assumed to be linearly independent, and so these coefficients must be zero too. We have found a basis composed of Jordan chains, and this shows A can be put in Jordan normal form.
Uniqueness
It can be shown that the Jordan normal form of a given matrix A is unique up to the order of the Jordan blocks.
Knowing the algebraic and geometric multiplicities of the eigenvalues is not sufficient to determine the Jordan normal form of A. Assuming the algebraic multiplicity m(λ) of an eigenvalue λ is known, the structure of the Jordan form can be ascertained by analyzing the ranks of the powers (A − λI)^m(λ). To see this, suppose an n × n matrix A has only one eigenvalue λ. So m(λ) = n. The smallest integer k1 such that
(A − λI)^k1 = 0
is the size of the largest Jordan block in the Jordan form of A. (This number k1 is also called the index of λ. See discussion in a following section.) The rank of
(A − λI)^(k1 − 1)
is the number of Jordan blocks of size k1. Similarly, the rank of
(A − λI)^(k1 − 2)
is twice the number of Jordan blocks of size k1 plus the number of Jordan blocks of size k1 − 1. The general case is similar.
This can be used to show the uniqueness of the Jordan form. Let J1 and J2 be two Jordan normal forms of A. Then J1 and J2 are similar and have the same spectrum, including algebraic multiplicities of the eigenvalues. The procedure outlined in the previous paragraph can be used to determine the structure of these matrices. Since the rank of a matrix is preserved by similarity transformation, there is a bijection between the Jordan blocks of J1 and J2. This proves the uniqueness part of the statement.
Real matrices
If A is a real matrix, its Jordan form can still be non-real. Instead of representing it with complex eigenvalues and ones on the superdiagonal, as discussed above, there exists a real invertible matrix P such that P−1AP = J is a real block diagonal matrix with each block being a real Jordan block. A real Jordan block is either identical to a complex Jordan block (if the corresponding eigenvalue is real), or is a block matrix itself, consisting of 2×2 blocks (for non-real eigenvalue with given algebraic multiplicity) of the form
and describe multiplication by in the complex plane. The superdiagonal blocks are 2×2 identity matrices and hence in this representation the matrix dimensions are larger than the complex Jordan form. The full real Jordan block is given by
This real Jordan form is a consequence of the complex Jordan form. For a real matrix the nonreal eigenvectors and generalized eigenvectors can always be chosen to form complex conjugate pairs. Taking the real and imaginary part (linear combination of the vector and its conjugate), the matrix has this form with respect to the new basis.
Matrices with entries in a field
Jordan reduction can be extended to any square matrix M whose entries lie in a field K. The result states that any M can be written as a sum D + N where D is semisimple, N is nilpotent, and DN = ND. This is called the Jordan–Chevalley decomposition. Whenever K contains the eigenvalues of M, in particular when K is algebraically closed, the normal form can be expressed explicitly as the direct sum of Jordan blocks.
Similar to the case when K is the complex numbers, knowing the dimensions of the kernels of (M − λI)k for 1 ≤ k ≤ m, where m is the algebraic multiplicity of the eigenvalue λ, allows one to determine the Jordan form of M. We may view the underlying vector space V as a K[x]-module by regarding the action of x on V as application of M and extending by K-linearity. Then the polynomials (x − λ)k are the elementary divisors of M, and the Jordan normal form is concerned with representing M in terms of blocks associated to the elementary divisors.
The proof of the Jordan normal form is usually carried out as an application to the ring K[x] of the structure theorem for finitely generated modules over a principal ideal domain, of which it is a corollary.
Consequences
One can see that the Jordan normal form is essentially a classification result for square matrices, and as such several important results from linear algebra can be viewed as its consequences.
Spectral mapping theorem
Using the Jordan normal form, direct calculation gives a spectral mapping theorem for the polynomial functional calculus: Let A be an n × n matrix with eigenvalues λ1, ..., λn, then for any polynomial p, p(A) has eigenvalues p(λ1), ..., p(λn).
Characteristic polynomial
The characteristic polynomial of is . Similar matrices have the same characteristic polynomial.
Therefore, ,
where is the ith root of and is its multiplicity, because this is clearly the characteristic polynomial of the Jordan form of A.
Cayley–Hamilton theorem
The Cayley–Hamilton theorem asserts that every matrix A satisfies its characteristic equation: if is the characteristic polynomial of , then . This can be shown via direct calculation in the Jordan form, since if is an eigenvalue of multiplicity ,
then its Jordan block clearly satisfies .
As the diagonal blocks do not affect each other, the ith diagonal block of is ; hence .
The Jordan form can be assumed to exist over a field extending the base field of the matrix, for instance over the splitting field of ; this field extension does not change the matrix in any way.
Minimal polynomial
The minimal polynomial P of a square matrix A is the unique monic polynomial of least degree, m, such that P(A) = 0. Alternatively, the set of polynomials that annihilate a given A form an ideal I in C[x], the principal ideal domain of polynomials with complex coefficients. The monic element that generates I is precisely P.
Let λ1, ..., λq be the distinct eigenvalues of A, and si be the size of the largest Jordan block corresponding to λi. It is clear from the Jordan normal form that the minimal polynomial of A has degree si.
While the Jordan normal form determines the minimal polynomial, the converse is not true. This leads to the notion of elementary divisors. The elementary divisors of a square matrix A are the characteristic polynomials of its Jordan blocks. The factors of the minimal polynomial m are the elementary divisors of the largest degree corresponding to distinct eigenvalues.
The degree of an elementary divisor is the size of the corresponding Jordan block, therefore the dimension of the corresponding invariant subspace. If all elementary divisors are linear, A is diagonalizable.
Invariant subspace decompositions
The Jordan form of a n × n matrix A is block diagonal, and therefore gives a decomposition of the n dimensional Euclidean space into invariant subspaces of A. Every Jordan block Ji corresponds to an invariant subspace Xi. Symbolically, we put
where each Xi is the span of the corresponding Jordan chain, and k is the number of Jordan chains.
One can also obtain a slightly different decomposition via the Jordan form. Given an eigenvalue λi, the size of its largest corresponding Jordan block si is called the index of λi and denoted by v(λi). (Therefore, the degree of the minimal polynomial is the sum of all indices.) Define a subspace Yi by
This gives the decomposition
where l is the number of distinct eigenvalues of A. Intuitively, we glob together the Jordan block invariant subspaces corresponding to the same eigenvalue. In the extreme case where A is a multiple of the identity matrix we have k = n and l = 1.
The projection onto Yi and along all the other Yj ( j ≠ i ) is called the spectral projection of A at λi and is usually denoted by P(λi ; A). Spectral projections are mutually orthogonal in the sense that P(λi ; A) P(λj ; A) = 0 if i ≠ j. Also they commute with A and their sum is the identity matrix. Replacing every λi in the Jordan matrix J by one and zeroing all other entries gives P(λi ; J); moreover if U J U−1 is the similarity transformation such that A = U J U−1 then P(λi ; A) = U P(λi ; J) U−1. They are not confined to finite dimensions. See below for their application to compact operators, and in holomorphic functional calculus for a more general discussion.
Comparing the two decompositions, notice that, in general, l ≤ k. When A is normal, the subspaces Xi's in the first decomposition are one-dimensional and mutually orthogonal. This is the spectral theorem for normal operators. The second decomposition generalizes more easily for general compact operators on Banach spaces.
It might be of interest here to note some properties of the index, ν(λ). More generally, for a complex number λ, its index can be defined as the least non-negative integer ν(λ) such that
So ν(λ) > 0 if and only if λ is an eigenvalue of A. In the finite-dimensional case, ν(λ) ≤ the algebraic multiplicity of λ.
Plane (flat) normal form
The Jordan form is used to find a normal form of matrices up to conjugacy such that normal matrices make up an algebraic variety of a low fixed degree in the ambient matrix space.
Sets of representatives of matrix conjugacy classes for Jordan normal form or rational canonical forms in general do not constitute linear or
affine subspaces in the ambient matrix spaces.
Vladimir Arnold posed a problem:
Find a canonical form of matrices over a field for which the set of representatives of matrix conjugacy classes is a union of affine linear subspaces (flats). In other words, map the set of matrix conjugacy classes injectively back into the initial set of matrices so that the image of this embedding—the set of all normal matrices, has the lowest possible degree—it is a union of shifted linear subspaces.
It was solved for algebraically closed fields by Peteris Daugulis.
The construction of a uniquely defined plane normal form of a matrix starts by considering its Jordan normal form.
Matrix functions
Iteration of the Jordan chain motivates various extensions to more abstract settings. For finite matrices, one gets matrix functions; this can be extended to compact operators and the holomorphic functional calculus, as described further below.
The Jordan normal form is the most convenient for computation of the matrix functions (though it may not be the best choice for computer computations). Let f(z) be an analytical function of a complex argument. Applying the function to an n×n Jordan block J with eigenvalue λ results in an upper triangular matrix:
so that the elements of the k-th superdiagonal of the resulting matrix are f^(k)(λ)/k!, where f^(k) denotes the k-th derivative of f. For a matrix of general Jordan normal form the above expression is applied to each Jordan block.
The following example shows the application to the power function f(z) = zn:
where the binomial coefficients are defined as C(n, k) = n(n − 1)⋯(n − k + 1)/k!. For positive integer n it reduces to the standard definition
of the coefficients. For negative n the identity C(−n, k) = (−1)^k C(n + k − 1, k) may be of use.
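This superdiagonal structure can be checked numerically for a single Jordan block; the block size, eigenvalue and exponent below are arbitrary illustrative values:

```python
import sympy as sp

# For a single m x m Jordan block J = lam*I + N, the entry k places above the
# diagonal of J**n equals binomial(n, k) * lam**(n - k).
lam, m, n = 3, 4, 5
N = sp.Matrix(m, m, lambda i, j: 1 if j == i + 1 else 0)  # nilpotent part
J = lam * sp.eye(m) + N
Jn = J ** n
for k in range(m):
    assert Jn[0, k] == sp.binomial(n, k) * lam ** (n - k)
print(Jn)
```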
Compact operators
A result analogous to the Jordan normal form holds for compact operators on a Banach space. One restricts to compact operators because every point x in the spectrum of a compact operator T is an eigenvalue; the only exception is when x is the limit point of the spectrum. This is not true for bounded operators in general. To give some idea of this generalization, we first reformulate the Jordan decomposition in the language of functional analysis.
Holomorphic functional calculus
Let X be a Banach space, L(X) be the bounded operators on X, and σ(T) denote the spectrum of T ∈ L(X). The holomorphic functional calculus is defined as follows:
Fix a bounded operator T. Consider the family Hol(T) of complex functions that is holomorphic on some open set G containing σ(T). Let Γ = {γi} be a finite collection of Jordan curves such that σ(T) lies in the inside of Γ; we define f(T) by
The open set G could vary with f and need not be connected. The integral is defined as the limit of the Riemann sums, as in the scalar case. Although the integral makes sense for continuous f, we restrict to holomorphic functions to apply the machinery from classical function theory (for example, the Cauchy integral formula). The assumption that σ(T) lie in the inside of Γ ensures f(T) is well defined; it does not depend on the choice of Γ. The functional calculus is the mapping Φ from Hol(T) to L(X) given by
We will require the following properties of this functional calculus:
Φ extends the polynomial functional calculus.
The spectral mapping theorem holds: σ(f(T)) = f(σ(T)).
Φ is an algebra homomorphism.
The finite-dimensional case
In the finite-dimensional case, σ(T) = {λi} is a finite discrete set in the complex plane. Let ei be the function that is 1 in some open neighborhood of λi and 0 elsewhere. By property 3 of the functional calculus, the operator
is a projection. Moreover, let νi be the index of λi and
The spectral mapping theorem tells us
has spectrum {0}. By property 1, f(T) can be directly computed in the Jordan form, and by inspection, we see that the operator f(T)ei(T) is the zero matrix.
By property 3, f(T) ei(T) = ei(T) f(T). So ei(T) is precisely the projection onto the subspace
The relation
implies
where the index i runs through the distinct eigenvalues of T. This is the invariant subspace decomposition
given in a previous section. Each ei(T) is the projection onto the subspace spanned by the Jordan chains corresponding to λi and along the subspaces spanned by the Jordan chains corresponding to λj for j ≠ i. In other words, ei(T) = P(λi;T). This explicit identification of the operators ei(T) in turn gives an explicit form of holomorphic functional calculus for matrices:
For all f ∈ Hol(T),
Notice that the expression of f(T) is a finite sum because, on each neighborhood of λi, we have chosen the Taylor series expansion of f centered at λi.
Poles of an operator
Let T be a bounded operator and let λ be an isolated point of σ(T). (As stated above, when T is compact, every point in its spectrum is an isolated point, except possibly the limit point 0.)
The point λ is called a pole of operator T with order ν if the resolvent function RT defined by
has a pole of order ν at λ.
We will show that, in the finite-dimensional case, the order of an eigenvalue coincides with its index. The result also holds for compact operators.
Consider the annular region A centered at the eigenvalue λ with sufficiently small radius ε such that the intersection of the open disc Bε(λ) and σ(T) is {λ}. The resolvent function RT is holomorphic on A.
Extending a result from classical function theory, RT has a Laurent series representation on A:
where
and C is a small circle centered at λ.
By the previous discussion on the functional calculus,
where eλ is the function that is 1 on Bε(λ) and 0 elsewhere.
But we have shown that the smallest positive integer m such that
and
is precisely the index of λ, ν(λ). In other words, the function RT has a pole of order ν(λ) at λ.
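For reference, the standard formulas behind this argument can be sketched as follows (notation assumed: C is the small circle around λ, eλ the function from the functional calculus that is 1 near λ, and m the order of the pole):

```latex
% Resolvent, its Laurent expansion about the isolated spectral point \lambda,
% and the condition characterizing a pole of order m = \nu(\lambda).
R_T(z) = (z - T)^{-1}, \qquad
R_T(z) = \sum_{n=-\infty}^{\infty} a_n (z - \lambda)^n, \quad
a_n = \frac{1}{2\pi i} \oint_{C} \frac{R_T(z)}{(z - \lambda)^{n+1}}\, dz,
\qquad
(T - \lambda)^{m} e_\lambda(T) = 0
\;\;\text{and}\;\;
(T - \lambda)^{m-1} e_\lambda(T) \neq 0 .
```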
Numerical analysis
If the matrix A has multiple eigenvalues, or is close to a matrix with multiple eigenvalues, then its Jordan normal form is very sensitive to perturbations. Consider for instance the matrix
If ε = 0, then the Jordan normal form is simply
However, for ε ≠ 0, the Jordan normal form is
This ill conditioning makes it very hard to develop a robust numerical algorithm for the Jordan normal form, as the result depends critically on whether two eigenvalues are deemed to be equal. For this reason, the Jordan normal form is usually avoided in numerical analysis; the stable Schur decomposition or pseudospectra are better alternatives.
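The following minimal NumPy sketch illustrates this sensitivity. The 2×2 matrix used here is an assumed example of the kind described above, not necessarily the one the text refers to: for ε = 0 it is a single Jordan block for the eigenvalue 1, while any nonzero ε makes it diagonalizable with eigenvalues 1 ± √ε.

```python
# Minimal sketch (assumed example): eigenvalue sensitivity that makes the
# Jordan normal form ill conditioned.
import numpy as np

def eigenvalues(eps: float) -> np.ndarray:
    """Eigenvalues of the perturbed matrix [[1, 1], [eps, 1]]."""
    a = np.array([[1.0, 1.0],
                  [eps, 1.0]])
    return np.linalg.eigvals(a)

for eps in (0.0, 1e-16, 1e-8):
    vals = eigenvalues(eps)
    # For eps = 0 the eigenvalue 1 is defective (one Jordan block of size 2);
    # for eps > 0 the eigenvalues are 1 +/- sqrt(eps), so a perturbation of
    # size eps moves the eigenvalues by sqrt(eps).  Deciding whether the two
    # computed eigenvalues are "equal" is what makes a robust Jordan-form
    # algorithm so hard.
    print(f"eps = {eps:g}: eigenvalues = {vals}")
```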
See also
Canonical basis
Canonical form
Frobenius normal form
Jordan matrix
Jordan–Chevalley decomposition
Matrix decomposition
Modal matrix
Weyr canonical form
Notes
References
Jordan Canonical Form article at mathworld.wolfram.com
Linear algebra
Matrix theory
Matrix normal forms
Matrix decompositions | Jordan normal form | [
"Mathematics"
] | 5,967 | [
"Linear algebra",
"Algebra"
] |
312,881 | https://en.wikipedia.org/wiki/Action%20%28physics%29 | In physics, action is a scalar quantity that describes how the balance of kinetic versus potential energy of a physical system changes with trajectory. Action is significant because it is an input to the principle of stationary action, an approach to classical mechanics that is simpler for multiple objects. Action and the variational principle are used in Feynman's formulation of quantum mechanics and in general relativity. For systems with small values of action similar to the Planck constant, quantum effects are significant.
In the simple case of a single particle moving with a constant velocity (thereby undergoing uniform linear motion), the action is the momentum of the particle times the distance it moves, added up along its path; equivalently, action is the difference between the particle's kinetic energy and its potential energy, times the duration for which it has that amount of energy.
More formally, action is a mathematical functional which takes the trajectory (also called path or history) of the system as its argument and has a real number as its result. Generally, the action takes different values for different paths. Action has dimensions of energy × time or momentum × length, and its SI unit is joule-second (like the Planck constant h).
Introduction
Introductory physics often begins with Newton's laws of motion, relating force and motion; action is part of a completely equivalent alternative approach with practical and educational advantages. However, the concept took many decades to supplant Newtonian approaches and remains a challenge to introduce to students.
Simple example
For the trajectory of a ball moving in the air on Earth, the action is defined between two points in time, t1 and t2, as the kinetic energy (KE) minus the potential energy (PE), integrated over time.
The action balances kinetic against potential energy.
The kinetic energy of a ball of mass m is ½mv², where v is the velocity of the ball; the potential energy is mgx, where g is the gravitational acceleration and x the height of the ball. Then the action between t1 and t2 is
The action value depends upon the trajectory taken by the ball through space and time. This makes the action an input to the powerful stationary-action principle for classical and for quantum mechanics. Newton's equations of motion for the ball can be derived from the action using the stationary-action principle, but the advantages of action-based mechanics only begin to appear in cases where Newton's laws are difficult to apply. Replace the ball with an electron: classical mechanics fails but stationary action continues to work. The energy difference in the simple action definition, kinetic minus potential energy, is generalized and called the Lagrangian for more complex cases.
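As a concrete sketch of the action just described (symbols assumed: m the mass, v the speed, g the gravitational acceleration, x the height):

```latex
% Action of the ball between times t_1 and t_2: kinetic minus potential
% energy, integrated over time.
S \;=\; \int_{t_1}^{t_2} \Bigl( \tfrac{1}{2}\, m\, v(t)^2 - m g\, x(t) \Bigr)\, dt .
```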
Planck's quantum of action
The Planck constant, written as or when including a factor of , is called the quantum of action. Like action, this constant has unit of energy times time. It figures in all significant quantum equations, like the uncertainty principle and the de Broglie wavelength. Whenever the value of the action approaches the Planck constant, quantum effects are significant.
History
Pierre Louis Maupertuis and Leonhard Euler working in the 1740s developed early versions of the action principle. Joseph Louis Lagrange clarified the mathematics when he invented the calculus of variations. William Rowan Hamilton made the next big breakthrough, formulating Hamilton's principle in 1853. Hamilton's principle became the cornerstone for classical work with different forms of action until Richard Feynman and Julian Schwinger developed quantum action principles.
Definitions
Expressed in mathematical language, using the calculus of variations, the evolution of a physical system (i.e., how the system actually progresses from one state to another) corresponds to a stationary point (usually, a minimum) of the action.
Action has the dimensions of [energy] × [time], and its SI unit is joule-second, which is identical to the unit of angular momentum.
Several different definitions of "the action" are in common use in physics. The action is usually an integral over time. However, when the action pertains to fields, it may be integrated over spatial variables as well. In some cases, the action is integrated along the path followed by the physical system.
The action is typically represented as an integral over time, taken along the path of the system between the initial time and the final time of the development of the system:
where the integrand L is called the Lagrangian. For the action integral to be well-defined, the trajectory has to be bounded in time and space.
Action (functional)
Most commonly, the term is used for a functional which takes a function of time and (for fields) space as input and returns a scalar. In classical mechanics, the input function is the evolution q(t) of the system between two times t1 and t2, where q represents the generalized coordinates. The action is defined as the integral of the Lagrangian L for an input evolution between the two times:
where the endpoints of the evolution are fixed, q(t1) = q1 and q(t2) = q2. According to Hamilton's principle, the true evolution qtrue(t) is an evolution for which the action is stationary (a minimum, maximum, or a saddle point). This principle results in the equations of motion in Lagrangian mechanics.
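In the usual notation, the action functional of the previous paragraph takes the form (a standard sketch; the symbols follow the conventions of the text):

```latex
% Action of an evolution q(t) with fixed endpoints q(t_1) = q_1, q(t_2) = q_2.
\mathcal{S}[q] \;=\; \int_{t_1}^{t_2} L\bigl(q(t), \dot{q}(t), t\bigr)\, dt .
```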
Abbreviated action (functional)
In addition to the action functional, there is another functional called the abbreviated action. In the abbreviated action, the input function is the path followed by the physical system without regard to its parameterization by time. For example, the path of a planetary orbit is an ellipse, and the path of a particle in a uniform gravitational field is a parabola; in both cases, the path does not depend on how fast the particle traverses the path.
The abbreviated action (sometimes written as S0) is defined as the integral of the generalized momenta,
for a system with Lagrangian L, along a path in the generalized coordinates q:
where q1 and q2 are the starting and ending coordinates.
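A standard way to write the generalized momenta and the abbreviated action they define (symbols assumed):

```latex
% Generalized momenta and the abbreviated action as a line integral along
% the path in configuration space from q_1 to q_2.
p_k = \frac{\partial L}{\partial \dot{q}_k}, \qquad
\mathcal{S}_0 \;=\; \int_{q_1}^{q_2} \mathbf{p} \cdot d\mathbf{q}
          \;=\; \int_{q_1}^{q_2} \sum_k p_k\, dq_k .
```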
According to Maupertuis's principle, the true path of the system is a path for which the abbreviated action is stationary.
Hamilton's characteristic function
When the total energy E is conserved, the Hamilton–Jacobi equation can be solved with the additive separation of variables:
where the time-independent function W(q1, q2, ..., qN) is called Hamilton's characteristic function. The physical significance of this function is understood by taking its total time derivative
This can be integrated to give
which is just the abbreviated action.
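The standard formulas behind this passage can be sketched as follows (E the conserved energy; notation assumed):

```latex
% Additive separation of the Hamilton–Jacobi equation, the total time
% derivative of Hamilton's characteristic function W, and its integral,
% which is the abbreviated action.
S(q_1,\dots,q_N, t) = W(q_1,\dots,q_N) - E\,t, \qquad
\frac{dW}{dt} = \sum_i \frac{\partial W}{\partial q_i}\,\dot{q}_i
             = \sum_i p_i\,\dot{q}_i, \qquad
W = \int \sum_i p_i\, dq_i .
```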
Action of a generalized coordinate
A variable Jk in the action-angle coordinates, called the "action" of the generalized coordinate qk, is defined by integrating a single generalized momentum around a closed path in phase space, corresponding to rotating or oscillating motion:
The corresponding canonical variable conjugate to Jk is its "angle" wk, for reasons described more fully under action-angle coordinates. The integration is over only a single variable qk and is therefore unlike the integrated dot product in the abbreviated action integral above. The Jk variable equals the change in Sk(qk) as qk is varied around the closed path. For several physical systems of interest, Jk is either a constant or varies very slowly; hence, the variable Jk is often used in perturbation calculations and in determining adiabatic invariants. For example, such variables are used in the calculation of planetary and satellite orbits.
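In the usual notation, the action variable of a generalized coordinate is the closed-path integral of its conjugate momentum (a standard sketch):

```latex
% Action variable of the coordinate q_k: one full cycle of the motion.
J_k \;=\; \oint p_k \, dq_k .
```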
Single relativistic particle
When relativistic effects are significant, the action of a point particle of mass m travelling a world line C parametrized by the proper time is
If instead, the particle is parametrized by the coordinate time t of the particle and the coordinate time ranges from t1 to t2, then the action becomes
where the Lagrangian is
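In standard form, the proper-time and coordinate-time expressions and the corresponding Lagrangian for a free relativistic particle can be sketched as (c the speed of light, v the coordinate velocity; notation assumed):

```latex
% Free relativistic point particle: action in proper-time form along the
% world line C, the equivalent coordinate-time form, and its Lagrangian.
S = -m c^2 \int_{C} d\tau, \qquad
S = -\int_{t_1}^{t_2} m c^2 \sqrt{1 - \frac{v^2}{c^2}}\; dt, \qquad
L = -m c^2 \sqrt{1 - \frac{v^2}{c^2}} .
```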
Action principles and related ideas
Physical laws are frequently expressed as differential equations, which describe how physical quantities such as position and momentum change continuously with time, space or a generalization thereof. Given the initial and boundary conditions for the situation, the "solution" to these empirical equations is one or more functions that describe the behavior of the system and are called equations of motion.
Action is a part of an alternative approach to finding such equations of motion. Classical mechanics postulates that the path actually followed by a physical system is that for which the action is minimized, or more generally, is stationary. In other words, the action satisfies a variational principle: the principle of stationary action (see also below). The action is defined by an integral, and the classical equations of motion of a system can be derived by minimizing the value of that integral.
The action principle provides deep insights into physics, and is an important concept in modern theoretical physics. Various action principles and related concepts are summarized below.
Maupertuis's principle
In classical mechanics, Maupertuis's principle (named after Pierre Louis Maupertuis) states that the path followed by a physical system is the one of least length (with a suitable interpretation of path and length). Maupertuis's principle uses the abbreviated action between two generalized points on a path.
Hamilton's principle
Hamilton's principle states that the differential equations of motion for any physical system can be re-formulated as an equivalent integral equation. Thus, there are two distinct approaches for formulating dynamical models.
Hamilton's principle applies not only to the classical mechanics of a single particle, but also to classical fields such as the electromagnetic and gravitational fields. Hamilton's principle has also been extended to quantum mechanics and quantum field theory—in particular the path integral formulation of quantum mechanics makes use of the concept—where a physical system explores all possible paths, with the phase of the probability amplitude for each path being determined by the action for the path; the final probability amplitude adds all paths using their complex amplitude and phase.
Hamilton–Jacobi equation
Hamilton's principal function is obtained from the action functional by fixing the initial time and the initial endpoint while allowing the upper time limit and the second endpoint to vary. Hamilton's principal function satisfies the Hamilton–Jacobi equation, a formulation of classical mechanics. Due to a similarity with the Schrödinger equation, the Hamilton–Jacobi equation provides, arguably, the most direct link with quantum mechanics.
Euler–Lagrange equations
In Lagrangian mechanics, the requirement that the action integral be stationary under small perturbations is equivalent to a set of differential equations (called the Euler–Lagrange equations) that may be obtained using the calculus of variations.
Classical fields
The action principle can be extended to obtain the equations of motion for fields, such as the electromagnetic field or gravitational field.
Maxwell's equations can be derived as conditions of stationary action.
The Einstein equation utilizes the Einstein–Hilbert action as constrained by a variational principle. The trajectory (path in spacetime) of a body in a gravitational field can be found using the action principle. For a free falling body, this trajectory is a geodesic.
Conservation laws
Implications of symmetries in a physical situation can be found with the action principle, together with the Euler–Lagrange equations, which are derived from the action principle. An example is Noether's theorem, which states that to every continuous symmetry in a physical situation there corresponds a conservation law (and conversely). This deep connection requires that the action principle be assumed.
Path integral formulation of quantum field theory
In quantum mechanics, the system does not follow a single path whose action is stationary, but the behavior of the system depends on all permitted paths and the value of their action. The action corresponding to the various paths is used to calculate the path integral, which gives the probability amplitudes of the various outcomes.
Although equivalent in classical mechanics with Newton's laws, the action principle is better suited for generalizations and plays an important role in modern physics. Indeed, this principle is one of the great generalizations in physical science. It is best understood within quantum mechanics, particularly in Richard Feynman's path integral formulation, where it arises out of destructive interference of quantum amplitudes.
Modern extensions
The action principle can be generalized still further. For example, the action need not be an integral, because nonlocal actions are possible. The configuration space need not even be a functional space, given certain features such as noncommutative geometry. However, a physical basis for these mathematical extensions remains to be established experimentally.
See also
Calculus of variations
Functional derivative
Functional integration
Hamiltonian mechanics
Lagrangian
Lagrangian mechanics
Measure (physics)
Noether's theorem
Path integral formulation
Principle of least action
Principle of maximum entropy
Some actions:
Nambu–Goto action
Polyakov action
Bagger–Lambert–Gustavsson action
Einstein–Hilbert action
References
Further reading
The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010, .
Dare A. Wells, Lagrangian Dynamics, Schaum's Outline Series (McGraw-Hill, 1967) , A 350-page comprehensive "outline" of the subject.
External links
Principle of least action interactive Interactive explanation/webpage
Lagrangian mechanics
Hamiltonian mechanics
Calculus of variations
Dynamics (mechanics) | Action (physics) | [
"Physics",
"Mathematics"
] | 2,670 | [
"Scalar physical quantities",
"Physical phenomena",
"Mechanical quantities",
"Physical quantities",
"Action (physics)",
"Theoretical physics",
"Lagrangian mechanics",
"Classical mechanics",
"Hamiltonian mechanics",
"Motion (physics)",
"Dynamics (mechanics)",
"Dynamical systems"
] |
312,903 | https://en.wikipedia.org/wiki/Excavator | Excavators are heavy construction equipment primarily consisting of a boom, dipper (or stick), bucket, and cab on a rotating platform known as the "house".
The modern excavator's house sits atop an undercarriage with tracks or wheels, being an evolution of the steam shovel (which itself evolved into the power shovel when steam was replaced by diesel and electric power). All excavation-related movement and functions of a hydraulic excavator are accomplished through the use of hydraulic fluid, with hydraulic cylinders and hydraulic motors, which replaced winches, chains, and steel ropes. Another principle change was the direction of the digging action, with modern excavators pulling their buckets toward them like a dragline rather than pushing them away to fill them the way the first powered shovels did.
Terminology
Excavators are also called diggers, scoopers, mechanical shovels, or 360-degree excavators (sometimes abbreviated simply to "360"). Tracked excavators are sometimes called "trackhoes" by analogy to the backhoe. In the UK, wheeled excavators are sometimes known as "rubber ducks".
Usage
Excavators are used in many ways:
Digging of trenches, holes, foundations
Material handling
Brush cutting with hydraulic saw, mower, and stump removal attachments
Forestry work
Forestry mulching
Demolition with hydraulic claw, cutter and breaker attachments
Mining, especially, but not only open-pit mining
River dredging
Hydro excavation to access fragile underground infrastructure using high pressure water
Driving piles, in conjunction with a pile driver
Drilling shafts for footings and rock blasting, by use of an auger or hydraulic drill attachment
Snow removal with snowplow and snow blower attachments
Aircraft recycling
Configurations
Modern hydraulic excavators come in a wide variety of sizes. The smaller ones are called mini or compact excavators. For example, Caterpillar's smallest mini-excavator weighs and has 13 hp; their largest model is the largest excavator available (developed and produced by Orenstein & Koppel, Germany, until its takeover in 2011 by Caterpillar, and originally named »RH400«), the CAT 6090, which weighs in excess of , has 4500 hp, and has a bucket as large as 52.0 m3.
Hydraulic excavators usually couple engine power to (commonly) three hydraulic pumps rather than to mechanical drivetrains. The two main pumps supply oil at high pressure (up to 5000 psi, 345 bar) for the arms, swing motor, track motors and accessories while the third is a lower pressure (≈700 psi, 48 bar) pump for pilot control of the spool valves; this third circuit allows for reduced physical effort when operating the controls. Generally, the 3 pumps used in excavators consist of 2 variable displacement piston pumps and a gear pump. The arrangement of the pumps in the excavator unit changes with different manufacturers using different formats.
The three main sections of an excavator are the undercarriage, the house and the arm. The boom, the front part that is attached to the house and holds the arm, is sometimes counted as a fourth section. The undercarriage includes tracks, track frame, and final drives, which have a hydraulic motor and gearing providing the drive to the individual tracks. The undercarriage, particularly on a mini-excavator, can also have a blade similar to that of a bulldozer. The house includes the operator cab, counterweight, engine, fuel and hydraulic oil tanks. The house attaches to the undercarriage by way of a center pin. High-pressure oil is supplied to the tracks' hydraulic motors through a hydraulic swivel at the axis of the pin, allowing the machine to slew 360° unhindered and thus providing the left-and-right movement. The arm provides the up-and-down and closer-and-further (digging) movements. Arms typically consist of a boom, stick and bucket with three joints between them and the house.
The boom attaches to the house and provides the up-and-down movement. It can be one of several different configurations:
Most common are mono booms; these have no movement apart from straight up and down.
Some others have a knuckle boom which can also move left and right in line with the machine.
Another option is a hinge at the base of the boom allowing it to hydraulically pivot up to 180° independent to the house; however, this is generally available only to compact excavators.
Variable angle booms have additional joint in the middle of the boom to change the curvature of the boom. These are also called triple-articulated booms (TAB) or 3 piece booms.
Attached to the end of the boom is the stick (or dipper arm). The stick provides the digging movement needed to pull the bucket through the ground. The stick length is optional depending whether reach (longer stick) or break-out power (shorter stick) is required. Most common is mono stick but there are also, for example, telescopic sticks. The largest form ever of an excavator, the dragline excavator, eliminated the dipper in favor of a line and winch.
On the end of the stick is usually a bucket. A wide, large capacity (mud) bucket with a straight cutting edge is used for cleanup and levelling or where the material to be dug is soft, and teeth are not required. A general purpose (GP) bucket is generally smaller, stronger, and has hardened side cutters and teeth used to break through hard ground and rocks. Buckets have numerous shapes and sizes for various applications. There are also many other attachments that are available to be attached to the excavator for boring, ripping, crushing, cutting, lifting, etc. Attachments can be attached with pins similar to other parts of the arm or with some variety of quick coupler. Excavators in Scandinavia often feature a tiltrotator, which allows attachments to rotate 360 degrees and tilt +/- 45 degrees, increasing the flexibility and precision of the excavator.
Before the 1990s, all excavators had a long or conventional counterweight that hung off the rear of the machine to provide more digging force and lifting capacity. This became a nuisance when working in confined areas. In 1993 Yanmar launched the world's first Zero Tail Swing excavator, which allows the counterweight to stay inside the width of the tracks as it slews, thus being safer and more user friendly when used in a confined space. This type of machine is now widely used throughout the world.
There are two main types of control configuration used in excavators to control the boom and bucket, each distributing the four primary digging functions across two x-y joysticks. This allows a skilled operator to control all four functions simultaneously. The most popular configuration in the US is the SAE controls configuration while in other parts of the world, the ISO control configuration is more common. Some manufacturers such as Takeuchi have switches that allow the operator to select which control configuration to use.
Excavator attachments
Hydraulic excavators now perform tasks well beyond bucket excavation. With the advent of hydraulic-powered attachments such as breakers, cutters, grapples, augers, crushers and screening buckets, the excavator is frequently used in many applications other than excavation. Many excavators feature a quick coupler for simplified attachment mounting, increasing the machine's utilization on the jobsite. Excavators are usually employed together with loaders and bulldozers. Most wheeled, compact and some medium-sized (11 to 18-tonne) excavators have a backfill (or dozer) blade. This is a horizontal bulldozer-like blade attached to the undercarriage and is used for leveling and pushing removed material back into a hole.
Notable manufacturers
Current manufacturers
As of July 2021, current excavator manufacturers include:
See also
Types of excavator
Crawler Excavator
Compact excavator
Dragline excavator
Long reach excavator
Amphibious excavator
Power shovel
Steam shovel
Suction excavator
Walking excavator
Bucket-wheel excavator
Other
Bulldozer
Civil engineering
Feller buncher
Heavy equipment
Loader
Mining simulation
Tractor
Skid-steer loader
SAE controls
References
External links
Excavators
Excavating equipment
Heavy equipment
Mining equipment
Tracked vehicles | Excavator | [
"Engineering"
] | 1,766 | [
"Engineering vehicles",
"Excavating equipment",
"Mining equipment"
] |
312,910 | https://en.wikipedia.org/wiki/Chaff | Chaff (; ) is dry, scale-like plant material such as the protective seed casings of cereal grains, the scale-like parts of flowers, or finely chopped straw. Chaff cannot be digested by humans, but it may be fed to livestock, ploughed into soil, or burned.
Etymology
"Chaff" comes from Middle English , from Old English , related to Old High German , "husk".
Grain chaff
In grasses (including cereals such as rice, barley, oats, and wheat), the ripe seed is surrounded by thin, dry, scaly bracts (called glumes, lemmas, and paleas), forming a dry husk (or hull) around the grain. Once it is removed, it is often referred to as chaff.
In wild cereals and in the primitive domesticated einkorn, emmer and spelt wheats, the husks enclose each seed tightly. Before the grain can be used, the husks must be removed.
The process of loosening the chaff from the grain so as to remove it is called "threshing" – traditionally done by milling or pounding. Separating remaining loose chaff from the grain is called "winnowing" – traditionally done by repeatedly tossing the grain up into a light wind, which gradually blows the lighter chaff away. This method typically uses a broad, plate-shaped basket or similar receptacle to hold and collect the winnowed grain as it falls back down.
Domesticated grains such as durum and common wheat have been bred to have chaff that is easily removed. These varieties are known as "free-threshing" or "naked".
Chaff should not be confused with bran, which is a finer, scaly material that is part of the grain itself.
Straw chaff
Chaff is also made by chopping straw (or sometimes coarse hay) into very short lengths, using a machine called a chaff cutter. Like grain chaff, it is used as animal feed and is a way of making coarse fodder more palatable for livestock.
Coffee chaff
Coffee chaff is produced from the so-called silverskin, the thin inner parchment layer on dried coffee beans, which comes off in the process of roasting the beans.
Botany
In botany, chaff refers to the thin receptacular bracts of many species in the sunflower family Asteraceae and related families. They are modified scale-like leaves surrounding single florets in the flower-head.
Metaphor
Chaff as a waste product from grain processing leads to a metaphorical use of the term, to refer to something seen as worthless. In the Bible, such use is found in Job 13:25, Isaiah 33:11, Psalm 83:13-15, and other places. Chaff also lends its name to a radar countermeasure, composed of small particles dropped from an aircraft.
Use
Hungarian engineer László Schremmer has discovered that the use of chaff-based filters can reduce the arsenic content of water to 3 microgram/litre. This is especially important in areas where the potable water is provided by filtering the water extracted from an underground aquifer.
See also
Awn (botany)
Bran
Biomass
Combine harvester
Rice hulls
Rice huller
Sifting
References
Plant morphology
Fodder
Waste
ca:Espícula#Glumel·les | Chaff | [
"Physics",
"Biology"
] | 711 | [
"Plants",
"Plant morphology",
"Materials",
"Waste",
"Matter"
] |
312,937 | https://en.wikipedia.org/wiki/Fusarium%20oxysporum | Fusarium oxysporum (Schlecht as emended by Snyder and Hansen), an ascomycete fungus, comprises all the species, varieties and forms recognized by Wollenweber and Reinking within an infrageneric grouping called section Elegans. It is part of the family Nectriaceae.
Although their predominant role in native soils may be as harmless or even beneficial plant endophytes or soil saprophytes, many strains within the F. oxysporum complex are soil borne pathogens of plants, especially in agricultural settings.
Taxonomy
While the species, as defined by Snyder and Hansen, has been widely accepted for more than 50 years, more recent work indicates this taxon is actually a genetically heterogeneous polytypic morphospecies, whose strains represent some of the most abundant and widespread microbes of the global soil microflora.
Genome
The family of transposable elements was first discovered by Daboussi et al., 1992 in several formae speciales and Davière et al., 2001 and Langin et al., 2003 have since found them in most strains at copy numbers as high as 100.
Habitat
These diverse and adaptable fungi have been found in soils ranging from the Sonoran Desert, to tropical and temperate forest, grasslands and soils of the tundra. F. oxysporum strains are ubiquitous soil inhabitants that have the ability to exist as saprophytes, and degrade lignin and complex carbohydrates associated with soil debris. They are pervasive plant endophytes that can colonize plant roots and may even protect plants or form the basis of disease suppression.
Because the hosts of a given forma specialis usually are closely related, many have assumed that members of a forma specialis are also closely related and descended from a common ancestor. However, results from research conducted on Fusarium oxysporum f. sp. cubense forced scientists to question these assumptions. Researchers used anonymous, single-copy restriction fragment length polymorphisms (RFLPs) to identify 10 clonal lineages from a collection of F. oxysporum f.sp. cubense from across the world. These results showed that pathogens of banana causing Panama disease could be as closely related to pathogens of other hosts, such as melon or tomato, as they are to each other. Exceptional amounts of genetic diversity within F. oxysporum f.sp. cubense were deduced from the high level of chromosomal polymorphisms found among strains, random amplified polymorphic DNA fingerprints and from the number and geographic distribution of vegetative compatibility groups.
Pathogen
Presented with the wide-ranging occurrence of F. oxysporum strains that are nonpathogenic, it is reasonable to conclude that certain pathogenic forms were descended from originally nonpathogenic ancestors. Given the association of these fungi with plant roots, a form that is able to grow beyond the cortex and into the xylem could exploit this ability and hopefully gain an advantage over fungi that are restricted to the cortex.
The progression of a fungus into vascular tissue may elicit an immediate host response, successfully restricting the invader; or an otherwise ineffective or delayed response, reducing the vital water-conducting capacity and induce wilting. On the other hand, the plant might be able to tolerate limited growth of the fungus within xylem vessels, preceded by an endophytic association. In this case, any further changes in the host or parasite could disturb the relationship, in a way that fungal activities or a host response would result in the generation of disease symptoms.
Pathogenic strains of F. oxysporum have been studied for more than 100 years. The host range of these fungi is broad and includes animals, ranging from arthropods to humans, as well as plants, including a range of both gymnosperms and angiosperms. While collectively, plant pathogenic F. oxysporum strains have a broad host range, individual isolates usually cause disease only in a narrow range of plant species. This observation has led to the idea of "special form" or forma specialis in F. oxysporum. Formae speciales have been defined as "…an informal rank in Classification… used for parasitic fungi characterized from a physiological standpoint (e.g. by the ability to cause disease in particular hosts) but scarcely or not at all from a morphological standpoint." Exhaustive host range studies have been conducted for relatively few formae speciales of F. oxysporum. For more information on Fusarium oxysporum as a plant pathogen, see Fusarium wilt and Koa wilt.
Different strains of F. oxysporum have been used to produce nanomaterials (especially silver nanoparticles).
"Agent Green" in Colombia
In 2000, the government of Colombia proposed dispersing strains of Crivellia and Fusarium oxysporum, also known as Agent Green, as a biological weapon to forcibly eradicate coca and other illegal crops. The weaponized strains were developed by the US government, who originally conditioned their approval of Plan Colombia on the use of this weapon, but ultimately withdrew the condition. In February 2001, the EU Parliament issued a declaration specifically against the use of these biological agents in warfare.
Gold interactions
The fungus has the ability to dissolve gold, then precipitate it onto its surface, encrusting itself with gold. This phenomenon was first observed in Boddington, West Australia. As a result of this discovery, F. oxysporum is currently being evaluated as a possible way to help detect hidden underground gold reserves. It also is used to manufacture gold nanoparticles.
Formae speciales
Fusarium oxysporum f.sp. albedinis
Fusarium oxysporum f.sp. asparagi
Fusarium oxysporum f.sp. batatas
Fusarium oxysporum f.sp. betae
Fusarium oxysporum f.sp. cattleyae
Fusarium oxysporum f.sp. cannabis
Fusarium oxysporum f.sp. cepae
Fusarium oxysporum f.sp. ciceris
Fusarium oxysporum f.sp. citri
Fusarium oxysporum f.sp. coffea
Fusarium oxysporum f.sp. cubense
Fusarium oxysporum f.sp. cyclaminis
Fusarium oxysporum f.sp. herbemontis
Fusarium oxysporum f.sp. dianthi
Fusarium oxysporum f. sp. fragariae
Fusarium oxysporum f.sp. gladioli
Fusarium oxysporum f.sp. koae
Fusarium oxysporum f.sp. lactucae
Fusarium oxysporum f.sp. lentis
Fusarium oxysporum f.sp. lilli
Fusarium oxysporum f.sp. lini
Fusarium oxysporum f.sp. lycopersici
Fusarium oxysporum f.sp. medicaginis
Fusarium oxysporum f.sp. melonis
Fusarium oxysporum f.sp. momordicae
Fusarium oxysporum f.sp. nicotianae
Fusarium oxysporum f.sp. niveum
Fusarium oxysporum f.sp. palmarum
Fusarium oxysporum f.sp. passiflorae
Fusarium oxysporum f.sp. perniciosum
Fusarium oxysporum f.sp. phaseoli
Fusarium oxysporum f.sp. pisi
Fusarium oxysporum f.sp. radicis-lycopersici
Fusarium oxysporum f.sp. ricini
Fusarium oxysporum f.sp. strigae
Fusarium oxysporum f.sp. tuberosi
Fusarium oxysporum f.sp. tulipae
Fusarium oxysporum f.sp. vasinfectum
See also
Mycoherbicide
References
oxysporum
Plant pathogens and diseases
Agricultural soil science
Soil biology | Fusarium oxysporum | [
"Biology"
] | 1,750 | [
"Plant pathogens and diseases",
"Soil biology",
"Plants"
] |
312,943 | https://en.wikipedia.org/wiki/Blacklight | A blacklight, also called a UV-A light, Wood's lamp, or ultraviolet light, is a lamp that emits long-wave (UV-A) ultraviolet light and very little visible light. One type of lamp has a violet filter material, either on the bulb or in a separate glass filter in the lamp housing, which blocks most visible light and allows through UV, so the lamp has a dim violet glow when operating. Blacklight lamps which have this filter have a lighting industry designation that includes the letters "BLB". This stands for "blacklight blue". A second type of lamp produces ultraviolet but does not have the filter material, so it produces more visible light and has a blue color when operating. These tubes are made for use in "bug zapper" insect traps, and are identified by the industry designation "BL". This stands for "blacklight".
Blacklight sources may be specially designed fluorescent lamps, mercury-vapor lamps, light-emitting diodes (LEDs), lasers, or incandescent lamps. In medicine, forensics, and some other scientific fields, such a light source is referred to as a Wood's lamp, named after Robert Williams Wood, who invented the original Wood's glass UV filters.
Although many other types of lamp emit ultraviolet light with visible light, blacklights are essential when UV-A light without visible light is needed, particularly in observing fluorescence, the colored glow that many substances emit when exposed to UV. They are employed for decorative and artistic lighting effects, diagnostic and therapeutic uses in medicine, the detection of substances tagged with fluorescent dyes, rock-hunting, scorpion-hunting, the detection of counterfeit money, the curing of plastic resins, attracting insects and the detection of refrigerant leaks affecting refrigerators and air conditioning systems. Strong sources of long-wave ultraviolet light are used in tanning beds.
Medical hazard
UV-A presents a potential hazard when eyes and skin are exposed, especially to high power sources. According to the World Health Organization, UV-A is responsible for the initial tanning of skin and it contributes to skin ageing and wrinkling. UV-A may also contribute to the progression of skin cancers. Additionally, UV-A can have negative effects on eyes in both the short-term and long-term.
Types
Fluorescent
Fluorescent blacklight tubes are typically made in the same fashion as normal fluorescent tubes except that a phosphor that emits UVA light instead of visible white light is used on the inside of the tube. The type most commonly used for blacklights, designated blacklight blue or "BLB" by the industry, has a dark blue filter coating on the tube, which filters out most visible light, so that fluorescence effects can be observed. These tubes have a dim violet glow when operating. They should not be confused with "blacklight" or "BL" tubes, which have no filter coating, and have a brighter blue color. These are made for use in "bug zapper" insect traps where the emission of visible light does not interfere with the performance of the product. The phosphor typically used for a near 368 to 371 nanometer emission peak is either europium-doped strontium fluoride or europium-doped strontium borate, while the phosphor used to produce a peak around 350 to 353 nanometres is lead-doped barium silicate. "Blacklight blue" lamps peak at 365 nm.
Manufacturers use different numbering systems for blacklight tubes. Philips' is becoming outdated (as of 2010), while the (German) Osram system is becoming dominant outside North America. The following table lists the tubes generating blue, UVA and UVB, in order of decreasing wavelength of the most intense peak. Approximate phosphor compositions, major manufacturer's type numbers and some uses are given as an overview of the types available. "Peak" position is approximated to the nearest 10 nm. "Width" is the measure between points on the shoulders of the peak that represent 50% intensity.
Bug zappers
Another class of UV fluorescent bulb is designed for use in bug zappers. Insects are attracted to the UV light, which they are able to see, and are then electrocuted by the device. These bulbs use the same UV-A emitting phosphor blend as the filtered blacklight, but since they do not need to suppress visible light output, they do not use a purple filter material in the bulb. Plain glass blocks out less of the visible mercury emission spectrum, making them appear light blue-violet to the naked eye. These lamps are referred to by the designation "blacklight" or "BL" in some North American lighting catalogs. These types are not suitable for applications which require the low visible light output of "BLB" lamps.
Incandescent
A blacklight may also be formed by simply using a UV filter coating such as Wood's glass on the envelope of a common incandescent bulb. This was the method that was used to create the very first blacklight sources. Although incandescent bulbs are a cheaper alternative to fluorescent tubes, they are exceptionally inefficient at producing UV light since most of the light emitted by the filament is visible light which must be blocked. Due to its black body spectrum, an incandescent light radiates less than 0.1% of its energy as UV light. Incandescent UV bulbs, due to the necessary absorption of the visible light, become very hot during use. This heat is, in fact, encouraged in such bulbs, since a hotter filament increases the proportion of UVA in the black-body radiation emitted. This high running-temperature reduces the life of the lamp from a typical 1,000 hours to around 100 hours.
Mercury vapor
High-power mercury vapor blacklight lamps are made in power ratings of 100 to 1,000 watts. These do not use phosphors, but rely on the intensified and slightly broadened 350–375 nm spectral line of mercury from high pressure discharge at between , depending upon the specific type. These lamps use envelopes of Wood's glass or similar optical filter coatings to block out all the visible light and also the short wavelength (UVC) lines of mercury at 184.4 and 253.7 nm, which are harmful to the eyes and skin. A few other spectral lines, falling within the pass band of the Wood's glass between 300 and 400 nm, contribute to the output.
These lamps are used mainly for theatrical purposes and concert displays. They are more efficient UVA producers per unit of power consumption than fluorescent tubes.
LED
Ultraviolet light can be generated by some light-emitting diodes, but wavelengths shorter than 380 nm are uncommon, and the emission peaks are broad, so only the very lowest-energy UV photons are emitted, accompanied by a significant amount of visible light.
Safety
Although blacklights produce light in the UV range, their spectrum is mostly confined to the longwave UVA region, that is, UV radiation nearest in wavelength to visible light, with low frequency and therefore relatively low energy. Although the amount is low, a conventional blacklight still emits some power in the UVB range. UVA is the safest of the three spectra of UV light, although high exposure to UVA has been linked to the development of skin cancer in humans. The relatively low energy of UVA light does not cause sunburn. It can damage collagen fibers, so may accelerate skin aging and cause wrinkles. It can also degrade vitamin A in the skin.
UVA light has been shown to cause DNA damage, but not directly, like UVB and UVC. Due to its longer wavelength, it is absorbed less and reaches deeper into skin layers, where it produces reactive chemical intermediates such as hydroxyl and oxygen radicals, which in turn can damage DNA and result in a risk of melanoma. The weak output of blacklights is not sufficient to cause DNA damage or cellular mutations in the way that direct summer sunlight can, although there are reports that overexposure to the type of UV radiation used for creating artificial suntans on sunbeds can cause DNA damage, photo-aging (damage to the skin from prolonged exposure to sunlight), toughening of the skin, suppression of the immune system, cataract formation and skin cancer.
UV-A can have negative effects on eyes in both the short-term and long-term.
Uses
Ultraviolet radiation is invisible to the human eye, but illuminating certain materials with UV radiation causes the emission of visible light, causing these substances to glow with various colors. This is called fluorescence, and has many practical uses. Blacklights are required to observe fluorescence, since other types of ultraviolet lamps emit visible light which drowns out the dim fluorescent glow.
Medical applications
A Wood's lamp is a diagnostic tool used in dermatology by which ultraviolet light is shone (at a wavelength of approximately 365 nanometers) onto the skin of the patient; a technician then observes any subsequent fluorescence. For example, porphyrins—associated with some skin diseases—will fluoresce pink. Though the technique for producing a source of ultraviolet light was devised by Robert Williams Wood in 1903 using "Wood's glass", it was in 1925 that the technique was used in dermatology by Margarot and Deveze for the detection of fungal infection of hair. It has many uses, both in distinguishing fluorescent conditions from other conditions and in locating the precise boundaries of the condition.
Fungal and bacterial infections
It is also helpful in diagnosing:
Fungal infections. Some forms of tinea, such as Trichophyton tonsurans, do not fluoresce.
Bacterial infections
Corynebacterium minutissimum is coral red
Pseudomonas is yellow-green
Cutibacterium acnes, a bacterium involved in acne causation, exhibits an orange glow under a Wood's lamp.
Ethylene glycol poisoning
A Wood's lamp may be used to rapidly assess whether an individual is suffering from ethylene glycol poisoning as a consequence of antifreeze ingestion. Manufacturers of ethylene glycol-containing antifreezes commonly add fluorescein, which causes the patient's urine to fluoresce under Wood's lamp.
Diagnosis
Wood's lamp is useful in diagnosing conditions such as tuberous sclerosis and erythrasma (caused by Corynebacterium minutissimum, see above). Additionally, detection of porphyria cutanea tarda can sometimes be made when urine turns pink upon illumination with Wood's lamp. Wood's lamps have also been used to differentiate hypopigmentation from depigmentation such as with vitiligo. A vitiligo patient's skin will appear yellow-green or blue under the Wood's lamp. Its use in detecting melanoma has been reported.
Security and authentication
Blacklight is commonly used to authenticate oil paintings, antiques and banknotes. It can also differentiate real currency from counterfeit notes because, in many countries, legal banknotes have fluorescent symbols on them that only show under a blacklight. In addition, the paper used for printing money does not contain any of the brightening agents which cause commercially available papers to fluoresce under blacklight. Both of these features make illegal notes easier to detect and more difficult to successfully counterfeit. The same security features can be applied to identification cards such as passports or driver's licenses.
Other security applications include the use of pens containing a fluorescent ink, generally with a soft tip, that can be used to "invisibly" mark items. If the objects that are so marked are subsequently stolen, a blacklight can be used to search for these security markings. At some amusement parks, nightclubs and at other, day-long (or night-long) events, a fluorescent mark is rubber stamped onto the wrist of a guest who can then exercise the option of leaving and being able to return again without paying another admission fee.
Biology
Fluorescent materials are also very widely used in numerous applications in molecular biology, often as "tags" which bind themselves to a substance of interest (for example, DNA), so allowing their visualization.
Thousands of moth and insect collectors all over the world use various types of blacklights to attract moth and insect specimens for photography and collecting. It is one of the preferred light sources for attracting insects and moths at night. They can illuminate animal excreta, such as urine and vomit, that is not always visible to the naked eye.
Fault detection
Blacklight is used extensively in non-destructive testing. Fluorescing fluids are applied to metal structures and illuminated, allowing easy detection of cracks and other weaknesses.
If a leak is suspected in a refrigerator or an air conditioning system, a UV tracer dye can be injected into the system along with the compressor lubricant oil and refrigerant mixture. The system is then run in order to circulate the dye across the piping and components and then the system is examined with a blacklight lamp. Any evidence of fluorescent dye then pinpoints the leaking part which needs replacement.
Art and decor
Blacklight is used to illuminate pictures painted with fluorescent colors, particularly on black velvet, which intensifies the illusion of self-illumination. The use of such materials, often in the form of tiles viewed in a sensory room under UV light, is common in the United Kingdom for the education of students with profound and multiple learning difficulties. Such fluorescence from certain textile fibers, especially those bearing optical brightener residues, can also be used for recreational effect, as seen, for example, in the opening credits of the James Bond film A View to a Kill. Blacklight puppetry is performed in a blacklight theater.
Mineral identification
Blacklights are a common tool for rock-hunting and identification of minerals by their fluorescence. The most common minerals and rocks that glow under UV light are fluorite, calcite, aragonite, opal, apatite, chalcedony, corundum (ruby and sapphire), scheelite, selenite, smithsonite, sphalerite, and sodalite. The first person to observe fluorescence in minerals was George Stokes in 1852. He noted the ability of fluorite to produce a blue glow when illuminated with ultraviolet light and called this phenomenon “fluorescence” after the mineral fluorite. Lamps used to visualise seams of fluorite and other fluorescent minerals are commonly used in mines, but they tend to be on an industrial scale. To be useful for this purpose, the lamps need to emit short-wavelength UV and be of scientific grade. Hand-held UV lamps of this kind, such as the UVP range, are used by geologists to identify the best sources of fluorite in mines or potential new mines. Some transparent selenite crystals exhibit an “hourglass” pattern under UV light that is not visible in natural light. These crystals are also phosphorescent. Limestone, marble, and travertine can glow because of the calcite they contain. Granite, syenite, and granitic pegmatite rocks can also glow.
Curing resins
UV light can be used to harden particular glues, resins and inks by causing a photochemical reaction inside those substances. This process of hardening is called ‘curing’. UV curing is adaptable to printing, coating, decorating, stereolithography, and in the assembly of a variety of products and materials. In comparison to other technologies, curing with UV energy may be considered a low-temperature process, a high-speed process, and is a solventless process, as cure occurs via direct polymerization rather than by evaporation. Originally introduced in the 1960s, this technology has streamlined and increased automation in many industries in the manufacturing sector. A primary advantage of curing with ultraviolet light is the speed at which a material can be processed. Speeding up the curing or drying step in a process can reduce flaws and errors by decreasing time that an ink or coating spends wet. This can increase the quality of a finished item, and potentially allow for greater consistency. Another benefit to decreasing manufacturing time is that less space needs to be devoted to storing items which can not be used until the drying step is finished.
Because UV energy has unique interactions with many different materials, UV curing allows for the creation of products with characteristics not achievable via other means. This has led to UV curing becoming fundamental in many fields of manufacturing and technology, where changes in strength, hardness, durability, chemical resistance, and many other properties are required.
Cockpit lighting, LSD testing and tanning
One of the innovations for night and all-weather flying used by the US, UK, Japan and Germany during World War II was the use of UV interior lighting to illuminate the instrument panel, giving a safer alternative to the radium-painted instrument faces and pointers, and an intensity that could be varied easily and without visible illumination that would give away an aircraft's position. This went so far as to include the printing of charts that were marked in UV-fluorescent inks, and the provision of UV-visible pencils and slide rules such as the E6B.
They may also be used to test for LSD, which fluoresces under blacklight while common substitutes such as 25I-NBOMe do not.
Strong sources of long-wave ultraviolet light are used in tanning beds.
See also
Blacklight poster
List of light sources
Footnotes
References
External links
http://mississippientomologicalmuseum.org.msstate.edu/collecting.preparation.methods/Blacklight.traps.htm
American inventions
Articles containing video clips
Luminescence
Types of lamp
Ultraviolet radiation | Blacklight | [
"Physics",
"Chemistry"
] | 3,690 | [
"Luminescence",
"Molecular physics",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Ultraviolet radiation"
] |
312,976 | https://en.wikipedia.org/wiki/N%C3%BCwa | Nüwa, also read Nügua, is a mother goddess, culture hero, and/or member of the Three Sovereigns of Chinese mythology. She is a goddess in Chinese folk religion, Chinese Buddhism, Confucianism and Taoism. She is credited with creating humanity and repairing the Pillar of Heaven.
As creator of mankind, she molded humans individually by hand with yellow clay. In other stories where she fulfills this role, she only created nobles and/or the rich out of yellow soil. The stories vary on the other details about humanity's creation, but it was a tradition commonly believed in ancient China that she created commoners from brown mud. A story holds that she was tired when she created "the rich and the noble", so all others, or "cord-made people", were created from her "dragg[ing] a string through mud".
In the Huainanzi, there is a description of a great battle between deities that broke the pillars supporting Heaven and caused great devastation. There was great flooding, and Heaven had collapsed. Nüwa was the one who patched the holes in Heaven with five colored stones, and she used the legs of a tortoise to mend the pillars.
There are many instances of her in literature across China which detail her in creation stories, and today, she remains a figure important to Chinese culture. She is one of the most venerated Chinese goddesses alongside Guanyin and Mazu.
In Chinese mythology, the goddess Nüwa is a legendary progenitor of all human beings. She also creates a magic stone. Her husband Fu Xi is suggested to be the progenitor of divination and the patron saint of numbers.
Name
The character nü () is a common prefix on the names of goddesses. The proper name is wa, also read as gua (). The Chinese character is unique to this name. Birrell translates it as 'lovely', but notes that it "could be construed as 'frog'", which is consistent with her aquatic myth. In Chinese, the word for 'whirlpool' is wo (), which shares the same pronunciation as the word for 'snail' (). These characters all have their right side built from the word wa (), which can be translated as 'spiral' or 'helix' as a noun, and as 'spin' or 'rotate' as a verb, describing a helical movement. This mythical meaning has also been pictured symbolically as the compasses held in her hand, which can be seen in many paintings and portraits associated with her.
Her reverential name is Wahuang ().
Description
The Huainanzi relates Nüwa to the time when Heaven and Earth were in disruption:
The catastrophes were supposedly caused by the battle between the deities Gonggong and Zhuanxu (an event that was mentioned earlier in the Huainanzi), the five-colored stones symbolize the five Chinese elements (wood, fire, earth, metal, and water), the black dragon was the essence of water and thus cause of the floods, Ji Province serves metonymically for the central regions (the Sinitic world). Following this, the Huainanzi tells about how the sage-rulers Nüwa and Fuxi set order over the realm by following the Way () and its potency ().
The Classic of Mountains and Seas, dated between the Warring States period and the Han dynasty, describes Nüwa's intestines as being scattered into ten spirits.
In Liezi (c. 475 – 221 BC), Chapter 5 "Questions of Tang" (), author Lie Yukou describes Nüwa repairing the original imperfect heaven using five-colored stones, and cutting the legs off a tortoise to use as struts to hold up the sky.
In Songs of Chu (c. 340 – 278 BC), Chapter 3 "Asking Heaven" (), author Qu Yuan writes that Nüwa molded figures from the yellow earth, giving them life and the ability to bear children. After demons fought and broke the pillars of the heavens, Nüwa worked unceasingly to repair the damage, melting down the five-coloured stones to mend the heavens.
In Shuowen Jiezi (c. 58 – 147 AD), China's earliest dictionary, under the entry for Nüwa author Xu Shen describes her as being both the sister and the wife of Fuxi. Nüwa and Fuxi were pictured as having snake-like tails interlocked in an Eastern Han dynasty mural in the Wuliang Temple in Jiaxiang county, Shandong province.
In Duyi Zhi (; c. 846 – 874 AD), Volume 3, author Li Rong gives this description.
There are stories that have her as the "consort" of Fuxi rather than his sister.
In Yuchuan Ziji ( c. 618 – 907 AD), Chapter 3 (), author Lu Tong describes Nüwa as the wife of Fuxi.
In Siku Quanshu, Sima Zhen (679–732) provides commentary on the prologue chapter to Sima Qian's Shiji, "Supplemental to the Historic Record: History of the Three August Ones", wherein it is found that the Three August Ones are Nüwa, Fuxi, and Shennong; Fuxi and Nüwa have the same last name, Feng (; Hmong: Faj).
In the collection Four Great Books of Song (c. 960 – 1279 AD), compiled by Li Fang and others, Volume 78 of the book Imperial Readings of the Taiping Era contains a chapter "Customs by Yingshao of the Han Dynasty" in which it is stated that there were no men when the sky and the earth were separated. Thus Nüwa used yellow clay to make people. But the clay was not strong enough so she put ropes into the clay to make the bodies erect. It is also said that she prayed to gods to let her be the goddess of marital affairs. Variations of this story exist.
In Ming dynasty myths about the transition from the Shang dynasty to the Zhou dynasty, Nüwa made evil decisions that ultimately benefited China, such as sending a fox spirit to encourage the debauchery of King Zhou, which led to him being deposed. Other tales have her and Fuxi as exclusively the "great gentle protectors of humanity" unwilling to use subterfuge.
Nüwa and Fuxi were also thought to be gods of silk.
Iconography of Fuxi and Nüwa
The iconography of Fuxi and Nüwa varies in physical appearance depending on the time period and also shows regional differences. In Chinese tomb murals and iconography, Fuxi and Nüwa generally have snake-like bodies and a human face or head.
Nüwa is often depicted holding a compass or multiple compasses, which were a traditional Chinese symbol of a dome-like sky. She was also thought to be an embodiment of the stars and the sky or a star god.
Fuxi and Nüwa can be depicted as individual figures arranged as a symmetrical pair or they can be depicted in double figures with intertwined snake-like bodies. Their snake-like tails can also be depicted stretching out towards each other. This is similar to the representation of Rahu and Ketu in Indian astrology.
Fuxi and Nüwa can also appear individually on separate tomb bricks. They generally hold or embrace sun or moon discs containing the image of a bird (or a three-legged crow) or a toad (sometimes a hare), which are the sun and moon symbols respectively, and/or each holds a try square or a pair of compasses, or a longevity mushroom () plant. The motif of Fuxi and Nüwa holding the sun and the moon appears as early as the late Western Han dynasty. Other variations in physical appearance also exist, such as in the shape of the lower snake-like body (e.g. thick vs thin tails), in depictions of legs (i.e. legs found along the snake-like body) and wings (e.g. wings with feathers which protrude from their backs as found in the late Western Han Xinan (新安) Tomb, or smaller quills found on their shoulders), and in hats and hairstyles.
In the Luoyang regions murals dating to the late Western Han dynasty, Fuxi and Nüwa are generally depicted as individual figures, each one found at each side of the central ridge of tomb chambers as found in the Bu Qianqiu Tomb. They can also be found without intertwining tails from the stone murals of the same period. Since the middle of the Eastern Han dynasty, their tails started to intertwine.
In the Gansu murals dating to the Wei and Western Jin period, one of the most typical features of Fuxi is the "mountain-hat" () which looks like a three-peaked cap while Nüwa is depicted wearing various hairstyles characteristic of Han women. Both deities dressed in wide-sleeved clothing, which reflects typical Han clothing style also commonly depicted in Han dynasty art.
Legends
Appearance in Fengshen Yanyi
Nüwa is featured within the famed Ming dynasty novel Fengshen Bang. In this novel, Nüwa has been revered since the Xia dynasty for creating the five-colored stones to mend the heavens, which tilted after Gonggong toppled one of the heavenly pillars, Mount Buzhou. Shang Rong asked King Zhou of Shang to pay her a visit as a sign of deep respect. Upon seeing her statue, Zhou was completely overcome with lust at the sight of the beautiful ancient goddess Nüwa. He wrote an erotic poem on a neighboring wall and took his leave. When Nüwa later returned to her temple after visiting the Yellow Emperor, she saw the foulness of Zhou's words. In her anger, she swore that the Shang dynasty would end in payment for his offense. In her rage, Nüwa personally ascended to the palace in an attempt to kill the king, but was suddenly struck back by two large beams of red light.
After Nüwa realized that King Zhou was already destined to rule the kingdom for twenty-six more years, Nüwa summoned her three subordinates—the Thousand-Year Vixen (later becoming Daji), the Jade Pipa, and the Nine-Headed Pheasant. With these words, Nüwa brought destined chaos to the Shang dynasty, "The luck Cheng Tang won six hundred years ago is dimming. I speak to you of a new mandate of heaven which sets the destiny for all. You three are to enter King Zhou's palace, where you are to bewitch him. Whatever you do, do not harm anyone else. If you do my bidding, and do it well, you will be permitted to reincarnate as human beings." With these words, Nüwa was never heard of again, but was still a major indirect factor towards the Shang dynasty's fall.
Creation of humanity
Pangu was said to be the creation god in Chinese mythology. He was a giant sleeping within an egg of chaos. As he awoke, he stood up and divided the sky and the earth. Pangu then died after standing up, and his body turned into rivers, mountains, plants, animals, and everything else in the world, among which is a powerful being known as Huaxu (華胥). Huaxu gave birth to a twin brother and sister, Fuxi and Nüwa. Fuxi and Nüwa are said to be creatures with human faces and the bodies of snakes.
Nüwa created humanity due to her loneliness, which grew more intense over time. She molded yellow earth or, in other versions, yellow clay into the shape of people. These individuals later became the wealthy nobles of society, because they had been created by Nüwa's own hands. However, the majority of humanity was created when Nüwa dragged string across mud to mass-produce them, which she did because creating every person by hand was too time- and energy-consuming. This creation story gives an aetiological explanation for the social hierarchy in ancient China. The nobility believed that they were more important than the mass-produced majority of humanity, because Nüwa took time to create them, and they had been directly touched by her hand. In another version of the creation of humanity, Nüwa and Fuxi were survivors of a great flood. By the command of the God of the heaven, they were married and Nüwa had a child which was a ball of meat. This ball of meat was cut into small pieces, and the pieces were scattered across the world, which then became humans.
Nüwa was born three months after her brother, Fuxi, whom she later took as her husband; this marriage is the reason why Nüwa is credited with inventing the idea of marriage.
Before the two of them got married, they lived on Mount K'un-lun. A prayer was made after the two felt guilty for falling in love with each other. The prayer is as follows,
"Oh Heaven, if Thou wouldst send us forth as man and wife, then make all the misty vapor gather. If not, then make all the misty vapor disperse."
Misty vapor then gathered after the prayer, signifying that the two could marry. When intimate, the two made a fan out of grass to screen their faces, which is why during modern-day marriages the couple hold a fan together. By connecting, the two were representative of Yin and Yang, with Fuxi connected to Yang and masculinity and Nüwa connected to Yin and femininity. This is further defined with Fuxi receiving a carpenter's square which symbolizes his identification with the physical world because a carpenter's square is associated with straight lines and squares leading to a more straightforward mindset. Meanwhile, Nüwa was given a compass to symbolize her identification with the heavens because a compass is associated with curves and circles leading to a more abstract mindset. Their marriage symbolized the union between heaven and Earth. Other versions have Nüwa invent the compass rather than receive it as a gift. In addition, the system of male and female sex, the yang-yin philosophy, is expressed here in a complex way: first as Fuxi and Nüwa, then as a compass (masculine) and a square (feminine), and thirdly, as Nüwa (woman) with a compass (man) and Fuxi (man) with a square (woman).
Nüwa Mends the Heavens
Nüwa Mends the Heavens () is a well-known theme in Chinese culture. The courage and wisdom of Nüwa inspired the ancient Chinese to control nature's elements, and the theme has become a favorite subject of Chinese poets, painters, and sculptors, appearing in many works such as novels, films, paintings, and sculptures; e.g. the sculptures that decorate Nanshan and Ya'an.
The Huainanzi tells an ancient story about how the four pillars that support the sky crumbled inexplicably. Other sources have tried to explain the cause, i.e. the battle between Gong Gong and Zhuanxu or Zhurong. Unable to accept his defeat, Gong Gong deliberately banged his head onto Mount Buzhou (不周山), which was one of the four pillars. Half of the sky fell, which created a gaping hole, and the Earth itself was cracked; the Earth's axis mundi was tilted into the southeast while the sky rose into the northwest. This is said to be the reason why the western region of China is higher than the eastern and why most of its rivers flow towards the southeast. This same explanation is applied to the Sun, Moon, and stars, which moved into the northwest. A wildfire burnt the forests and drove the wild animals to run amok and attack innocent people, while the water coming out of the crack in the earth showed no sign of slowing down.
Nüwa pitied the humans she had made and attempted to repair the sky. She gathered five colored stones (red, yellow, blue, black, and white) from the riverbed, melted them, and used them to patch up the sky; since then the sky (clouds) has been colorful. She then killed a giant turtle (or tortoise), named Ao in some versions, and cut off the creature's four legs to use as new pillars to support the sky. But Nüwa did not do the job perfectly, because the unequal length of the legs made the sky tilt. After the job was done, Nüwa drove away the wild animals, extinguished the fire, and controlled the flood with a huge amount of ashes from the burning reeds, and the world became as peaceful as it had been before.
Empress Nüwa
Many Chinese know well their Three Sovereigns and Five Emperors, i.e. the early leaders of humanity as well as culture heroes according to the Northern Chinese belief. But the lists vary and depend on the sources used. One version includes Nüwa as one of the Three Sovereigns, who reigned after Fuxi and before Shennong.
The myth of the Three Sovereigns sees the three as demigod figures, and the myth is used to stress the importance of an imperial reign. The variation between sources stems from China being generally divided before the Qin and Han dynasties, and the version with Fuxi, Shennong, and Nüwa was used to emphasize rule and structure.
In her matriarchal reign, she battled against a neighboring tribal chief, defeated him, and took him to the peak of a mountain. Defeated by a woman, the chief felt ashamed to be alive and banged his head on the heavenly bamboo to kill himself and take revenge. His act tore a hole in the sky and caused a flood to hit the whole world. The flood killed all people except Nüwa and her army, which was protected by her divinity. After that, Nüwa patched the sky with five colored stones until the flood receded.
Popular culture
The Ming dynasty fantasy novel Investiture of the Gods (1567) has Nüwa being an instigator of the Shang dynasty's collapse, as she sent the fox demon Daji to corrupt King Zhou for the latter verbally desecrating her statue at a temple.
The Qing dynasty novel Dream of the Red Chamber (1754) narrates how Nüwa gathered 36,501 stones to patch the sky but left one unused. The unused stone plays an important role in the novel's storyline.
A statue of the goddess Nüwa named Sky Patching by Yuan Xikun was exhibited at Times Square, New York City, on 19 April 2012 to celebrate Earth Day 2012, symbolizing the importance of protecting the ozone layer. The 3.9-meter-tall statue had previously been exhibited in Beijing and has been on display at the Vienna International Centre in Vienna since 21 November 2012.
The story of Nüwa patching the sky was retold by Carol Chen in her book Goddess Nuwa Patches Up the Sky (2014), illustrated by Meng Xianlong.
In Shin Megami Tensei 5, Nuwa (voiced by Ayana Taketatsu) is the partner to Shohei Yakumo (voiced by Tomokazu Sugita) as two of the main characters who aid the protagonist.
In the Gremlins animated series, Nuwa (voiced by Sandra Oh) is portrayed as the creator of the Mogwai species that Gizmo originated from and fell into a depression when the humans could not properly coexist with them.
See also
Flood Mythology of China
Explanatory notes
Citations
General bibliography
.
.
.
Further reading
External links
Three Sovereigns and Five Emperors
Investiture of the Gods characters
Journey to the West characters
Dream of the Red Chamber characters
Arts goddesses
Bodhisattvas
Buddhist goddesses
Deities in Chinese folk religion
Chinese goddesses
Creation myths
Creator goddesses
Marriage goddesses
Mother goddesses
Mythological queens
Snake goddesses
Sky supporters
Taoist deities
Legendary progenitors
Heroes in mythology and legend | Nüwa | [
"Astronomy"
] | 4,098 | [
"Cosmogony",
"Creation myths"
] |
313,009 | https://en.wikipedia.org/wiki/Drug%20overdose | A drug overdose (overdose or OD) is the ingestion or application of a drug or other substance in quantities much greater than are recommended. Typically the term is applied for cases when a risk to health is a potential result. An overdose may result in a toxic state or death.
Classification
The word "overdose" implies that there is a common safe dosage and usage for the drug; therefore, the term is commonly applied only to drugs, not poisons, even though many poisons as well are harmless at a low enough dosage. Drug overdose is sometimes used as a means to commit suicide, as the result of intentional or unintentional misuse of medication. Intentional misuse leading to overdose can include using prescribed or non-prescribed drugs in excessive quantities in an attempt to produce euphoria.
Usage of illicit drugs, in large quantities, or after a period of drug abstinence can also induce overdose. Cocaine and opioid users who inject intravenously can easily overdose accidentally, as the margin between a pleasurable drug sensation and an overdose is small. Unintentional misuse can include errors in dosage caused by failure to read or understand product labels. Accidental overdoses may also be the result of over-prescription, failure to recognize a drug's active ingredient or unwitting ingestion by children. A common unintentional overdose in young children involves multivitamins containing iron.
The term 'overdose' is often misused as a descriptor for adverse drug reactions or negative drug interactions due to mixing multiple drugs simultaneously.
Signs and symptoms
Signs and symptoms of an overdose vary depending on the drug or exposure to toxins. The symptoms can often be divided into differing toxidromes. This can help one determine what class of drug or toxin is causing the difficulties.
Symptoms of opioid overdoses include slow breathing, heart rate and pulse. Opioid overdoses can also cause pinpoint pupils, and blue lips and nails due to low levels of oxygen in the blood. A person experiencing an opioid overdose might also have muscle spasms, seizures and decreased consciousness. A person experiencing an opiate overdose usually will not wake up, even if their name is called or they are shaken vigorously.
Causes
The drugs or toxins that are most frequently involved in overdose and death (grouped by ICD-10):
Acute alcohol intoxication (F10)
Ethyl alcohol (alcohol)
Methanol poisoning
Ethylene glycol poisoning
Opioid overdose (F11)
Among sedative-hypnotics (F13)
Barbiturate overdose (T42.3)
Benzodiazepine overdose (T42.4)
Uncategorized sedative-hypnotics (T42.6)
Ethchlorvynol (Placidyl)
GHB
Glutethimide (Doriden)
Methaqualone
Ketamine (T41.2)
Among stimulants (F14-F15)
Cocaine overdose (T40.5)
Amphetamine overdose (T43.6)
Methamphetamine overdose (T43.6)
Among tobacco (F17)
Nicotine poisoning (T65.2)
Among poly drug use (F19)
Drug "cocktails" (speedballs)
Medications
Aspirin poisoning (T39.0)
Paracetamol poisoning (Alone or mixed with oxycodone)
Paracetamol toxicity (T39.1)
Tricyclic antidepressant overdose (T43.0)
Vitamin poisoning
Pesticide poisoning (T60)
Organophosphate poisoning
DDT
Inhalants
Lithium toxicity
Added flavoring
Masking undesired taste may impair judgement of the potency, which is a factor in overdosing. For example, lean is usually created as a drinkable mixture, the cough syrup is combined with soft drinks, especially fruit-flavored drinks such as Sprite, Mountain Dew or Fanta, and is typically served in a foam cup. A hard candy, usually a Jolly Rancher, may be added to give the mixture a sweeter flavor.
Diagnosis
The substance that has been taken may often be determined by asking the person. However, if they will not, or cannot, due to an altered level of consciousness, provide this information, a search of the home or questioning of friends and family may be helpful.
Examination for toxidromes, drug testing, or laboratory test may be helpful. Other laboratory test such as glucose, urea and electrolytes, paracetamol levels and salicylate levels are typically done. Negative drug-drug interactions have sometimes been misdiagnosed as an acute drug overdose, occasionally leading to the assumption of suicide.
Prevention
The distribution of naloxone to injection drug users and other opioid drug users decreases the risk of death from overdose. The Centers for Disease Control and Prevention (CDC) estimates that U.S. programs that prescribe take-home doses of naloxone to drug users and their caregivers, and train them in its use, have prevented 10,000 opioid overdose deaths. Healthcare institution-based naloxone prescription programs have also helped reduce rates of opioid overdose in the U.S. state of North Carolina, and have been replicated in the U.S. military. Nevertheless, scale-up of healthcare-based opioid overdose interventions is limited by providers' insufficient knowledge and negative attitudes towards prescribing take-home naloxone to prevent opioid overdose. Programs training police and fire personnel in opioid overdose response using naloxone have also shown promise in the U.S.
Supervised injection sites (also known as overdose prevention centers) have been used to help prevent drug overdoses by offering opioid reversal medications such as naloxone, medical assistance and treatment options. They also provide clean needles to help prevent the spread of diseases like HIV/AIDS and hepatitis.
Management
Stabilization of the person's airway, breathing, and circulation (ABCs) is the initial treatment of an overdose. Ventilation is considered when there is a low respiratory rate or when blood gases show the person to be hypoxic. Monitoring of the patient should continue before and throughout the treatment process, with particular attention to temperature, pulse, respiratory rate, blood pressure, urine output, electrocardiography (ECG) and O2 saturation. Poison control centers and medical toxicologists are available in many areas to provide guidance in overdoses both to physicians and to the general public.
Antidotes
Specific antidotes are available for certain overdoses. For example, naloxone is the antidote for opiates such as heroin or morphine. Similarly, benzodiazepine overdoses may be effectively reversed with flumazenil. As a nonspecific antidote, activated charcoal is frequently recommended if available within one hour of the ingestion and the ingestion is significant. Gastric lavage, syrup of ipecac, and whole bowel irrigation are rarely used.
Epidemiology and statistics
The UN gives a figure of 300,000 deaths per year in the world through drug overdose.
1,015,060 US residents died from drug overdoses from 1968 to 2019. In 2019, 22 out of every 100,000 US residents died from drug overdoses. From 1999 to February 2019 in the United States, more than 770,000 people had died from drug overdoses.
In the US around 107,500 people died in the 12-month period ending August 31, 2022, at a rate of 294 deaths per day. 70,630 people died from drug overdoses in 2019. The U.S. drug overdose death rate has gone from 2.5 per 100,000 people in 1968 to 21.5 per 100,000 in 2019.
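The per-capita figures quoted in this section are simple normalizations of annual death counts by population size. As a minimal sketch of that conversion (the population figure below is a rough assumption for illustration, not a value taken from this article):

```python
def deaths_per_100k(deaths: int, population: int) -> float:
    """Convert an annual death count into a rate per 100,000 residents."""
    return deaths / population * 100_000

# Illustrative check against the 2019 figures cited above, assuming a U.S.
# population of roughly 328 million (an assumption, not a figure from this text).
rate_2019 = deaths_per_100k(70_630, 328_000_000)
print(f"{rate_2019:.1f} deaths per 100,000")  # about 21.5, matching the quoted rate
```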
The National Center for Health Statistics reports that 19,250 people died of accidental poisoning in the U.S. in the year 2004 (eight deaths per 100,000 population).
In 2008 testimony before a Senate subcommittee, Leonard J. Paulozzi, a medical epidemiologist at the Centers for Disease Control and Prevention said that in 2005 more than 22,000 American people died due to overdoses, and the number is growing rapidly. Paulozzi also testified that all available evidence suggests unintentional overdose deaths are related to the increasing use of prescription drugs, especially opioid painkillers. However, the vast majority of overdoses are also attributable to alcohol. It is very rare for a victim of an overdose to have consumed just one drug. Most overdoses occur when drugs are ingested in combination with alcohol.
Drug overdose was the leading cause of injury death in 2013. Among people 25 to 64 years old, drug overdose caused more deaths than motor vehicle traffic crashes. There were 43,982 drug overdose deaths in the United States in 2013. Of these, 22,767 (51.8%) were related to prescription drugs.
Of the 22,767 deaths relating to prescription drug overdose in 2013, 16,235 (71.3%) involved opioid painkillers, and 6,973 (30.6%) involved benzodiazepines. Drug misuse and abuse caused about 2.5 million emergency department (ED) visits in 2011. Of these, more than 1.4 million ED visits were related to prescription drugs. Among those ED visits, 501,207 visits were related to anti-anxiety and insomnia medications, and 420,040 visits were related to opioid analgesics.
New CDC data from 2024 show that U.S. drug overdose deaths have significantly declined, potentially marking the first year with fewer than 100,000 fatalities since 2020. The data show a nearly 17% drop in reported overdose deaths during the 12 months ending in June, totaling 93,087. This is a notable decrease from the 111,615 deaths recorded in the same period ending in June 2023. While the opioid crisis continues to take a heavy toll, fentanyl remains a major driver, contributing to the majority of these fatalities.
See also
References
Further reading
External links
Causes of death
Medical emergencies
Drug culture
Suicide by poison
Substance-related disorders
Smoking | Drug overdose | [
"Environmental_science"
] | 2,120 | [
" medicaments and biological substances",
"Toxicology",
"Poisoning by drugs"
] |
313,267 | https://en.wikipedia.org/wiki/Neutron%20diffraction | Neutron diffraction or elastic neutron scattering is the application of neutron scattering to the determination of the atomic and/or magnetic structure of a material. A sample to be examined is placed in a beam of thermal or cold neutrons to obtain a diffraction pattern that provides information of the structure of the material. The technique is similar to X-ray diffraction but due to their different scattering properties, neutrons and X-rays provide complementary information: X-Rays are suited for superficial analysis, strong x-rays from synchrotron radiation are suited for shallow depths or thin specimens, while neutrons having high penetration depth are suited for bulk samples.
Instrumental and sample requirements
The technique requires a source of neutrons. Neutrons are usually produced in a nuclear reactor or spallation source. At a research reactor, other components are needed, including a crystal monochromator (in the case of thermal neutrons), as well as filters to select the desired neutron wavelength. Some parts of the setup may also be movable. For the long-wavelength neutrons, crystals cannot be used and gratings are used instead as diffractive optical components. At a spallation source, the time of flight technique is used to sort the energies of the incident neutrons (higher energy neutrons are faster), so no monochromator is needed, but rather a series of aperture elements synchronized to filter neutron pulses with the desired wavelength.
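As a rough sketch of how time of flight relates to wavelength (the flight-path length and times below are illustrative assumptions, not parameters of any real instrument), the de Broglie relation λ = h/(mv) converts a measured flight time over a known path into a neutron wavelength:

```python
# Time-of-flight wavelength sketch, assuming non-relativistic neutrons.
PLANCK_H = 6.626e-34      # J*s
NEUTRON_MASS = 1.675e-27  # kg

def wavelength_angstrom(flight_time_s: float, path_length_m: float) -> float:
    """de Broglie wavelength of a neutron from its time of flight."""
    velocity = path_length_m / flight_time_s           # m/s
    wavelength_m = PLANCK_H / (NEUTRON_MASS * velocity)
    return wavelength_m * 1e10                          # metres -> angstroms

# Slower neutrons (longer flight times) have longer wavelengths.
for t_ms in (2.0, 5.0, 10.0):
    lam = wavelength_angstrom(t_ms / 1000, 10.0)        # assumed 10 m flight path
    print(f"{t_ms:4.1f} ms over 10 m -> {lam:.2f} Å")
```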
The technique is most commonly performed as powder diffraction, which only requires a polycrystalline powder. Single crystal work is also possible, but the crystals must be much larger than those that are used in single-crystal X-ray crystallography. It is common to use crystals that are about 1 mm³.
The technique also requires a device that can detect the neutrons after they have been scattered.
In summary, the main disadvantage of neutron diffraction is the requirement for a nuclear reactor. For single crystal work, the technique requires relatively large crystals, which are usually challenging to grow. The advantages of the technique are many: sensitivity to light atoms, the ability to distinguish isotopes, the absence of radiation damage, and a penetration depth of several centimetres.
Nuclear scattering
Like all quantum particles, neutrons can exhibit wave phenomena typically associated with light or sound. Diffraction is one of these phenomena; it occurs when waves encounter obstacles whose size is comparable with the wavelength. If the wavelength of a quantum particle is short enough, atoms or their nuclei can serve as diffraction obstacles. When a beam of neutrons emanating from a reactor is slowed and selected properly by their speed, their wavelength lies near one angstrom (0.1 nanometer), the typical separation between atoms in a solid material. Such a beam can then be used to perform a diffraction experiment. Impinging on a crystalline sample, it will scatter under a limited number of well-defined angles, according to the same Bragg's law that describes X-ray diffraction.
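Where those well-defined angles fall can be estimated directly from Bragg's law, nλ = 2d·sin θ. The short sketch below assumes a 1 Å neutron wavelength and a few illustrative lattice-plane spacings; the values are examples only, not data from any particular experiment.

```python
import math

def bragg_angle_deg(wavelength: float, d_spacing: float, order: int = 1) -> float:
    """Bragg angle theta (degrees) satisfying n*lambda = 2*d*sin(theta)."""
    s = order * wavelength / (2 * d_spacing)
    if s > 1:
        raise ValueError("No diffraction: n*lambda exceeds 2*d")
    return math.degrees(math.asin(s))

# Assumed d-spacings in angstroms, for a 1.0 Å neutron beam.
for d in (3.5, 2.0, 1.2):
    print(f"d = {d:.1f} Å -> 2θ = {2 * bragg_angle_deg(1.0, d):.1f}°")
```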
Neutrons and X-rays interact with matter differently. X-rays interact primarily with the electron cloud surrounding each atom. The contribution to the diffracted X-ray intensity is therefore larger for atoms with larger atomic number (Z). On the other hand, neutrons interact directly with the nucleus of the atom, and the contribution to the diffracted intensity depends on each isotope; for example, regular hydrogen and deuterium contribute differently. It is also often the case that light (low Z) atoms contribute strongly to the diffracted intensity, even in the presence of large Z atoms. The scattering length varies from isotope to isotope rather than linearly with the atomic number. An element like vanadium strongly scatters X-rays, but its nuclei hardly scatter neutrons, which is why it is often used as a container material. Non-magnetic neutron diffraction is directly sensitive to the positions of the nuclei of the atoms.
The nuclei of atoms, from which neutrons scatter, are tiny. Furthermore, there is no need for an atomic form factor to describe the shape of the electron cloud of the atom and the scattering power of an atom does not fall off with the scattering angle as it does for X-rays. Diffractograms therefore can show strong, well-defined diffraction peaks even at high angles, particularly if the experiment is done at low temperatures. Many neutron sources are equipped with liquid helium cooling systems that allow data collection at temperatures down to 4.2 K. The superb high angle (i.e. high resolution) information means that the atomic positions in the structure can be determined with high precision. On the other hand, Fourier maps (and to a lesser extent difference Fourier maps) derived from neutron data suffer from series termination errors, sometimes so much that the results are meaningless.
Magnetic scattering
Although neutrons are uncharged, they carry a magnetic moment, and therefore interact with magnetic moments, including those arising from the electron cloud around an atom. Neutron diffraction can therefore reveal the microscopic magnetic structure of a material.
Magnetic scattering does require an atomic form factor as it is caused by the much larger electron cloud around the tiny nucleus. The intensity of the magnetic contribution to the diffraction peaks will therefore decrease towards higher angles.
Uses
Neutron diffraction can be used to determine the static structure factor of gases, liquids or amorphous solids. Most experiments, however, aim at the structure of crystalline solids, making neutron diffraction an important tool of crystallography.
Neutron diffraction is closely related to X-ray powder diffraction. In fact, the single crystal version of the technique is less commonly used because currently available neutron sources require relatively large samples and large single crystals are hard or impossible to come by for most materials. Future developments, however, may well change this picture. Because the data typically take the form of a 1D powder diffractogram, they are usually processed using Rietveld refinement. In fact, the latter found its origin in neutron diffraction (at Petten in the Netherlands) and was later extended for use in X-ray diffraction.
One practical application of elastic neutron scattering/diffraction is that the lattice constant of metals and other crystalline materials can be very accurately measured. Together with an accurately aligned micropositioner, a map of the lattice constant through the metal can be derived. This can easily be converted to the stress field experienced by the material. This has been used to analyse stresses in aerospace and automotive components, to give just two examples. The high penetration depth permits measuring residual stresses in bulk components such as crankshafts, pistons, rails, and gears. This technique has led to the development of dedicated stress diffractometers, such as the ENGIN-X instrument at the ISIS neutron source.
Neutron diffraction can also be employed to give insight into the 3D structure of any material that diffracts.
Another use is for the determination of the solvation number of ion pairs in electrolyte solutions.
The magnetic scattering effect has been used since the establishment of the neutron diffraction technique to quantify magnetic moments in materials, and study the magnetic dipole orientation and structure. One of the earliest applications of neutron diffraction was in the study of magnetic dipole orientations in antiferromagnetic transition metal oxides such as manganese, iron, nickel, and cobalt oxides. These experiments, first performed by Clifford Shull, were the first to show the existence of the antiferromagnetic arrangement of magnetic dipoles in a material structure. Now, neutron diffraction continues to be used to characterize newly developed magnetic materials.
Hydrogen, null-scattering and contrast variation
Neutron diffraction can be used to establish the structure of low atomic number materials like proteins and surfactants much more easily with lower flux than at a synchrotron radiation source. This is because some low atomic number materials have a higher cross section for neutron interaction than higher atomic weight materials.
One major advantage of neutron diffraction over X-ray diffraction is that the latter is rather insensitive to the presence of hydrogen (H) in a structure, whereas the nuclei 1H and 2H (i.e. Deuterium, D) are strong scatterers for neutrons. The greater scattering power of protons and deuterons means that the position of hydrogen in a crystal and its thermal motions can be determined with greater precision by neutron diffraction. The structures of metal hydride complexes, e.g., Mg2FeH6 have been assessed by neutron diffraction.
The neutron scattering lengths bH = −3.7406(11) fm and bD = 6.671(4) fm, for H and D respectively, have opposite sign, which allows the technique to distinguish them. In fact, there is a particular isotope ratio for which the contribution of the element would cancel; this is called null-scattering.
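Using the scattering lengths quoted above, the H:D mixing ratio at which the element's coherent contribution cancels can be estimated by requiring the average scattering length to vanish. This is only a sketch of the idea; it ignores complications such as the large incoherent scattering from 1H discussed below.

```python
# Average coherent scattering length of an H/D mixture:
#   b_avg = x * b_H + (1 - x) * b_D, with x the fraction of 1H.
# Null scattering (b_avg = 0) occurs at x = b_D / (b_D - b_H).
b_H = -3.7406  # fm, value for 1H quoted in the text
b_D = 6.671    # fm, value for 2H (deuterium) quoted in the text

x_null = b_D / (b_D - b_H)
print(f"Null-scattering mixture: {x_null:.3f} H : {1 - x_null:.3f} D")
# Roughly 0.64 : 0.36, i.e. about 64% light hydrogen.
```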
It is undesirable to work with the relatively high concentration of H in a sample. The scattering intensity by H-nuclei has a large inelastic component, which creates a large continuous background that is more or less independent of scattering angle. The elastic pattern typically consists of sharp Bragg reflections if the sample is crystalline. They tend to drown in the inelastic background. This is even more serious when the technique is used for the study of liquid structure. Nevertheless, by preparing samples with different isotope ratios, it is possible to vary the scattering contrast enough to highlight one element in an otherwise complicated structure. The variation of other elements is possible but usually rather expensive. Hydrogen is inexpensive and particularly interesting, because it plays an exceptionally large role in biochemical structures and is difficult to study structurally in other ways.
History
The neutron was discovered in the early 1930s, and neutron diffraction was first observed in 1936 by two groups, von Halban and Preiswerk, and Mitchell and Powers. In 1944, Ernest O. Wollan, with a background in X-ray scattering from his PhD work under Arthur Compton, recognized the potential for applying thermal neutrons from the newly operational X-10 nuclear reactor to crystallography. Joined by Clifford G. Shull, they developed neutron diffraction throughout the 1940s.
The first neutron diffraction experiments were carried out in 1945 by Ernest O. Wollan using the Graphite Reactor at Oak Ridge. He was joined shortly thereafter (June 1946) by Clifford Shull, and together they established the basic principles of the technique, and applied it successfully to many different materials, addressing problems like the structure of ice and the microscopic arrangements of magnetic moments in materials. For this achievement, Shull was awarded one half of the 1994 Nobel Prize in Physics. (Wollan died in 1984). (The other half of the 1994 Nobel Prize for Physics went to Bert Brockhouse for development of the inelastic scattering technique at the Chalk River facility of AECL. This also involved the invention of the triple axis spectrometer). The delay between the achieved work (1946) and the Nobel Prize awarded to Brockhouse and Shull (1994) brings them close to the delay between the invention by Ernst Ruska of the electron microscope (1933) - also in the field of particle optics - and his own Nobel prize (1986). This in turn is close to the record of 55 years between Peyton Rous's discovery and his award of the Nobel Prize in 1966.
See also
Crystallography
Crystallographic database
Electron diffraction
Grazing incidence diffraction
Inelastic neutron scattering
X-ray diffraction computed tomography
References
Further reading
External links
National Institute of Standards and Technology Center for Neutron Research
From Bragg’s law to neutron diffraction
Integrated Infrastructure Initiative for Neutron Scattering and Muon Spectroscopy (NMI3) - a European consortium of 18 partner organisations from 12 countries, including all major facilities in the fields of neutron scattering and muon spectroscopy
Frank Laboratory of Neutron Physics of Joint Institute for Nuclear Research (JINR)
IAEA neutron beam instrument database
Diffraction
Neutron scattering | Neutron diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,458 | [
"Spectrum (physical sciences)",
"Neutron scattering",
"Crystallography",
"Diffraction",
"Scattering",
"Spectroscopy"
] |
313,303 | https://en.wikipedia.org/wiki/Star%20polygon | In geometry, a star polygon is a type of non-convex polygon. Regular star polygons have been studied in depth; while star polygons in general appear not to have been formally defined, certain notable ones can arise through truncation operations on regular simple or star polygons.
Branko Grünbaum identified two primary usages of this terminology by Johannes Kepler, one corresponding to the regular star polygons with intersecting edges that do not generate new vertices, and the other one to the isotoxal concave simple polygons.
Polygrams include polygons like the pentagram, but also compound figures like the hexagram.
One definition of a star polygon, used in turtle graphics, is a polygon having q ≥ 2 turns (q is called the turning number or density), like in spirolaterals.
Names
Star polygon names combine a numeral prefix, such as penta-, with the Greek suffix -gram (in this case generating the word pentagram). The prefix is normally a Greek cardinal, but synonyms using other prefixes exist. For example, a nine-pointed polygon or enneagram is also known as a nonagram, using the ordinal nona from Latin. The -gram suffix derives from γραμμή (grammḗ), meaning a line. The name star polygon reflects the resemblance of these shapes to the diffraction spikes of real stars.
Regular star polygon
A regular star polygon is a self-intersecting, equilateral, and equiangular polygon.
A regular star polygon is denoted by its Schläfli symbol {p/q}, where p (the number of vertices) and q (the density) are relatively prime (they share no factors) and where q ≥ 2. The density of a polygon can also be called its turning number: the sum of the turn angles of all the vertices, divided by 360°.
The symmetry group of {p/q} is the dihedral group Dp, of order 2p, independent of q.
Regular star polygons were first studied systematically by Thomas Bradwardine, and later Johannes Kepler.
Construction via vertex connection
Regular star polygons can be created by connecting one vertex of a regular p-sided simple polygon to another vertex, non-adjacent to the first one, and continuing the process until the original vertex is reached again. Alternatively, for integers p and q, it can be considered as being constructed by connecting every qth point out of p points regularly spaced in a circular placement. For instance, in a regular pentagon, a five-pointed star can be obtained by drawing a line from the 1st to the 3rd vertex, from the 3rd to the 5th vertex, from the 5th to the 2nd vertex, from the 2nd to the 4th vertex, and from the 4th to the 1st vertex.
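A small sketch of this construction (illustrative code, not taken from any cited source): place p points evenly on a circle and repeatedly step q points ahead until the starting vertex is reached again.

```python
import math

def star_polygon(p: int, q: int):
    """Visiting order and coordinates of {p/q} on a unit circle."""
    order = [(i * q) % p for i in range(p)]             # every q-th vertex
    points = [(math.cos(2 * math.pi * k / p),
               math.sin(2 * math.pi * k / p)) for k in order]
    return order, points

order, _ = star_polygon(5, 2)
print(order)        # [0, 2, 4, 1, 3]: every 2nd vertex of a pentagon -> pentagram

# If p and q share a factor, vertices repeat before all p are visited and the
# figure is degenerate (e.g. {6/2} traces a triangle), hence the coprimality
# requirement for a regular star polygon.
print(star_polygon(6, 2)[0])   # [0, 2, 4, 0, 2, 4]
```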
If q ≥ p/2, then the construction of {p/q} will result in the same polygon as {p/(p − q)}; connecting every third vertex of the pentagon will yield an identical result to that of connecting every second vertex. However, the vertices will be reached in the opposite direction, which makes a difference when retrograde polygons are incorporated in higher-dimensional polytopes. For example, an antiprism formed from a prograde pentagram {5/2} results in a pentagrammic antiprism; the analogous construction from a retrograde "crossed pentagram" {5/3} results in a pentagrammic crossed-antiprism. Another example is the tetrahemihexahedron, which can be seen as a "crossed triangle" {3/2} cuploid.
Degenerate regular star polygons
If p and q are not coprime, a degenerate polygon will result with coinciding vertices and edges. For example, {6/2} will appear as a triangle, but can be labeled with two sets of vertices: 1-3 and 4-6. This should be seen not as two overlapping triangles, but as a double-winding single unicursal hexagon.
Construction via stellation
Alternatively, a regular star polygon can also be obtained as a sequence of stellations of a convex regular core polygon. Constructions based on stellation also allow regular polygonal compounds to be obtained in cases where the density q and amount p of vertices are not coprime. When constructing star polygons from stellation, however, if q > p/2, the lines will instead diverge infinitely, and if q = p/2, the lines will be parallel, with both resulting in no further intersection in Euclidean space. However, it may be possible to construct some such polygons in spherical space, similarly to the monogon and digon; such polygons do not yet appear to have been studied in detail.
Isotoxal star simple polygons
When the intersecting line segments are removed from a regular star n-gon, the resulting figure is no longer regular, but can be seen as an isotoxal concave simple 2n-gon, alternating vertices at two different radii. Branko Grünbaum, in Tilings and patterns, represents such a star that matches the outline of a regular polygram {n/d} as |n/d|, or more generally with {n𝛼}, which denotes an isotoxal concave or convex simple 2n-gon with outer internal angle 𝛼.
For |n/d|, the outer internal angle is 𝛼 = 180(1 − 2d/n) degrees, necessarily, and the inner (new) vertices have an external angle of β = 180(1 − 2(d − 1)/n) degrees, necessarily.
For {n𝛼}, the outer internal and inner external angles, also denoted by 𝛼 and β, do not have to match those of any regular polygram {n/d}; however, β = 𝛼 + 360/n degrees and 𝛼 < 180(1 − 2/n) degrees, necessarily (here, {n𝛼} is concave).
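As a quick numerical check of these relations (a sketch only; the particular n and d values are arbitrary examples), the outer and inner angles of |n/d| can be computed and verified against the interior-angle sum of a simple 2n-gon:

```python
def isotoxal_star_angles(n: int, d: int):
    """Outer internal angle and inner external angle (degrees) of |n/d|."""
    alpha = 180 * (1 - 2 * d / n)          # at the n outer (point) vertices
    beta = 180 * (1 - 2 * (d - 1) / n)     # external angle at the n inner vertices
    return alpha, beta

for n, d in ((5, 2), (7, 2), (7, 3), (8, 3)):
    alpha, beta = isotoxal_star_angles(n, d)
    # Interior angle at an inner (reflex) vertex is 360 - beta; all interior
    # angles of a simple 2n-gon must sum to (2n - 2) * 180 degrees.
    total = n * alpha + n * (360 - beta)
    print(f"|{n}/{d}|: alpha = {alpha:6.2f}, beta = {beta:6.2f}, "
          f"sum = {total:.0f} (expected {(2 * n - 2) * 180})")
```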
Examples in tilings
These polygons are often seen in tiling patterns. The parametric angle 𝛼 (in degrees or radians) can be chosen to match internal angles of neighboring polygons in a tessellation pattern. In his 1619 work Harmonices Mundi, among periodic tilings, Johannes Kepler includes nonperiodic tilings, like that with three regular pentagons and one regular star pentagon fitting around certain vertices, 5.5.5.5/2, and related to modern Penrose tilings.
Interiors
The interior of a star polygon may be treated in different ways. Three such treatments are illustrated for a pentagram. Branko Grünbaum and Geoffrey Shephard consider two of them, as regular star n-gons and as isotoxal concave simple 2n-gons.
These three treatments are:
Where a line segment occurs, one side is treated as outside and the other as inside. This is shown in the left hand illustration and commonly occurs in computer vector graphics rendering.
The number of times that the polygonal curve winds around a given region determines its density. The exterior is given a density of 0, and any region of density > 0 is treated as internal. This is shown in the central illustration and commonly occurs in the mathematical treatment of polyhedra. (However, for non-orientable polyhedra, density can only be considered modulo 2 and hence, in those cases, for consistency, the first treatment is sometimes used instead.)
Wherever a line segment may be drawn between two sides, the region in which the line segment lies is treated as inside the figure. This is shown in the right hand illustration and commonly occurs when making a physical model.
When the area of the polygon is calculated, each of these approaches yields a different result.
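For example, applying the standard shoelace formula to the five self-intersecting vertices of a pentagram yields the density-weighted area of the second treatment, in which the central pentagon is counted twice; treating the outline as a simple ten-vertex polygon (the first treatment) counts it only once and gives a smaller value. A minimal, purely illustrative sketch:

```python
import math

def shoelace_area(points):
    """Signed area of a closed (possibly self-intersecting) polygon.

    For a self-intersecting curve this equals the sum over regions of
    (winding number x region area), i.e. the density-weighted area.
    """
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return area / 2

# Pentagram {5/2} traversed in vertex-connection order on a unit circle.
pentagram = [(math.cos(2 * math.pi * (2 * i) / 5),
              math.sin(2 * math.pi * (2 * i) / 5)) for i in range(5)]
print(f"density-weighted area: {abs(shoelace_area(pentagram)):.4f}")
```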
In art and culture
Star polygons feature prominently in art and culture. Such polygons may or may not be regular, but they are always highly symmetrical. Examples include:
The {5/2} star pentagon (pentagram) is also known as a pentalpha or pentangle, and historically has been considered by many magical and religious cults to have occult significance.
The {7/2} and {7/3} star polygons (heptagrams) also have occult significance, particularly in the Kabbalah and in Wicca.
The {8/3} star polygon (octagram) is a frequent geometrical motif in Mughal Islamic art and architecture; the first is on the emblem of Azerbaijan.
An eleven pointed star called the hendecagram was used on the tomb of Shah Nematollah Vali.
See also
List of regular polytopes and compounds#Stars
Five-pointed star
Magic star
Moravian star
Pentagramma mirificum
Regular star 4-polytope
Rub el Hizb
Star (glyph)
Star polyhedron, Kepler–Poinsot polyhedron, and uniform star polyhedron
Starfish
References
Cromwell, P.; Polyhedra, CUP, Hbk. 1997. Pbk. 1999. p. 175
Grünbaum, B. and G. C. Shephard; Tilings and Patterns, New York: W. H. Freeman & Co. (1987).
Grünbaum, B.; Polyhedra with Hollow Faces, Proc of NATO-ASI Conference on Polytopes ... etc. (Toronto, 1993), ed. T. Bisztriczky et al., Kluwer Academic (1994), pp. 43–70.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things, 2008 (Chapter 26, p. 404: Regular star-polytopes Dimension 2)
Branko Grünbaum, Metamorphoses of polygons, published in The Lighter Side of Mathematics: Proceedings of the Eugène Strens Memorial Conference on Recreational Mathematics and its History (1994)
External links
Star symbols | Star polygon | [
"Mathematics"
] | 2,055 | [
"Symbols",
"Star symbols"
] |
313,371 | https://en.wikipedia.org/wiki/Rip%20current | A rip current (or just rip) is a specific type of water current that can occur near beaches where waves break. A rip is a strong, localized, and narrow current of water that moves directly away from the shore by cutting through the lines of breaking waves, like a river flowing out to sea. The force of the current in a rip is strongest and fastest next to the surface of the water.
Rip currents can be hazardous to people in the water. Swimmers who are caught in a rip current and who do not understand what is happening, or who may not have the necessary water skills, may panic, or they may exhaust themselves by trying to swim directly against the flow of water. Because of these factors, rip currents are the leading cause of rescues by lifeguards at beaches. In the United States they cause an average of 71 deaths by drowning per year.
A rip current is not the same thing as undertow, although some people use that term incorrectly when they are talking about a rip current. Contrary to popular belief, neither rip nor undertow can pull a person down and hold them under the water. A rip simply carries floating objects, including people, out to just beyond the zone of the breaking waves, at which point the current dissipates and releases everything it is carrying.
Causes and occurrence
A rip current forms because wind and breaking waves push surface water towards the land. This causes a slight rise in the water level along the shore. This excess water will tend to flow back to the open water via the route of least resistance. When there is a local area which is slightly deeper, such as a break in an offshore sand bar or reef, this can allow water to flow offshore more easily, and this will initiate a rip current through that gap.
Water that has been pushed up near the beach flows along the shore towards the outgoing rip as "feeder currents". The excess water flows out at a right angle to the beach, in a tight current called the "neck" of the rip. The "neck" is where the flow is most rapid. When the water in the rip current reaches outside of the lines of breaking waves, the flow disperses sideways, loses power, and dissipates in what is known as the "head" of the rip.
Rip currents can form along the coasts of oceans, seas, and large lakes, wherever there are waves of sufficient energy. Rip currents often occur on a gradually shelving shore, where breaking waves approach the shore parallel to it, or where underwater topography encourages outflow at one specific area. Baïnes are one of the seabed patterns identified as producing rip currents. The location of rip currents can be difficult to predict. Some tend to recur always in the same places, but others can appear and disappear suddenly at various locations along the beach. The appearance and disappearance of rip currents is dependent upon the bottom topography and the direction from which the surf and swells are coming.
Rip currents occur wherever there is strong longshore variability in wave breaking. This variability may be caused by such features as sandbars, by piers and jetties, and even by crossing wave trains. They are often located in places where there is a gap in a reef, or low area on a sandbar. Rip currents, once they have formed, may deepen the channel through a sandbar.
Rip currents are usually quite narrow, but they tend to be more common, wider, and faster, when and where breaking waves are large and powerful. Local underwater topography makes some beaches more likely to have rip currents. A few beaches are notorious in this respect.
Although rip tide is a misnomer, in areas of significant tidal range, rip currents may only occur at certain stages of the tide, when the water is shallow enough to cause the waves to break over a sand bar, but deep enough for the broken wave to flow over the bar. In parts of the world with a big difference between high tide and low tide, and where the shoreline shelves gently, the distance between a bar and the shoreline may vary from a few meters to a kilometer or more, depending whether it is high tide or low tide.
A fairly common misconception is that rip currents can pull a swimmer down, under the surface of the water. This is not true, and in reality a rip current is strongest close to the surface, as the flow near the bottom is slowed by friction.
The surface of a rip current can often appear to be a relatively smooth area of water, without any breaking waves, and this deceptive appearance may cause some beach-goers to believe that it is a suitable place to enter the water.
Technical description
A more detailed and technical description of rip currents requires understanding the concept of radiation stress. Radiation stress is the force (or momentum flux) that is exerted on the water column by the presence of the wave. When a wave reaches shallow water and shoals, it increases in height prior to breaking. During this increase in height, radiation stress increases, because of the force exerted by the weight of the water that has been pushed upwards.
To balance this, the local mean surface level drops. This is known as the setdown. When the wave breaks and starts reducing in height, the radiation stress decreases as the amount of water that is elevated decreases. When this happens, the mean surface level increases — this is known as the setup.
In the formation of a rip current, a wave propagates over a sandbar with a gap in it. When this happens, most of the wave breaks on the sandbar, leading to "setup". The part of the wave that propagates over the gap does not break, and the "setdown" continues in that part. Because of this phenomenon, the mean water surface over the rest of the sandbar is higher than that which is over the gap. The result is a strong flow outward through the gap. This strong flow is the rip current.
The vorticity and inertia of rip currents have been studied. From a model of the vorticity of a rip current done at the Scripps Institution of Oceanography, it was found that as a fast rip current extends away from shallow water, the vorticity of the current increases, and the width of the current decreases. This model acknowledges that friction plays a role and waves are irregular in nature. From data from Sector-Scanning Doppler Sonar at the Scripps Institution of Oceanography, it was found that rip currents in La Jolla, California, lasted several minutes, that they recurred one to four times per hour, and that they created a wedge with a 45° arc and a radius of 200–400 meters.
Visible characteristics
Rip currents have a characteristic appearance, and, with some experience, they can be visually identified from the shore before entering the water. This is helpful to lifeguards, swimmers, surfers, boaters, divers and other water users, who may need to avoid a rip, or in some cases make use of the flow.
Rip currents often look somewhat like a road or river running straight out to sea. They are easiest to notice and identify when the zone of breaking waves is viewed from a high vantage point. The following are some visual characteristics that can be used to identify a rip:
A noticeable break in the pattern of the waves — the water often looks flat at the rip, in contrast to the lines of breaking waves on either side of the rip.
A "river" of foam — the surface of the rip sometimes looks foamy, because the current is carrying foam from the surf out to open water.
Different color — the rip may differ in color from the surrounding water. It is often more opaque, cloudier, or muddier, and so, depending on the angle of the sun, the rip may show as darker or lighter than the surrounding water.
It is sometimes possible to see that foam or floating debris on the surface of the rip is moving out, away from the shore. In contrast, in the surrounding areas of breaking waves, floating objects and foam are being pushed towards the shore.
These characteristics are helpful in learning to recognize and understand the nature of rip currents. Learning these signs can enable a person to recognize the presence and position of rips before entering the water, which is an important skill as studies show the majority of people are unable to identify a rip current and therefore unable to identify safe places to swim.
In the United States, some beaches have signs created by the National Oceanic and Atmospheric Administration (NOAA) and United States Lifesaving Association, explaining what a rip current is and how to escape one. These signs are titled, "Rip Currents; Break the Grip of the Rip". Two of these signs are shown in the image at the top of this article. Beachgoers can get information from lifeguards, who are always watching for rip currents, and who will move their safety flags so that swimmers can avoid rips.
Danger to swimmers
Rip currents are a potential source of danger for people in shallow water with breaking waves, whether this is in seas, oceans or large lakes. Rip currents are the proximate cause of 80% of rescues carried out by beach lifeguards.
Rip currents typically flow at about the pace of a slow walk, but the fastest rip currents flow faster than any human can swim. Most rip currents are fairly narrow, and even the widest rip currents are not very wide. Swimmers can usually exit the rip easily by swimming at a right angle to the flow, parallel to the beach. Swimmers who are unaware of this fact may exhaust themselves trying unsuccessfully to swim directly against the flow. The flow of the current fades out completely at the head of the rip, outside the zone of the breaking waves, so there is a definite limit to how far the swimmer will be taken out to sea by the flow of a rip current.
In a rip current, death by drowning occurs when a person has limited water skills and panics, or when a swimmer persists in trying to swim to shore against a strong rip current, and eventually becomes exhausted and drowns.
According to NOAA, rip currents caused an average of 71 deaths annually in the United States over the ten years ending in 2022 (with 69 in 2022).
A 2013 Australian study found that rips killed more people in Australia than bushfires, floods, cyclones and shark attacks combined.
Survival
People caught in a rip current may notice that they are moving away from the shore quite rapidly. Often, it is not possible to swim directly back to shore against a rip current, so this is not recommended. Contrary to popular misunderstanding, a rip does not pull a swimmer under the water. It carries the swimmer away from the shore in a narrow band of moving water.
A rip current is like a moving treadmill, which the swimmer can get out of quite easily by swimming at a right angle, across the current, i.e. parallel to the shore in either direction. Rip currents are usually not very wide, so getting out of one only takes a few strokes. Once out of the rip current, getting back to shore is not difficult, since waves are breaking, and floating objects, including swimmers, will be pushed by the waves towards the shore.
As an alternative, people who are caught in a strong rip can simply relax, either floating or treading water, and allow the current to carry them until it dissipates completely once it is beyond the surf line. Then the person can signal for help, or swim back through the surf, doing so diagonally, away from the rip and towards the shore.
It is necessary for coastal swimmers to understand the danger of rip currents, to learn how to recognize them, and how to deal with them. And when possible, it is necessary that people enter the water only in areas where lifeguards are on duty.
In a planned trial in a large rip current at Muriwai Beach in New Zealand, an Australian researcher from the School of Biological, Earth and Environmental Sciences, UNSW Sydney found that "just swim to the side" would not work as the rip current was too wide to see its sides, and said that, despite a rescue boat being near, he was unable to relax and not panic. The current took him 300 metres along the beach in a channel feeding the rip current, and then 400 metres offshore at "speeds approaching those of swimming world records".
Uses
Experienced and knowledgeable water users, including surfers, body boarders, divers, surf lifesavers and kayakers, when they wish to get out beyond the breaking waves, will sometimes use a rip current as a rapid and effortless means of transportation.
See also
Cross sea
Longshore drift
Rip current statement – warnings issued by the U.S. National Weather Service
Undertow (water waves)
Rip tide
Baïne
References
External links
NOAA glossary of terms used in describing rip currents
by Surf Life Saving Australia showcasing several signs to spot rips.
Physical oceanography
Bodies of water
Surf lifesaving
Oceanographical terminology | Rip current | [
"Physics"
] | 2,641 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
313,373 | https://en.wikipedia.org/wiki/Rip%20tide | A rip tide, or riptide, is a strong offshore current that is caused by the tide pulling water through an inlet along a barrier beach, at a lagoon or inland marina where tide water flows steadily out to sea during ebb tide. It is a strong tidal flow of water within estuaries and other enclosed tidal areas. The riptides become the strongest where the flow is constricted. When there is a falling or ebbing tide, the outflow water is strongly flowing through an inlet toward the sea, especially once stabilised by jetties.
Dynamics
During these falling and ebbing tides, a riptide can carry a person far offshore. For example, the ebbing tide at Shinnecock Inlet in Southampton, New York, extends more than offshore. Because of this, riptides are typically more powerful than rip currents.
During slack tide, the water is motionless for a short period of time until the flooding or rising tide starts pushing the sea water landward through the inlet. Riptides also occur at constricted areas in bays and lagoons where there are no waves near an inlet.
These strong, reversing currents can also be termed ebb jets, flood jet, or tidal jets by coastal engineers because they carry large quantities of sand outward that form sandbars far out in the ocean or into the bay outside the inlet channel. The term "ebb jet" would be used for a tidal current leaving an enclosed tidal area, and "flood jet" for the equivalent tidal current entering it.
Rip tide and rip currents
The term rip tide is often incorrectly used to refer to rip currents, which are not tidal flows. A rip current is a strong, narrow jet of water that moves away from the beach and into the ocean as a result of local wave motion. Rip currents can flow quickly, are unpredictable, and come about from what happens to waves as they interact with the shape of the sea bed. In contrast, a rip tide is caused by tidal movements, as opposed to wave action, and is a predictable rise and fall of the water level.
The United States National Oceanic and Atmospheric Administration comments:
Rip currents are not rip tides. A specific type of current associated with tides may include both the ebb and flood tidal currents that are caused by egress and ingress of the tide through inlets and the mouths of estuaries, embayments, and harbors. These currents may cause drowning deaths, but these tidal currents or tidal jets are separate and distinct phenomena from rip currents. Recommended terms for these phenomena include ebb jet, flood jet, or tidal jet.
Surviving rip currents
People often drown by swimming directly against a rip current, which tires them out. People are advised to not fight the current, which is too strong for any swimmer. People should not try to swim directly inwards, towards the beach. They should relax, and swim parallel to the beach. Eventually, they will be out of the rip current.
See also
Rip current
Baïne
Tidal bore
References
Oceanography
Physical oceanography
Bodies of water
Tides | Rip tide | [
"Physics",
"Environmental_science"
] | 620 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
313,384 | https://en.wikipedia.org/wiki/Long%20division | In arithmetic, long division is a standard division algorithm suitable for dividing multi-digit Hindu-Arabic numerals (positional notation) that is simple enough to perform by hand. It breaks down a division problem into a series of easier steps.
As in all division problems, one number, called the dividend, is divided by another, called the divisor, producing a result called the quotient. It enables computations involving arbitrarily large numbers to be performed by following a series of simple steps. The abbreviated form of long division is called short division, which is almost always used instead of long division when the divisor has only one digit.
History
Related algorithms have existed since the 12th century.
Al-Samawal al-Maghribi (1125–1174) performed calculations with decimal numbers that essentially require long division, leading to infinite decimal results, but without formalizing the algorithm.
Caldrini (1491) is the earliest printed example of long division, known as the Danda method in medieval Italy, and it became more practical with the introduction of decimal notation for fractions by Pitiscus (1608).
The specific algorithm in modern use was introduced by Henry Briggs around 1600.
Education
Inexpensive calculators and computers have become the most common way to solve division problems, eliminating a traditional mathematical exercise and decreasing the educational opportunity to show how to do so by paper and pencil techniques. (Internally, those devices use one of a variety of division algorithms, the faster of which rely on approximations and multiplications to achieve the tasks.) In North America, long division has been especially targeted for de-emphasis or even elimination from the school curriculum by reform mathematics, though it has been traditionally introduced in the 4th, 5th or even 6th grades.
Method
In English-speaking countries, long division does not use the division slash or division sign symbols but instead constructs a tableau. The divisor is separated from the dividend by a right parenthesis or vertical bar; the dividend is separated from the quotient by a vinculum (i.e., an overbar). The combination of these two symbols is sometimes known as a long division symbol or division bracket. It developed in the 18th century from an earlier single-line notation separating the dividend from the quotient by a left parenthesis.
The process is begun by dividing the left-most digit of the dividend by the divisor. The quotient (rounded down to an integer) becomes the first digit of the result, and the remainder is calculated (this step is notated as a subtraction). This remainder carries forward when the process is repeated on the following digit of the dividend (notated as 'bringing down' the next digit to the remainder). When all digits have been processed and no remainder is left, the process is complete.
An example is shown below, representing the division of 500 by 4 (with a result of 125).
125 (Explanations)
4)500
4 ( 4 × 1 = 4)
10 ( 5 - 4 = 1)
8 ( 4 × 2 = 8)
20 (10 - 8 = 2)
20 ( 4 × 5 = 20)
0 (20 - 20 = 0)
A more detailed breakdown of the steps goes as follows:
Find the shortest sequence of digits starting from the left end of the dividend, 500, that the divisor 4 goes into at least once. In this case, this is simply the first digit, 5. The largest number that the divisor 4 can be multiplied by without exceeding 5 is 1, so the digit 1 is put above the 5 to start constructing the quotient.
Next, the 1 is multiplied by the divisor 4, to obtain the largest whole number that is a multiple of the divisor 4 without exceeding the 5 (4 in this case). This 4 is then placed under and subtracted from the 5 to get the remainder, 1, which is placed under the 4 under the 5.
Afterwards, the first as-yet unused digit in the dividend, in this case the first digit 0 after the 5, is copied directly underneath itself and next to the remainder 1, to form the number 10.
At this point the process is repeated enough times to reach a stopping point: The largest number by which the divisor 4 can be multiplied without exceeding 10 is 2, so 2 is written above as the second leftmost quotient digit. This 2 is then multiplied by the divisor 4 to get 8, which is the largest multiple of 4 that does not exceed 10; so 8 is written below 10, and the subtraction 10 minus 8 is performed to get the remainder 2, which is placed below the 8.
The next digit of the dividend (the last 0 in 500) is copied directly below itself and next to the remainder 2 to form 20. Then the largest number by which the divisor 4 can be multiplied without exceeding 20, which is 5, is placed above as the third leftmost quotient digit. This 5 is multiplied by the divisor 4 to get 20, which is written below and subtracted from the existing 20 to yield the remainder 0, which is then written below the second 20.
At this point, since there are no more digits to bring down from the dividend and the last subtraction result was 0, we can be assured that the process finished.
If the last remainder when we ran out of dividend digits had been something other than 0, there would have been two possible courses of action:
We could just stop there and say that the dividend divided by the divisor is the quotient written at the top with the remainder written at the bottom, and write the answer as the quotient followed by a fraction that is the remainder divided by the divisor.
We could extend the dividend by writing it as, say, 500.000... and continue the process (using a decimal point in the quotient directly above the decimal point in the dividend), in order to get a decimal answer, as in the following example.
31.75
4)127.00
12 (12 ÷ 4 = 3)
07 (0 remainder, bring down next figure)
4 (7 ÷ 4 = 1 r 3)
3.0 (bring down 0 and the decimal point)
2.8 (7 × 4 = 28, 30 ÷ 4 = 7 r 2)
20 (an additional zero is brought down)
20 (5 × 4 = 20)
0
In this example, the decimal part of the result is calculated by continuing the process beyond the units digit, "bringing down" zeros as being the decimal part of the dividend.
This example also illustrates that, at the beginning of the process, a step that produces a zero can be omitted. Since the first digit 1 is less than the divisor 4, the first step is instead performed on the first two digits 12. Similarly, if the divisor were 13, one would perform the first step on 127 rather than 12 or 1.
Basic procedure for long division of n ÷ m
Find the location of all decimal points in the dividend n and divisor m.
If necessary, simplify the long division problem by moving the decimals of the divisor and dividend by the same number of decimal places, to the right (or to the left), so that the decimal of the divisor is to the right of the last digit.
When doing long division, keep the numbers lined up straight from top to bottom under the tableau.
After each step, be sure the remainder for that step is less than the divisor. If it is not, there are three possible problems: the multiplication is wrong, the subtraction is wrong, or a greater quotient is needed.
In the end, the remainder, r, is added to the growing quotient as a fraction, r/m.
Invariant property and correctness
The basic presentation of the steps of the process (above) focuses on what steps are to be performed, rather than on the properties of those steps that ensure the result will be correct (specifically, that q × m + r = n, where q is the final quotient and r the final remainder). A slight variation of presentation requires more writing, and requires that we change, rather than just update, digits of the quotient, but can shed more light on why these steps actually produce the right answer, by allowing evaluation of q × m + r at intermediate points in the process. This illustrates the key property used in the derivation of the algorithm (below).
Specifically, we amend the above basic procedure so that we fill the space after the digits of the quotient under construction with 0's, to at least the 1's place, and include those 0's in the numbers we write below the division bracket. This lets us maintain an invariant relation at every step: q × m + r = n, where q is the partially-constructed quotient (above the division bracket) and r the partially-constructed remainder (bottom number below the division bracket).
Note that, initially, q = 0 and r = n, so this property holds initially; the process reduces r and increases q with each step, eventually stopping when r < m if we seek the answer in quotient + integer remainder form.
Revisiting the 500 ÷ 4 example above, we find
125 (q, changes from 000 to 100 to 120 to 125 as per notes below)
4)500
400 ( 4 × 100 = 400)
100 (500 - 400 = 100; now q=100, r=100; note q×4+r = 500.)
80 ( 4 × 20 = 80)
20 (100 - 80 = 20; now q=120, r= 20; note q×4+r = 500.)
20 ( 4 × 5 = 20)
0 ( 20 - 20 = 0; now q=125, r= 0; note q×4+r = 500.)
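The procedure and its invariant are compact enough to sketch in code. The following Python sketch (function and variable names are illustrative, not taken from any library) brings down one digit of the dividend at a time; after each step, q × m + r equals the portion of the dividend processed so far, so q × m + r = n when the loop finishes:

def long_division(n, m):
    """Divide n by m digit by digit, returning (quotient, remainder)."""
    q = 0   # quotient built so far
    r = 0   # running remainder
    for digit in str(n):              # bring down one digit at a time
        r = r * 10 + int(digit)       # append the next dividend digit to the remainder
        d = r // m                    # largest single digit with d * m <= r
        q = q * 10 + d                # append that digit to the quotient
        r = r - d * m                 # subtract; q * m + r now equals the digits processed so far
    return q, r

assert long_division(500, 4) == (125, 0)
assert long_division(1260257, 37) == (34061, 0)

Because the running remainder is always less than m before a digit is appended, the appended value stays below 10 × m, so each step's quotient digit d is a single decimal digit, exactly as in the written tableau.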
Example with multi-digit divisor
A divisor of any number of digits can be used. In this example, 1260257 is to be divided by 37. First the problem is set up as follows:
37)1260257
Digits of the number 1260257 are taken until a number greater than or equal to 37 occurs. So 1 and 12 are less than 37, but 126 is greater. Next, the greatest multiple of 37 less than or equal to 126 is computed. So 3 × 37 = 111 < 126, but 4 × 37 > 126. The multiple 111 is written underneath the 126 and the 3 is written on the top where the solution will appear:
3
37)1260257
111
Note carefully which place-value column these digits are written into. The 3 in the quotient goes in the same column (ten-thousands place) as the 6 in the dividend 1260257, which is the same column as the last digit of 111.
The 111 is then subtracted from the line above, ignoring all digits to the right:
3
37)1260257
111
15
Now the digit from the next smaller place value of the dividend is copied down and appended to the result 15:
3
37)1260257
111
150
The process repeats: the greatest multiple of 37 less than or equal to 150 is subtracted. This is 148 = 4 × 37, so a 4 is added to the top as the next quotient digit. Then the result of the subtraction is extended by another digit taken from the dividend:
34
37)1260257
111
150
148
22
The greatest multiple of 37 less than or equal to 22 is 0 × 37 = 0. Subtracting 0 from 22 gives 22, so the subtraction step is often not written. Instead, we simply take another digit from the dividend:
340
37)1260257
111
150
148
225
The process is repeated until 37 divides the last line exactly:
34061
37)1260257
111
150
148
225
222
37
Mixed mode long division
For non-decimal currencies (such as the British £sd system before 1971) and measures (such as avoirdupois) mixed mode division must be used. Consider dividing 50 miles 600 yards into 37 pieces:
mi - yd - ft - in
1 - 634 1 9 r. 15"
37) 50 - 600 - 0 - 0
37 22880 66 348
13 23480 66 348
1760 222 37 333
22880 128 29 15
===== 111 348 ==
170 ===
148
22
66
==
Each of the four columns is worked in turn. Starting with the miles: 50/37 = 1 remainder 13. No further division is possible, so perform a long multiplication by 1,760 to convert miles to yards; the result is 22,880 yards. Carry this to the top of the yards column and add it to the 600 yards in the dividend, giving 23,480. Long division of 23,480 / 37 now proceeds as normal, yielding 634 with remainder 22. The remainder is multiplied by 3 to get feet and carried up to the feet column. Long division of the feet gives 1 remainder 29, which is then multiplied by twelve to get 348 inches. Long division continues with the final remainder of 15 inches being shown on the result line.
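The same column-by-column bookkeeping can be sketched in code: divide each unit, then convert its remainder into the next smaller unit before dividing again. A minimal Python sketch, assuming the conversion factors used in the text (1,760 yards per mile, 3 feet per yard, 12 inches per foot); the function name is illustrative:

def mixed_mode_divide(miles, yards, feet, inches, divisor):
    """Divide a miles-yards-feet-inches quantity by a whole number, long-division style."""
    quotient = []
    carry = 0
    for value, factor in ((miles, 1760), (yards, 3), (feet, 12), (inches, None)):
        value += carry                      # add the remainder carried down from the previous column
        quotient.append(value // divisor)   # this column's result
        carry = value % divisor
        if factor is not None:
            carry *= factor                 # convert the remainder to the next smaller unit
    return quotient, carry                  # final remainder is in inches

# 50 miles 600 yards divided into 37 pieces: 1 mi 634 yd 1 ft 9 in, remainder 15 in
assert mixed_mode_divide(50, 600, 0, 0, 37) == ([1, 634, 1, 9], 15)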
Interpretation of decimal results
When the quotient is not an integer and the division process is extended beyond the decimal point, one of two things can happen:
The process can terminate, which means that a remainder of 0 is reached; or
A remainder could be reached that is identical to a previous remainder that occurred after the decimal points were written. In the latter case, continuing the process would be pointless, because from that point onward the same sequence of digits would appear in the quotient over and over. So a bar is drawn over the repeating sequence to indicate that it repeats forever (i.e., every rational number is either a terminating or repeating decimal).
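This observation can be checked mechanically: the fractional digits come from repeatedly multiplying the remainder by ten, and the expansion repeats exactly when a remainder recurs. A short Python sketch of the idea (names and the digit limit are illustrative):

def decimal_expansion(dividend, divisor, max_digits=50):
    """Return (integer part, fractional digits, repeating cycle or None) of dividend/divisor."""
    integer_part, remainder = divmod(dividend, divisor)
    digits, seen = [], {}                  # seen maps each remainder to the position where it occurred
    while remainder and remainder not in seen and len(digits) < max_digits:
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(remainder // divisor)
        remainder %= divisor
    if remainder in seen:
        return integer_part, digits, digits[seen[remainder]:]   # block that repeats forever
    return integer_part, digits, None                           # terminated (or digit limit reached)

# 127/4 terminates as 31.75; 1/7 repeats with cycle 142857
assert decimal_expansion(127, 4) == (31, [7, 5], None)
assert decimal_expansion(1, 7)[2] == [1, 4, 2, 8, 5, 7]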
Notation in non-English-speaking countries
China, Japan, and Korea use the same notation as English-speaking nations, including India. Elsewhere, the same general principles are used, but the figures are often arranged differently.
Latin America
In Latin America (except Argentina, Bolivia, Mexico, Colombia, Paraguay, Venezuela, Uruguay and Brazil), the calculation is almost exactly the same, but is written down differently as shown below with the same two examples used above. Usually the quotient is written under a bar drawn under the divisor. A long vertical line is sometimes drawn to the right of the calculations.
500 ÷ 4 = 125 (Explanations)
4 ( 4 × 1 = 4)
10 ( 5 - 4 = 1)
8 ( 4 × 2 = 8)
20 (10 - 8 = 2)
20 ( 4 × 5 = 20)
0 (20 - 20 = 0)
and
127 ÷ 4 = 31.75
124
30 (bring down 0; decimal to quotient)
28 (7 × 4 = 28)
20 (an additional zero is added)
20 (5 × 4 = 20)
0
In Mexico, the English-speaking world notation is used, except that only the result of the subtraction is annotated and the calculation is done mentally, as shown below:
125 (Explanations)
4)500
10 ( 5 - 4 = 1)
20 (10 - 8 = 2)
0 (20 - 20 = 0)
In Bolivia, Brazil, Paraguay, Venezuela, French-speaking Canada, Colombia, and Peru, the European notation (see below) is used, except that the quotient is not separated by a vertical line, as shown below:
127|4
−124 31,75
30
−28
20
−20
0
Same procedure applies in Mexico, Uruguay and Argentina, only the result of the subtraction is annotated and the calculation is done mentally.
Eurasia
In Spain, Italy, France, Portugal, Lithuania, Romania, Turkey, Greece, Belgium, Belarus, Ukraine, and Russia, the divisor is to the right of the dividend, and separated by a vertical bar. The division also occurs in the column, but the quotient (result) is written below the divider, and separated by the horizontal line. The same method is used in Iran, Vietnam, and Mongolia.
127|4
−124|31,75
30
−28
20
−20
0
In Cyprus, as well as in France, a long vertical bar separates the dividend and subsequent subtractions from the quotient and divisor, as in the example below of 6359 divided by 17, which is 374 with a remainder of 1.
6359|17
−51 |374
125 |
−119 |
69|
−68|
1|
Decimal numbers are not divided directly; instead, the dividend and divisor are multiplied by a power of ten so that the division involves two whole numbers. Therefore, if one were dividing 12,7 by 0,4 (commas being used instead of decimal points), the dividend and divisor would first be changed to 127 and 4, and then the division would proceed as above.
In Austria, Germany and Switzerland, the notational form of a normal equation is used. <dividend> : <divisor> = <quotient>, with the colon ":" denoting a binary infix symbol for the division operator (analogous to "/" or "÷"). In these regions the decimal separator is written as a comma. (cf. first section of Latin American countries above, where it's done virtually the same way):
127 : 4 = 31,75
−12
07
−4
30
−28
20
−20
0
The same notation is adopted in Denmark, Norway, Bulgaria, North Macedonia, Poland, Croatia, Slovenia, Hungary, Czech Republic, Slovakia, Vietnam and in Serbia.
In the Netherlands, the following notation is used:
12 / 135 \ 11,25
12
15
12
30
24
60
60
0
In Finland, the Italian method detailed above was replaced by the Anglo-American one in the 1970s. In the early 2000s, however, some textbooks have adopted the German method as it retains the order between the divisor and the dividend.
Algorithm for arbitrary base
Every natural number $n$ can be uniquely represented in an arbitrary number base $b > 1$ as a sequence of digits $n = \alpha_0\alpha_1\alpha_2\dots\alpha_{k-1}$ where $0 \le \alpha_i < b$ for all $0 \le i < k$, where $k$ is the number of digits in $n$. The value of $n$ in terms of its digits and the base is
$n = \sum_{i=0}^{k-1} \alpha_i b^{k-1-i}$
Let $n$ be the dividend and $m$ be the divisor, where $l$ is the number of digits in $m$. If $k < l$, then quotient $q = 0$ and remainder $r = n$. Otherwise, we iterate from $i = 0$ to $i = k - l$, before stopping.
For each iteration $i$, let $q_i$ be the quotient extracted so far, $d_i$ be the intermediate dividend, $r_i$ be the intermediate remainder, $\alpha_{i+l-1}$ be the next digit of the original dividend, and $\beta_i$ be the next digit of the quotient. By definition of digits in base $b$, $0 \le \beta_i < b$. By definition of remainder, $0 \le r_i < m$. All values are natural numbers. We initiate
$q_{-1} = 0$ and $r_{-1} = \sum_{i=0}^{l-2} \alpha_i b^{l-2-i}$,
the first $l - 1$ digits of $n$.
With every iteration, the three equations are true:
$d_i = b\,r_{i-1} + \alpha_{i+l-1}$
$r_i = d_i - m\,\beta_i$
$q_i = b\,q_{i-1} + \beta_i$
There only exists one such $\beta_i$ such that $0 \le r_i < m$.
The final quotient is $q = q_{k-l}$ and the final remainder is $r = r_{k-l}$.
Examples
In base 10, using the example above with $n = 1260257$ and $m = 37$, the initial values are $q_{-1} = 0$ and $r_{-1} = 1$.
Thus, $q = 34061$ and $r = 0$.
In base 16, with $n = \mathrm{f412df}$ and $m = 12$, the initial values are $q_{-1} = 0$ and $r_{-1} = \mathrm{f}$.
Thus, $q = \mathrm{d8f45}$ and $r = 5$.
If one doesn't have the addition, subtraction, or multiplication tables for base $b$ memorised, then this algorithm still works if the numbers are converted to decimal and at the end are converted back to base $b$. For example, with the above example,
$\mathrm{f412df}_{16} = 15995615$
and
$12_{16} = 18$,
with $b = 16$. The initial values are $q_{-1} = 0$ and $r_{-1} = 15$.
Thus, $q = 888645 = \mathrm{d8f45}_{16}$ and $r = 5$.
This algorithm can be done using the same kind of pencil-and-paper notations as shown in above sections.
d8f45 r. 5
12 ) f412df
ea
a1
90
112
10e
4d
48
5f
5a
5
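A direct transcription of the digit recurrences above can be sketched in Python; the digit alphabet and function name below are illustrative assumptions, and the example reproduces the division just shown, f412df ÷ 12 in base 16:

DIGITS = "0123456789abcdef"

def long_division_base(n_digits, m, b):
    """Digit-by-digit division of the base-b numeral n_digits by the integer m (given in base 10)."""
    q, r = 0, 0
    quotient_digits = []
    for ch in n_digits:
        d = b * r + DIGITS.index(ch)          # intermediate dividend d_i
        beta = d // m                         # the unique digit with 0 <= d - m*beta < m
        r = d - m * beta                      # intermediate remainder r_i
        q = b * q + beta                      # partial quotient q_i
        quotient_digits.append(DIGITS[beta])
    return "".join(quotient_digits).lstrip("0") or "0", q, r

# f412df (base 16) divided by 12 (base 16, i.e. 18 in base 10) gives d8f45 remainder 5
digits_str, q, r = long_division_base("f412df", 0x12, 16)
assert (digits_str, q, r) == ("d8f45", 888645, 5)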
Rational quotients
If the quotient is not constrained to be an integer, then the algorithm does not terminate for $i > k - l$. Instead, if $i > k - l$ then $\alpha_{i+l-1} = 0$ by definition. If the remainder $r_i$ is equal to zero at any iteration, then the quotient is a $b$-adic fraction, and is represented as a finite decimal expansion in base $b$ positional notation. Otherwise, it is still a rational number but not a $b$-adic rational, and is instead represented as an infinite repeating decimal expansion in base $b$ positional notation.
Binary division
Performance
On each iteration, the most time-consuming task is to select $\beta_i$. We know that there are $b$ possible values, so we can find $\beta_i$ using $O(\log b)$ comparisons. Each comparison will require evaluating $d_i - m\beta_i$. Let $k$ be the number of digits in the dividend $n$ and $l$ be the number of digits in the divisor $m$. The number of digits in $d_i$ is at most $l + 1$. The multiplication of $m\beta_i$ is therefore $O(l)$, and likewise the subtraction of $d_i - m\beta_i$. Thus it takes $O(l \log b)$ to select $\beta_i$. The remainder of the algorithm consists of addition and the digit-shifting of $q_i$ and $r_i$ to the left by one digit, and so takes time $O(k)$ and $O(l)$ in base $b$, so each iteration takes $O(l \log b + k + l)$, or just $O(l \log b + k)$. For all $k - l + 1$ digits, the algorithm takes time $O((k - l + 1)(l \log b + k))$, or $O(lk \log b + k^2)$, in base $b$.
Generalizations
Rational numbers
Long division of integers can easily be extended to include non-integer dividends, as long as they are rational. This is because every rational number has a recurring decimal expansion. The procedure can also be extended to include divisors which have a finite or terminating decimal expansion (i.e. decimal fractions). In this case the procedure involves multiplying the divisor and dividend by the appropriate power of ten so that the new divisor is an integer – taking advantage of the fact that a ÷ b = (ca) ÷ (cb) – and then proceeding as above.
Polynomials
A generalised version of this method called polynomial long division is also used for dividing polynomials (sometimes using a shorthand version called synthetic division).
See also
Algorism
Arbitrary-precision arithmetic
Egyptian multiplication and division
Elementary arithmetic
Fourier division
Polynomial long division
Short division
References
External links
Long Division Algorithm
Long Division and Euclid's Lemma
Algorithms
Computer arithmetic algorithms
Digit-by-digit algorithms
Division (mathematics) | Long division | [
"Mathematics"
] | 4,541 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
313,398 | https://en.wikipedia.org/wiki/Wingtip%20device | Wingtip devices are intended to improve the efficiency of fixed-wing aircraft by reducing drag. Although there are several types of wing tip devices which function in different manners, their intended effect is always to reduce an aircraft's drag. Wingtip devices can also improve aircraft handling characteristics and enhance safety for following aircraft. Such devices increase the effective aspect ratio of a wing without greatly increasing the wingspan. Extending the span would lower lift-induced drag, but would increase parasitic drag and would require boosting the strength and weight of the wing. At some point, there is no net benefit from further increased span. There may also be operational considerations that limit the allowable wingspan (e.g., available width at airport gates).
Wingtip devices help prevent the flow around the wingtip of higher pressure air under the wing flowing to the lower pressure surface on top at the wingtip, which results in a vortex caused by the forward motion of the aircraft. Winglets also reduce the lift-induced drag caused by wingtip vortices and improve lift-to-drag ratio. This increases fuel efficiency in powered aircraft and increases cross-country speed in gliders, in both cases increasing range. U.S. Air Force studies indicate that a given improvement in fuel efficiency correlates directly with the causal increase in the aircraft's lift-to-drag ratio.
Early history
Wing end-plates
The initial concept dates back to 1897, when English engineer Frederick W. Lanchester patented wing end-plates as a method for controlling wingtip vortices. In the United States, Scottish-born engineer William E. Somerville patented the first functional winglets in 1910. Somerville installed the devices on his early biplane and monoplane designs. Vincent Burnelli received US Patent no: 1,774,474 for his "Airfoil Control Means" on August 26, 1930.
Simple flat end-plates did not cause a reduction in drag, because the increase in profile drag was greater than the decrease in induced drag.
Hoerner wing tips
Following the end of World War II, Dr. Sighard F. Hoerner was a pioneer researcher in the field, having written a technical paper published in 1952 that called for drooped wingtips whose pointed rear tips focused the resulting wingtip vortex away from the upper wing surface. Drooped wingtips are often called "Hoerner tips" in his honor. Gliders and light aircraft have made use of Hoerner tips for many years.
The earliest-known implementation of a Hoerner-style downward-angled "wingtip device" on a jet aircraft was during World War II. This was the so-called "Lippisch-Ohren" (Lippisch-ears), allegedly attributed to the Messerschmitt Me 163's designer Alexander Lippisch, and first added to the M3 and M4 third and fourth prototypes of the Heinkel He 162A Spatz jet light fighter for evaluation. This addition was done in order to counteract the dutch roll characteristic present in the original He 162 design, related to its wings having a marked dihedral angle. This became a standard feature of the approximately 320 completed He 162A jet fighters built, with hundreds more He 162A airframes going unfinished by V-E Day.
Winglet
The term "winglet" was previously used to describe an additional lifting surface on an aircraft, like a short section between wheels on fixed undercarriage. Richard Whitcomb's research in the 1970s at NASA first used winglet with its modern meaning referring to near-vertical extension of the wing tips. The upward angle (or cant) of the winglet, its inward or outward angle (or toe), as well as its size and shape are critical for correct performance and are unique in each application. The wingtip vortex, which rotates around from below the wing, strikes the cambered surface of the winglet, generating a force that angles inward and slightly forward, analogous to a sailboat sailing close hauled. The winglet converts some of the otherwise-wasted energy in the wingtip vortex to an apparent thrust. This small contribution can be worthwhile over the aircraft's lifetime, provided the benefit offsets the cost of installing and maintaining the winglets.
Another potential benefit of winglets is that they reduce the intensity of wake vortices. Those trail behind the plane and pose a hazard to other aircraft. Minimum spacing requirements between aircraft operations at airports are largely dictated by these factors. Aircraft are classified by weight (e.g. "Light", "Heavy", etc.) because the vortex strength grows with the aircraft lift coefficient, and thus, the associated turbulence is greatest at low speed and high weight, which produced a high angle of attack.
Winglets and wingtip fences also increase efficiency by reducing vortex interference with laminar airflow near the tips of the wing, by 'moving' the confluence of low-pressure (over wing) and high-pressure (under wing) air away from the surface of the wing. Wingtip vortices create turbulence, originating at the leading edge of the wingtip and propagating backwards and inboard. This turbulence 'delaminates' the airflow over a small triangular section of the outboard wing, which destroys lift in that area. The fence/winglet drives the area where the vortex forms upward away from the wing surface, since the center of the resulting vortex is now at the tip of the winglet.
The fuel economy improvement from winglets increases with the mission length. Blended winglets allow a steeper angle of attack reducing takeoff distance.
Early development
Richard T. Whitcomb, an engineer at NASA's Langley Research Center, further developed Hoerner's concept in response to the sharp increase in the cost of fuel after the 1973 oil crisis. With careful aeronautical design he showed that, for a given bending moment, a near-vertical winglet offers a greater drag reduction compared to a horizontal span extension. Whitcomb's designs were flight-tested in 1979–80 by a joint NASA/Air Force team, using a KC-135 Stratotanker based at the Dryden Flight Research Center. A Lockheed L-1011 and McDonnell Douglas DC-10 were also used for testing, and the latter design was directly implemented by McDonnell Douglas on the derivative MD-11, which was rolled out in 1990.
In May 1983, a high school student at Bowie High School in Maryland won a grand prize at the 34th International Science and Engineering Fair in Albuquerque, New Mexico for the result of his research on wingtip devices to reduce drag. The same month, he filed a U.S. patent for "wingtip airfoils", published in 1986.
Applications
NASA
NASA's most notable application of wingtip devices is on the Boeing 747 Shuttle Carrier Aircraft. Located on the 747's horizontal stabilizers, the devices increase the tailplane's effectiveness under the weight of the Space Shuttle orbiter, though these were more for directional stability than for drag reduction.
Business aircraft
Learjet exhibited the prototype Learjet 28 at the 1977 National Business Aviation Association convention. It employed the first winglets ever used on a production aircraft, either civilian or military. Learjet developed the winglet design without NASA assistance. Although the Model 28 was intended to be a prototype experimental aircraft, performance was such that it resulted in a production commitment from Learjet. Flight tests showed that the winglets increased range by about 6.5 percent and improved directional stability. Learjet's application of winglets to production aircraft continued with newer models including the Learjet 55, 31, 60, 45, and Learjet 40.
Gulfstream Aerospace explored winglets in the late 1970s and incorporated winglets in the Gulfstream III, Gulfstream IV and Gulfstream V. The Gulfstream V's range allows nonstop routes such as New York–Tokyo, and it holds over 70 world and national flight records.
The Rutan combined winglets-vertical stabilizer appeared on his Beechcraft Starship business aircraft design that first flew in 1986.
Winglets are also applied to other business aircraft, reducing take-off distance to operate from smaller airports, and allowing higher cruise altitudes. Alongside winglets on new designs, aftermarket vendors developed retrofits. Winglet Technology, LLC of Wichita, Kansas planned to test its elliptical winglets, designed to increase payload-range on hot and high departures, as a retrofit for the Citation X.
Experimental
Conventional winglets were fitted to Rutan's Rutan Voyager, the first aircraft to circumnavigate the world without refueling in 1986. The aircraft's wingtips were damaged, however, when they dragged along the runway during takeoff, removing about from each wingtip, so the flight was made without benefit of winglets.
Airliner fuel efficiency
The average commercial jet sees a 4-6 percent increase in fuel efficiency and as much as a 6% decrease in in-flight noise from the use of winglets. Actual fuel savings and the related carbon output can vary significantly by plane, route and flight conditions.
Wingtip fence
A wingtip fence refers to the winglets including surfaces extending both above and below the wingtip, as described in Whitcomb's early research. Both surfaces are shorter than or equivalent to a winglet possessing similar aerodynamic benefits. The Airbus A310-300 was the first airliner with wingtip fences in 1985. Other Airbus models followed with the A300-600, the A320ceo, and the A380. Other Airbus models including the Airbus A320 Enhanced, A320neo, A350 and A330neo have blended winglets rather than wingtip fences. The Antonov An-158 uses wingtip fences.
Canted winglets
Boeing announced a new version of the 747, the 747-400, in 1985, with an extended range and capacity, using a combination of winglets and increased span to carry the additional load. The winglets increased the 747-400's range by 3.5% over the 747-300, which is otherwise aerodynamically identical but has no winglets. The 747-400D variant lacks the wingtip extensions and winglets included on other 747-400s since winglets would provide minimal benefits on short-haul routes while adding extra weight and cost, although the -400D may be converted to the long-range version if needed. Winglets are preferred for Boeing derivative designs based on existing platforms, because they allow maximum re-use of existing components. Newer designs are favoring increased span, other wingtip devices or a combination of both, whenever possible.
The Ilyushin Il-96 was the first Russian and modern jet to feature winglets in 1988. The Bombardier CRJ-100/200 was the first regional airliner to feature winglets in 1992. The A340/A330 followed with canted winglets in 1993/1994. The Tupolev Tu-204 was the first narrowbody aircraft to feature winglets in 1994. The Airbus A220 (née CSeries), from 2016, has canted winglets.
Blended winglets
A blended winglet is attached to the wing with a smooth curve instead of a sharp angle and is intended to reduce interference drag at the wing/winglet junction. A sharp interior angle in this region can interact with the boundary layer flow causing a drag inducing vortex, negating some of the benefit of the winglet. Seattle-based Aviation Partners develops blended winglets as retrofits for the Gulfstream II, Hawker 800 and the Falcon 2000.
On February 18, 2000, blended winglets were announced as an option for the Boeing 737-800; the first shipset was installed on 14 February 2001 and entered revenue service with Hapag-Lloyd Flug on 8 May 2001. The Aviation Partners/Boeing extensions decrease fuel consumption by 4% for long-range flights and increase range by for the 737-800 or the derivative Boeing Business Jet as standard. The winglets are also offered for the 737 Classic, and many operators have retrofitted their fleets with them for the fuel savings. Aviation Partners Boeing also offers blended winglets for the 757 and 767-300ER. In 2006 Airbus tested two candidate blended winglets, designed by Winglet Technology and Airbus for the Airbus A320 family. In 2009 Airbus launched its "Sharklet" blended winglet, designed to enhance the payload-range of its A320 family and reduce fuel burn by up to 4% over longer sectors. This corresponds to an annual CO2 reduction of 700 tonnes per aircraft. The A320s fitted with Sharklets were delivered beginning in 2012. They are used on the A320neo, the A330neo and the A350. They are also offered as a retrofit option.
Raked wingtip
Raked wingtips, where the tip has a greater wing sweep than the rest of the wing, are featured on some Boeing Commercial Airplanes to improve fuel efficiency, takeoff and climb performance. Like winglets, they increase the effective wing aspect ratio and diminish wingtip vortices, decreasing lift-induced drag. In testing by Boeing and NASA, they reduce drag by as much as 5.5%, compared to 3.5% to 4.5% for conventional winglets. While an increase in span would be more effective than a same-length winglet, its bending moment is greater. A winglet gives much of the performance gain of a span increase while producing less bending force than a span extension of the same length.
Raked wingtips offer several weight-reduction advantages relative to simply extending the conventional main wingspan. At high load-factor structural design conditions, the smaller chords of the wingtip are subjected to less load, and they result in less induced loading on the outboard main wing. Additionally, the leading-edge sweep results in the center of pressure being located farther aft than for simple extensions of the span of conventional main wings. At high load factors, this relative aft location of the center of pressure causes the raked wingtip to be twisted more leading-edge down, reducing the bending moment on the inboard wing. However, the relative aft-movement of the center of pressure accentuates flutter.
Raked wingtips are installed on the Boeing 767-400ER (first flight on October 9, 1999), all generations of Boeing 777 (June 12, 1994) including the upcoming 777X, the 737-derived Boeing P-8 Poseidon (25 April 2009), all variants of the Boeing 787 (December 15, 2009) (the cancelled Boeing 787-3 would have had a wingspan to fit in ICAO Aerodrome Reference Code D, as its wingspan was decreased by using blended winglets instead of raked wingtips), and the Boeing 747-8 (February 8, 2010). The Embraer E-jet E2 and C-390 Millennium wings also have raked wingtips.
Split-tip
The McDonnell Douglas MD-11 was the first aircraft with split-tip winglets in 1990.
For the 737 Next Generation, third-party vendor Aviation Partners has introduced a similar design to the 737 MAX wingtip device known as the split scimitar winglet, with United Airlines as the launch customer.
The Boeing 737 MAX uses a new type of wingtip device. Resembling a three-way hybrid of a winglet, wingtip fence, and raked wingtip, Boeing claims that this new design should deliver an additional 1.5% improvement in fuel economy over the 10-12% improvement already expected from the 737 MAX.
Gliders
In 1987, mechanical engineer Peter Masak called on aerodynamicist Mark D. Maughmer, an associate professor of aerospace engineering at the Pennsylvania State University, about designing winglets to improve performance on his wingspan racing sailplane. Others had attempted to apply Whitcomb's winglets to gliders before, and they did improve climb performance, but this did not offset the parasitic drag penalty in high-speed cruise. Masak was convinced it was possible to overcome this hurdle. By trial and error, they ultimately developed successful winglet designs for gliding competitions, using a new PSU–90–125 airfoil, designed by Maughmer specifically for the winglet application. At the 1991 World Gliding Championships in Uvalde, Texas, the trophy for the highest speed went to a winglet-equipped 15-meter class limited wingspan glider, exceeding the highest speed in the unlimited span Open Class, an exceptional result. Masak went on to win the 1993 U.S. 15 Meter Nationals gliding competition, using winglets on his prototype Masak Scimitar.
The Masak winglets were originally retrofitted to production sailplanes, but within 10 years of their introduction, most high-performance gliders were equipped from the factory with winglets or other wingtip devices. It took over a decade for winglets to first appear on a production airliner, the original application that was the focus of the NASA development. Yet, once the advantages of winglets were proven in competition, adoption was swift with gliders. The point difference between the winner and the runner-up in soaring competition is often less than one percent, so even a small improvement in efficiency is a significant competitive advantage. Many non-competition pilots fitted winglets for handling benefits such as increased roll rate and roll authority and reduced tendency for wing tip stall. The benefits are notable, because sailplane winglets must be removable to allow the glider to be stored in a trailer, so they are usually installed only at the pilot's preference.
The Glaser-Dirks DG-303, an early glider derivative design, incorporating winglets as factory standard equipment.
Non-planar wingtip
Aviation Partners developed and flight tested a closed-surface Spiroid winglet on a Falcon 50 in 2010.
Non-planar wingtips are normally angled upwards in a polyhedral wing configuration, increasing the local dihedral near the wing tip, with polyhedral wing designs themselves having been popular on free-flight model aircraft designs for decades. Non-planar wingtips provide the wake control benefit of winglets, with less parasitic drag penalty, if designed carefully. The non-planar wing tip is often swept back like a raked wingtip and may also be combined with a winglet. A winglet is also a special case of a non-planar wingtip.
Aircraft designers employed mostly planar wing designs with simple dihedral after World War II, prior to the introduction of winglets. With the wide acceptance of winglets in new sailplane designs of the 1990s, designers sought to further optimize the aerodynamic performance of their wingtip designs. Glider winglets were originally retrofitted directly to planar wings, with only a small, nearly right-angle, transition area. Once the performance of the winglet itself was optimized, attention was turned to the transition between the wing and winglet. A common application was tapering the transition area from the wing tip chord to the winglet chord and raking the transition area back, to place the winglet in the optimal position. If the tapered portion was canted upward, the winglet height could also be reduced. Eventually, designers employed multiple non-planar sections, each canting up at a greater angle, dispensing with the winglets entirely.
The Schempp-Hirth Discus-2 and Schempp-Hirth Duo Discus use non-planar wingtips.
Active wingtip device
Tamarack Aerospace Group, a company founded in 2010 by aerospace structural engineer Nicholas Guida, has patented an Active Technology Load Alleviation System (ATLAS), a modified version of a wingtip device. The system uses Tamarack Active Camber Surfaces (TACS) to aerodynamically "switch off" the effects of the wingtip device when the aircraft is experiencing high-g events such as large gusts or severe pull-ups. TACS are movable panels, similar to flaps or ailerons, on the trailing edge of the wing extension. The system is controlled by the aircraft's electrical system and a high-speed servo which is activated when the aircraft senses an oncoming stress event, essentially simulating an actuating wingtip. However, the wingtip itself is fixed and the TACS are the only moving part of the wingtip system. Tamarack first introduced ATLAS for the Cessna Citation family aircraft, and it has been certified for use by the Federal Aviation Administration and European Union Aviation Safety Agency.
By December 2024, Tamarack Aerospace had installed 200 Active Winglet systems on CitationJet airplanes.
Actuating wingtip device
There has been research into actuating wingtip devices, including a filed patent application, though no aircraft currently uses this feature as described. The XB-70 Valkyrie's wingtips were capable of drooping downward in flight, to facilitate Mach 3 flight using waveriding.
Use on rotating blades
Wingtip devices are also used on rotating propeller, helicopter rotor, and wind turbine blades to reduce drag, reduce diameter, reduce noise and/or improve efficiency. By reducing aircraft blade tip vortices interacting with the ground surface during taxiing, takeoff, and hover, these devices can reduce damage from dirt and small stones picked up in the vortices.
Rotorcraft applications
The main rotor blades of the AgustaWestland AW101 (formerly the EH101) have a distinctive tip shape; pilots have found that this rotor design alters the downwash field and reduces brownout which limits visibility in dusty areas and leads to accidents.
Propeller applications
Hartzell Propeller developed their "Q-tip" propeller used on the Piper PA-42 Cheyenne and several other fixed-wing aircraft types by bending the blade tips back at a 90-degree angle to get the same thrust from a reduced diameter propeller disk; the reduced propeller tip speed reduces noise, according to the manufacturer. Modern scimitar propellers have increased sweepback at the tips, resembling a raked tip on an aircraft wing.
Other applications
Some ceiling fans have wingtip devices. Fan manufacturer Big Ass Fans has claimed that their Isis fan, equipped with wingtip devices, has superior efficiency. However, for certain high-volume, low-speed designs, wingtip devices may not improve efficiency.
Another application of the same principle was introduced to the keel of the "America's Cup"- winning Australian yacht Australia II of 1982, designed by Ben Lexcen.
References
External links
Aircraft aerodynamics
Aircraft configurations
Aircraft wing components
NASA spin-off technologies | Wingtip device | [
"Engineering"
] | 4,586 | [
"Aircraft configurations",
"Aerospace engineering"
] |
313,416 | https://en.wikipedia.org/wiki/Spectrum%20analyzer | A spectrum analyzer measures the magnitude of an input signal versus frequency within the full frequency range of the instrument. The primary use is to measure the power of the spectrum of known and unknown signals. The input signal that most common spectrum analyzers measure is electrical; however, spectral compositions of other signals, such as acoustic pressure waves and optical light waves, can be considered through the use of an appropriate transducer. Spectrum analyzers for other types of signals also exist, such as optical spectrum analyzers which use direct optical techniques such as a monochromator to make measurements.
By analyzing the spectra of electrical signals, dominant frequency, power, distortion, harmonics, bandwidth, and other spectral components of a signal can be observed that are not easily detectable in time domain waveforms. These parameters are useful in the characterization of electronic devices, such as wireless transmitters.
The display of a spectrum analyzer has frequency displayed on the horizontal axis and the amplitude on the vertical axis. To the casual observer, a spectrum analyzer looks like an oscilloscope, which plots amplitude on the vertical axis but time on the horizontal axis. In fact, some lab instruments can function either as an oscilloscope or a spectrum analyzer.
History
The first spectrum analyzers, in the 1960s, were swept-tuned instruments.
Following the discovery of the fast Fourier transform (FFT) in 1965, the first FFT-based analyzers were introduced in 1967.
Today, there are three basic types of analyzer: the swept-tuned spectrum analyzer, the vector signal analyzer, and the real-time spectrum analyzer.
Types
Spectrum analyzer types are distinguished by the methods used to obtain the spectrum of a signal. There are swept-tuned and fast Fourier transform (FFT) based spectrum analyzers:
A swept-tuned analyzer uses a superheterodyne receiver to down-convert a portion of the input signal spectrum to the center frequency of a narrow band-pass filter, whose instantaneous output power is recorded or displayed as a function of time. By sweeping the receiver's center-frequency (using a voltage-controlled oscillator) through a range of frequencies, the output is also a function of frequency. But while the sweep centers on any particular frequency, it may be missing short-duration events at other frequencies.
An FFT analyzer computes a time-sequence of periodograms. FFT refers to a particular mathematical algorithm used in the process. This is commonly used in conjunction with a receiver and analog-to-digital converter. As above, the receiver reduces the center-frequency of a portion of the input signal spectrum, but the portion is not swept. The purpose of the receiver is to reduce the sampling rate that the analyzer must contend with. With a sufficiently low sample-rate, FFT analyzers can process all the samples (100% duty-cycle), and are therefore able to avoid missing short-duration events.
Form factor
Spectrum analyzers tend to fall into four form factors: benchtop, portable, handheld and networked.
Benchtop
This form factor is useful for applications where the spectrum analyzer can be plugged into AC power, which generally means in a lab environment or production/manufacturing area. Bench top spectrum analyzers have historically offered better performance and specifications than the portable or handheld form factor. Bench top spectrum analyzers normally have multiple fans (with associated vents) to dissipate heat produced by the processor. Due to their architecture, bench top spectrum analyzers typically weigh more than . Some bench top spectrum analyzers offer optional battery packs, allowing them to be used away from AC power. This type of analyzer is often referred to as a "portable" spectrum analyzer.
Portable
This form factor is useful for any applications where the spectrum analyzer needs to be taken outside to make measurements or simply carried while in use. Attributes that contribute to a useful portable spectrum analyzer include:
Optional battery-powered operation to allow the user to move freely outside.
Clearly viewable display to allow the screen to be read in bright sunlight, darkness or dusty conditions.
Light weight (usually less than ).
Handheld
This form factor is useful for any application where the spectrum analyzer needs to be very light and small. Handheld analyzers usually offer a limited capability relative to larger systems. Attributes that contribute to a useful handheld spectrum analyzer include:
Very low power consumption.
Battery-powered operation while in the field to allow the user to move freely outside.
Very small size
Light weight (usually less than ).
Networked
This form factor does not include a display and these devices are designed to enable a new class of geographically-distributed spectrum monitoring and analysis applications. The key attribute is the ability to connect the analyzer to a network and monitor such devices across a network. While many spectrum analyzers have an Ethernet port for control, they typically lack efficient data transfer mechanisms and are too bulky or expensive to be deployed in such a distributed manner. Key applications for such devices include RF intrusion detection systems for secure facilities where wireless signaling is prohibited. As well cellular operators are using such analyzers to remotely monitor interference in licensed spectral bands. The distributed nature of such devices enable geo-location of transmitters, spectrum monitoring for dynamic spectrum access and many other such applications.
Key attributes of such devices include:
Network-efficient data transfer
Low power consumption
The ability to synchronize data captures across a network of analyzers
Low cost to enable mass deployment.
Theory of operation
Swept-tuned
As discussed above in types, a swept-tuned spectrum analyzer down-converts a portion of the input signal spectrum to the center frequency of a band-pass filter by sweeping the voltage-controlled oscillator through a range of frequencies, enabling the consideration of the full frequency range of the instrument.
The bandwidth of the band-pass filter dictates the resolution bandwidth, which is related to the minimum bandwidth detectable by the instrument. As demonstrated by the animation to the right, the smaller the bandwidth, the more spectral resolution. However, there is a trade-off between how quickly the display can update the full frequency span under consideration and the frequency resolution, which is relevant for distinguishing frequency components that are close together. For a swept-tuned architecture, this relation for sweep time is useful:
$\mathrm{ST} = \dfrac{k \cdot \mathrm{Span}}{\mathrm{RBW}^2}$
where ST is sweep time in seconds, k is a proportionality constant, Span is the frequency range under consideration in hertz, and RBW is the resolution bandwidth in hertz.
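As a quick numerical illustration (the proportionality constant and the span and bandwidth values below are assumptions for the example, not figures for any particular instrument), the quadratic dependence on RBW means that halving the resolution bandwidth quadruples the sweep time:

def sweep_time(span_hz, rbw_hz, k=2.5):
    """Swept-tuned sweep time ST = k * Span / RBW**2; k is an assumed proportionality constant."""
    return k * span_hz / rbw_hz ** 2

# Halving the resolution bandwidth quadruples the sweep time for the same span.
print(sweep_time(1e6, 10e3))   # 1 MHz span, 10 kHz RBW -> 0.025 s
print(sweep_time(1e6, 5e3))    # same span, 5 kHz RBW   -> 0.1 s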
Sweeping too fast, however, causes a drop in displayed amplitude and a shift in the displayed frequency.
Also, the animation contains both up- and down-converted spectra, which is due to a frequency mixer producing both sum and difference frequencies. The local oscillator feedthrough is due to the imperfect isolation from the IF signal path in the mixer.
For very weak signals, a pre-amplifier is used, although harmonic and intermodulation distortion may lead to the creation of new frequency components that were not present in the original signal.
FFT-based
With an FFT based spectrum analyzer, the frequency resolution is 1/T, the inverse of the time T over which the waveform is measured and Fourier transformed.
With Fourier transform analysis in a digital spectrum analyzer, it is necessary to sample the input signal with a sampling frequency that is at least twice the bandwidth of the signal, due to the Nyquist limit. A Fourier transform will then produce a spectrum containing all frequencies from zero to half the sampling frequency. This can place considerable demands on the required analog-to-digital converter and processing power for the Fourier transform, making FFT based spectrum analyzers limited in frequency range.
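A minimal NumPy sketch of these relationships, with arbitrary illustrative values: sampling for T seconds at rate fs gives FFT bins spaced 1/T apart, covering frequencies from zero up to half the sampling frequency:

import numpy as np

fs = 1000.0                    # sampling frequency in Hz (must be at least twice the signal bandwidth)
T = 0.5                        # measurement time in seconds
t = np.arange(0, T, 1 / fs)
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 104 * t)

window = np.hanning(len(x))                  # window to reduce spectral leakage
spectrum = np.abs(np.fft.rfft(x * window))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)    # bins from 0 to fs/2, spaced 1/T = 2 Hz apart

print(freqs[1] - freqs[0])                   # frequency resolution: 2.0 Hz
print(freqs[np.argmax(spectrum)])            # strongest component is near 100 Hz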
Hybrid superheterodyne-FFT
Since FFT based analyzers are only capable of considering narrow bands, one technique is to combine swept and FFT analysis for consideration of wide and narrow spans. This technique allows for faster sweep time.
This method is made possible by first down converting the signal, then digitizing the intermediate frequency and using superheterodyne or FFT techniques to acquire the spectrum.
One benefit of digitizing the intermediate frequency is the ability to use digital filters, which have a range of advantages over analog filters such as near perfect shape factors and improved filter settling time. Also, for consideration of narrow spans, the FFT can be used to increase sweep time without distorting the displayed spectrum.
Realtime FFT
A realtime spectrum analyser does not have any blind time—up to some maximum span, often called the "realtime bandwidth". The analyser is able to sample the incoming RF spectrum in the time domain and convert the information to the frequency domain using the FFT process. FFTs are processed in parallel, gaplessly and overlapped, so there are no gaps in the calculated RF spectrum and no information is missed.
Online realtime and offline realtime
In a sense, any spectrum analyzer that has vector signal analyzer capability is a realtime analyzer. It samples data fast enough to satisfy the Nyquist sampling theorem and stores the data in memory for later processing. This kind of analyser is only realtime for the amount of data / capture time it can store in memory, and it still produces gaps in the spectrum and results during processing time.
FFT overlapping
Minimizing distortion of information is important in all spectrum analyzers. The FFT process applies windowing techniques to improve the output spectrum by producing lower side lobes. The effect of windowing may also reduce the level of a signal where it is captured on the boundary between one FFT and the next. For this reason, FFTs in a realtime spectrum analyzer are overlapped. The overlap rate is approximately 80%. An analyzer that utilises a 1024-point FFT process will re-use approximately 819 samples from the previous FFT process.
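The idea can be sketched as follows (a simplified NumPy illustration, not the processing chain of any particular analyzer): with a 1024-point FFT and 80% overlap, each new frame re-uses roughly 819 of the previous frame's samples and advances by about 205 samples:

import numpy as np

def overlapped_spectra(samples, fft_size=1024, overlap=0.8):
    """Compute windowed FFTs of frames that overlap by the given fraction."""
    reused = int(fft_size * overlap)             # about 819 samples re-used from the previous frame
    hop = fft_size - reused                      # each new frame advances by 205 samples
    window = np.hanning(fft_size)
    frames = []
    for start in range(0, len(samples) - fft_size + 1, hop):
        frames.append(np.abs(np.fft.rfft(samples[start:start + fft_size] * window)))
    return np.array(frames)

spectra = overlapped_spectra(np.random.randn(8192))
print(spectra.shape)    # (35, 513): 35 overlapping spectra, 513 frequency bins each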
Minimum signal detection time
This is related to the sampling rate of the analyser and the FFT rate. It is also important for the realtime spectrum analyzer to give good level accuracy.
Example: for an analyser whose realtime bandwidth (the maximum RF span that can be processed in realtime) requires a complex sampling rate of roughly 50 Msample/s, a 1024-point FFT covers approximately 20 μs of signal. If the spectrum analyzer produces an FFT calculation every 4 μs, a full spectrum is therefore produced approximately every 20 μs. This also gives us our overlap rate of 80%: (20 μs − 4 μs) / 20 μs = 80%.
Persistence
Realtime spectrum analyzers are able to produce much more information for users to examine the frequency spectrum in more detail. A normal swept spectrum analyzer would produce max peak and min peak displays, for example, but a realtime spectrum analyzer is able to plot all calculated FFTs over a given period of time, with added colour-coding which represents how often a signal appears. For example, this image shows the difference between how a spectrum is displayed in a normal swept spectrum view and using a "Persistence" view on a realtime spectrum analyzer.
Hidden signals
Realtime spectrum analyzers are able to see signals hidden behind other signals. This is possible because no information is missed and the display to the user is the output of FFT calculations. An example of this can be seen on the right.
Typical functionality
Center frequency and span
In a typical spectrum analyzer there are options to set the start, stop, and center frequency. The frequency halfway between the stop and start frequencies on a spectrum analyzer display is known as the center frequency. This is the frequency that is in the middle of the display's frequency axis. Span specifies the range between the start and stop frequencies. These two parameters allow for adjustment of the display within the frequency range of the instrument to enhance visibility of the spectrum measured.
Resolution bandwidth
As discussed in the operation section, the resolution bandwidth filter or RBW filter is the bandpass filter in the IF path. It's the bandwidth of the RF chain before the detector (power measurement device). It determines the RF noise floor and how close two signals can be and still be resolved by the analyzer into two separate peaks. Adjusting the bandwidth of this filter allows for the discrimination of signals with closely spaced frequency components, while also changing the measured noise floor. Decreasing the bandwidth of an RBW filter decreases the measured noise floor and vice versa. This is due to higher RBW filters passing more frequency components through to the envelope detector than lower bandwidth RBW filters; therefore, a higher RBW causes a higher measured noise floor.
Video bandwidth
The video bandwidth filter or VBW filter is the low-pass filter directly after the envelope detector: it sets the bandwidth of the signal chain after the detector. Averaging or peak detection then refers to how the digital storage portion of the device records samples: it takes several samples per time step and stores only one sample, either the average of the samples or the highest one. The video bandwidth determines the capability to discriminate between two different power levels, because a narrower VBW removes noise from the detector output; this filter is used to "smooth" the display by removing noise from the envelope. Similar to the RBW, the VBW affects the sweep time of the display if the VBW is less than the RBW. If VBW is less than RBW, this relation for sweep time is useful:

$$ t_\text{sweep} = \frac{k\,(f_2 - f_1)}{\mathit{RBW} \times \mathit{VBW}} $$

Here $t_\text{sweep}$ is the sweep time, $k$ is a dimensionless proportionality constant, $f_2 - f_1$ is the frequency range of the sweep, RBW is the resolution bandwidth, and VBW is the video bandwidth.
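A minimal numerical sketch of this relation follows; the proportionality constant k is instrument-dependent, and the value used here, along with the example span and bandwidths, is only an assumption for illustration.

```python
def sweep_time_s(f_start_hz, f_stop_hz, rbw_hz, vbw_hz, k=2.5):
    """Approximate sweep time when VBW < RBW: t = k * (f2 - f1) / (RBW * VBW)."""
    return k * (f_stop_hz - f_start_hz) / (rbw_hz * vbw_hz)

# Example: a 10 MHz span swept with a 10 kHz RBW and a 1 kHz VBW.
print(sweep_time_s(100e6, 110e6, 10e3, 1e3))   # -> 2.5 seconds
```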
Detector
With the advent of digitally based displays, some modern spectrum analyzers use analog-to-digital converters to sample spectrum amplitude after the VBW filter. Since displays have a discrete number of points, the measured frequency span is also digitised. Detectors are used in an attempt to map the correct signal power to the appropriate frequency point on the display. There are in general three types of detectors: sample, peak, and average (a small illustrative sketch of all three appears after the list below).
Sample detection – sample detection simply uses the midpoint of a given interval as the display point value. While this method does represent random noise well, it does not always capture all sinusoidal signals.
Peak detection – peak detection uses the maximum measured point within a given interval as the display point value. This ensures that the largest sinusoid in the interval is measured; however, smaller sinusoids within the interval may not be measured. Also, peak detection does not give a good representation of random noise.
Average detection – average detection uses all of the data points within the interval to determine the display point value. This is done by power (rms) averaging, voltage averaging, or log-power averaging.
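The sketch below illustrates the three detector behaviours on the block of samples that falls into a single display point. The function and its modes are an illustrative assumption, not the implementation of any particular analyzer.

```python
import numpy as np

def detect(bucket, mode="peak"):
    """Reduce the samples behind one display point to a single value.

    mode 'sample'  -> the midpoint sample of the interval
    mode 'peak'    -> the maximum sample (never misses the largest sinusoid)
    mode 'average' -> the rms (power) average of all samples
    """
    bucket = np.asarray(bucket, dtype=float)
    if mode == "sample":
        return bucket[len(bucket) // 2]
    if mode == "peak":
        return bucket.max()
    if mode == "average":
        return float(np.sqrt(np.mean(bucket ** 2)))
    raise ValueError("unknown detector mode")

# Example: the same bucket gives a different display value per detector.
bucket = [0.1, 0.9, 0.2, 0.4]
print(detect(bucket, "sample"), detect(bucket, "peak"), detect(bucket, "average"))
```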
Displayed average noise level
The displayed average noise level (DANL) is, as the name suggests, the average noise level displayed on the analyzer. It can be specified either at a specific resolution bandwidth (e.g. −120 dBm at 1 kHz RBW) or normalized to 1 Hz (usually in dBm/Hz), e.g. −150 dBm/Hz. This is also called the sensitivity of the spectrum analyzer. If a signal whose level equals the average noise level is applied to the input, the display will show a level approximately 3 dB above the noise floor. To increase the sensitivity of the spectrum analyzer, a preamplifier with a lower noise figure may be connected at the input of the spectrum analyzer.
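Converting between a DANL quoted at a specific RBW and the 1 Hz normalized figure is a simple bandwidth correction. The helper below is an illustrative sketch (the function name is an assumption) that reproduces the example figures above.

```python
import math

def danl_per_hz(danl_dbm, rbw_hz):
    """Normalize a displayed average noise level to a 1 Hz bandwidth."""
    return danl_dbm - 10.0 * math.log10(rbw_hz)

# Example matching the figures above: -120 dBm in a 1 kHz RBW -> -150 dBm/Hz.
print(danl_per_hz(-120.0, 1e3))   # -> -150.0
```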
Radio-frequency uses
Spectrum analyzers are widely used to measure the frequency response, noise, and distortion characteristics of all kinds of radio-frequency (RF) circuitry, by comparing the input and output spectra. For example, in RF mixers, a spectrum analyzer is used to find the levels of third-order intermodulation products and conversion loss. In RF oscillators, a spectrum analyzer is used to find the levels of different harmonics.
In telecommunications, spectrum analyzers are used to determine occupied bandwidth and track interference sources. For example, cell planners use this equipment to determine interference sources in the GSM frequency bands and UMTS frequency bands.
In EMC testing, a spectrum analyzer is used for basic precompliance testing; however, it cannot be used for full testing and certification. Instead, an EMI receiver is used.
A spectrum analyzer is used to determine whether a wireless transmitter is working according to defined standards for purity of emissions. Output signals at frequencies other than the intended communications frequency appear as vertical lines (pips) on the display. A spectrum analyzer is also used to determine, by direct observation, the bandwidth of a digital or analog signal.
A spectrum analyzer interface is a device that connects to a wireless receiver or a personal computer to allow visual detection and analysis of electromagnetic signals over a defined band of frequencies. This is called panoramic reception and it is used to determine the frequencies of sources of interference to wireless networking equipment, such as Wi-Fi and wireless routers.
Spectrum analyzers can also be used to assess RF shielding. RF shielding is of particular importance for the siting of a magnetic resonance imaging machine since stray RF fields would result in artifacts in an MR image.
Audio-frequency uses
Spectrum analysis can be used at audio frequencies to analyse the harmonics of an audio signal. A typical application is to measure the distortion of a nominally sinewave signal; a very-low-distortion sinewave is used as the input to equipment under test, and a spectrum analyser can examine the output, which will have added distortion products, and determine the percentage distortion at each harmonic of the fundamental. Such analysers were at one time described as "wave analysers". Analysis can be carried out by a general-purpose digital computer with a sound card selected for suitable performance and appropriate software. Instead of using a low-distortion sinewave, the input can be subtracted from the output, attenuated and phase-corrected, to give only the added distortion and noise, which can be analysed.
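As a rough sketch of the harmonic-by-harmonic measurement described above, using a plain FFT in place of dedicated instrument hardware, the percentage of each distortion product relative to the fundamental could be estimated as follows. All names and parameters are illustrative assumptions.

```python
import numpy as np

def harmonic_percentages(signal, fs, f0, n_harmonics=5):
    """Estimate each harmonic's level as a percentage of the fundamental.

    signal : output of the equipment under test, driven by a low-distortion
             sine wave at frequency f0
    fs     : sampling rate in Hz
    """
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    levels = []
    for k in range(1, n_harmonics + 1):
        idx = int(np.argmin(np.abs(freqs - k * f0)))  # nearest FFT bin to k*f0
        levels.append(spectrum[idx])
    fundamental = levels[0]
    return [100.0 * level / fundamental for level in levels]

# Example: a 1 kHz test tone with a little 2nd and 3rd harmonic distortion.
fs, f0 = 48000, 1000.0
t = np.arange(fs) / fs
out = (np.sin(2 * np.pi * f0 * t)
       + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)
       + 0.005 * np.sin(2 * np.pi * 3 * f0 * t))
print(harmonic_percentages(out, fs, f0))   # roughly [100, 1, 0.5, ~0, ~0]
```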
An alternative technique, total harmonic distortion measurement, cancels out the fundamental with a notch filter and measures the total remaining signal, which is total harmonic distortion plus noise; it does not give the harmonic-by-harmonic detail of an analyser.
Spectrum analyzers are also used by audio engineers to assess their work. In these applications, the spectrum analyzer will show volume levels of frequency bands across the typical range of human hearing, rather than displaying a wave. In live sound applications, engineers can use them to pinpoint feedback.
Optical spectrum analyzer
An optical spectrum analyzer uses reflective or refractive techniques to separate out the wavelengths of light. An electro-optical detector is used to measure the intensity of the light, which is then normally displayed on a screen in a similar manner to a radio- or audio-frequency spectrum analyzer.
The input to an optical spectrum analyzer may be simply via an aperture in the instrument's case, an optical fiber or an optical connector to which a fiber-optic cable can be attached.
Different techniques exist for separating out the wavelengths. One method is to use a monochromator, for example a Czerny–Turner design, with an optical detector placed at the output slit. As the grating in the monochromator moves, bands of different frequencies (colors) are 'seen' by the detector, and the resulting signal can then be plotted on a display. More precise measurements (down to MHz in the optical spectrum) can be made with a scanning Fabry–Pérot interferometer along with analog or digital control electronics, which sweep the resonant frequency of an optically resonant cavity using a voltage ramp applied to a piezoelectric actuator that varies the distance between two highly reflective mirrors. A sensitive photodiode embedded in the cavity provides an intensity signal, which is plotted against the ramp voltage to produce a visual representation of the optical power spectrum.
The frequency response of optical spectrum analyzers tends to be relatively limited, e.g. (near-infrared), depending on the intended purpose, although (somewhat) wider-bandwidth general purpose instruments are available.
Vibration spectrum analyzer
A vibration spectrum analyzer allows vibration amplitudes to be analysed at various component frequencies. In this way, vibration occurring at specific frequencies can be identified and tracked. Since particular machinery problems generate vibration at specific frequencies, machinery faults can be detected or diagnosed. Vibration spectrum analyzers use the signal from different types of sensor, such as accelerometers, velocity transducers and proximity sensors. The use of a vibration spectrum analyzer in machine condition monitoring allows machine faults such as rotor imbalance, shaft misalignment, mechanical looseness and bearing defects, among others, to be detected and identified. Vibration analysis can also be used in structures to identify structural resonances or to perform modal analysis.
See also
Electrical measurements
Electromagnetic spectrum
Measuring receiver
Radio-frequency sweep
Spectral leakage
Spectral music
Radio spectrum scope
Stationary-wave integrated Fourier-transform spectrometry
References
Footnotes
External links
Sri Welaratna, "", Sound and Vibration (January 1997, 30th anniversary issue). A historical review of hardware spectrum-analyzer devices.
Electronic test equipment
Laboratory equipment
Radio technology
Signal processing
Spectroscopy
Scattering
Acoustics
Spectrum (physical sciences) | Spectrum analyzer | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 4,312 | [
"Physical phenomena",
"Computer engineering",
"Radio technology",
"Measuring instruments",
"Spectroscopy",
"Instrumental analysis",
"Scattering",
"Particle physics",
"Nuclear physics",
"Information and communications technology",
"Telecommunications engineering",
"Molecular physics",
"Spectr... |
313,418 | https://en.wikipedia.org/wiki/Luminous%20intensity | In photometry, luminous intensity is a measure of the wavelength-weighted power emitted by a light source in a particular direction per unit solid angle, based on the luminosity function, a standardized model of the sensitivity of the human eye. The SI unit of luminous intensity is the candela (cd), an SI base unit.
Measurement
Photometry deals with the measurement of visible light as perceived by human eyes. The human eye can only see light in the visible spectrum and has different sensitivities to light of different wavelengths within the spectrum. When adapted for bright conditions (photopic vision), the eye is most sensitive to yellow-green light at 555 nm. Light with the same radiant intensity at other wavelengths has a lower luminous intensity. The curve which represents the response of the human eye to light is a defined standard function, established by the International Commission on Illumination (CIE, for Commission Internationale de l'Éclairage) and standardized in collaboration with the ISO.
Luminous intensity of artificial light sources is typically measured using a goniophotometer outfitted with a photometer or a spectroradiometer.
Relationship to other measures
Luminous intensity should not be confused with another photometric unit, luminous flux, which is the total perceived power emitted in all directions. Luminous intensity is the perceived power per unit solid angle. If a lamp has a 1 lumen bulb and the optics of the lamp are set up to focus the light evenly into a 1 steradian beam, then the beam would have a luminous intensity of 1 candela. If the optics were changed to concentrate the beam into 1/2 steradian then the source would have a luminous intensity of 2 candela. The resulting beam is narrower and brighter, though its luminous flux remains unchanged.
Luminous intensity is also not the same as the radiant intensity, the corresponding objective physical quantity used in the measurement science of radiometry.
Units
Like other SI base units, the candela has an operational definition—it is defined by the description of a physical process that will produce one candela of luminous intensity. By definition, if one constructs a light source that emits monochromatic green light with a frequency of 540 THz, and that has a radiant intensity of 1/683 watts per steradian in a given direction, that light source will emit one candela in the specified direction.
The frequency of light used in the definition corresponds to a wavelength in a vacuum of , which is near the peak of the eye's response to light. If the source emitted uniformly in all directions, the total radiant flux would be about , since there are 4π steradians in a sphere. A typical modern candle produces very roughly one candela while releasing heat at roughly .
Prior to the definition of the candela, a variety of units for luminous intensity were used in various countries. These were typically based on the brightness of the flame from a "standard candle" of defined composition, or the brightness of an incandescent filament of specific design. One of the best-known of these standards was the English standard: candlepower. One candlepower was the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour. Germany, Austria, and Scandinavia used the Hefnerkerze, a unit based on the output of a Hefner lamp. In 1881, Jules Violle proposed the Violle as a unit of luminous intensity, and it was notable as the first unit of light intensity that did not depend on the properties of a particular lamp. All of these units were superseded by the definition of the candela.
Usage
The luminous intensity for monochromatic light of a particular wavelength is given by

$$ I_\mathrm{v} = 683.002\ \tfrac{\mathrm{lm}}{\mathrm{W}} \cdot \bar{y}(\lambda) \cdot I_\mathrm{e}, $$

where
$I_\mathrm{v}$ is the luminous intensity in candelas (cd),
$I_\mathrm{e}$ is the radiant intensity in watts per steradian (W/sr),
$\bar{y}(\lambda)$ is the standard luminosity function.
If more than one wavelength is present (as is usually the case), one must sum or integrate over the spectrum of wavelengths present to get the luminous intensity:

$$ I_\mathrm{v} = 683.002\ \tfrac{\mathrm{lm}}{\mathrm{W}} \int_0^{\infty} \bar{y}(\lambda)\,\frac{\mathrm{d}I_\mathrm{e}}{\mathrm{d}\lambda}\,\mathrm{d}\lambda. $$
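As a minimal numerical sketch of these formulas, luminous intensity can be computed from spectral radiant intensity as below. The luminosity-function values used here are rough illustrative numbers, not the official CIE tabulation, and the function names are assumptions for demonstration.

```python
import numpy as np

# Rough, illustrative samples of the photopic luminosity function ybar(lambda);
# the official CIE table should be used for real calculations.
wavelengths_nm = np.array([500.0, 555.0, 600.0])
ybar           = np.array([0.32,  1.00,  0.63])

def luminous_intensity_cd(radiant_intensity_w_sr_nm):
    """Integrate spectral radiant intensity (W/(sr*nm)) against ybar(lambda)."""
    integrand = ybar * radiant_intensity_w_sr_nm
    steps = np.diff(wavelengths_nm)
    # trapezoidal integration over wavelength, scaled by ~683 lm/W
    return 683.0 * float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * steps))

# Example: a source radiating 0.002 W/(sr*nm) flat across these wavelengths.
print(luminous_intensity_cd(np.array([0.002, 0.002, 0.002])))  # -> roughly 100 cd

# Monochromatic sanity check at 555 nm: 1/683 W/sr corresponds to about 1 cd.
print(683.0 * 1.00 * (1.0 / 683.0))   # -> 1.0
```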
See also
Brightness
International System of Quantities
Radiance
References
Curve data
Scalar physical quantities
SI base quantities
Photometry
Electromagnetic quantities | Luminous intensity | [
"Physics",
"Mathematics"
] | 862 | [
"Scalar physical quantities",
"Electromagnetic quantities",
"Physical quantities",
"SI base quantities",
"Quantity"
] |
313,454 | https://en.wikipedia.org/wiki/Wah-wah%20pedal | A wah-wah pedal, or simply wah pedal, is a type of effects pedal designed for electric guitar that alters the timbre of the input signal to create a distinctive sound, mimicking the human voice saying the onomatopoeic name "wah-wah". The pedal sweeps a band-pass filter up and down in frequency to create a spectral glide. The wah-wah effect originated in the 1920s, with trumpet or trombone players finding they could produce an expressive crying tone by moving a mute in, and out of the instrument's bell. This was later simulated with electronic circuitry for the electric guitar when the wah-wah pedal was invented. It is controlled by movement of the player's foot on a rocking pedal connected to a potentiometer. Wah-wah effects may be used without moving the treadle as a fixed filter to alter an instrument’s timbre (known as a “cocked-wah”), or to create a "wacka-wacka" funk-styled rhythm for rhythm guitar playing.
An auto-wah pedal uses an envelope follower to control the filter instead of a potentiometer.
History
The first wah pedal was created by Bradley J. Plunkett at Warwick Electronics Inc./Thomas Organ Company in November 1966. This pedal is the original prototype made from a transistorized MRB (mid-range boost) potentiometer bread-boarded circuit and the housing of a Vox Continental Organ volume pedal. The concept, however, was not new. Country guitar virtuoso Chet Atkins had used a similar, self-designed device on his late 1950s recordings of "Hot Toddy" and "Slinkey". Jazz guitarist Peter Van Wood had a modified Hammond organ expression pedal; he recorded in 1955 a version of George Gershwin's "Summertime" with a "crying" tone, and other recordings including humorous "novelty" effects. A DeArmond Tone and Volume pedal was used in the early 1960s by Big Jim Sullivan, notably in some Krew Cats instrumental tracks, and in Dave Berry's song "The Crying Game".
The creation of the modern wah pedal was an accident which stemmed from the redesign of the Vox Super Beatle guitar amplifier in 1966. Warwick Electronics Inc. also owned Thomas Organ Company and had earlier entered into an agreement with Jennings Musical Instruments (JMI) of England for Thomas to distribute the Vox name and products in the United States. In addition to distributing the British-made Vox amplifiers, the Thomas Organ Company also designed and manufactured much of the Vox equipment sold in the US. The more highly regarded British Vox amplifiers were designed by Dick Denney and made by JMI, the parent company of Vox. Warwick assigned Thomas Organ Company to create a new product line of solid state Vox amplifiers called Vox Amplifonic Orchestra, which included the Super Beatle amplifier, named to capitalize on the Vox brand name's popularity in association with the Beatles, who used the JMI English Vox amplifiers such as the famous Vox AC30. The US-made Vox product line development was headed by musician and bandleader Bill Page. While creating the Vox Amplifonic Orchestra, the Thomas Organ Company decided to create an American-made equivalent of the British Vox amplifier but with transistorized (solid state) circuits, rather than vacuum tubes, which would be less expensive to manufacture. During the re-design of the USA Vox amplifier, Stan Cuttler, head engineer of Thomas Organ Company, assigned Brad Plunkett, a junior electronics engineer, to replace the expensive Jennings 3-position mid-range boost (MRB) circuit switch with a transistorized solid state MRB circuit.
Plunkett had lifted and bread-boarded a transistorized tone-circuit from the Thomas Organ (an electric solid state transistorized organ) to duplicate the Jennings 3-position circuit. After adjusting and testing the amplifier with an electronic oscillator and oscilloscope, Plunkett connected the output to the speaker and tested the circuit audibly. At that point, several engineers and technical consultants, including Bill Page and Del Casher, noticed the sound effect caused by the circuit. Page insisted on testing this bread-boarded circuit while he played his saxophone through an amplifier. John Glennon, an assistant junior electronics engineer with the Thomas Organ Company, was summoned to bring a volume control pedal which was used in the Vox Continental Organ so that the transistorized MRB potentiometer bread-boarded circuit could be installed in the pedal's housing. After the installation, Page began playing his saxophone through the pedal and asked Joe Banaron, CEO of Warwick Electronics Inc./Thomas Organ Company, to listen to the effect. At this point, the first electric guitar was plugged into the prototype wah pedal by guitarist Del Casher who suggested to Joe Banaron that this was a guitar effects pedal rather than a wind instrument effects pedal. Banaron, being a fan of the big band style of music, was interested in marketing the wah pedal for wind instruments as suggested by Page rather than for the electric guitar as suggested by Casher. After a remark by Casher to Banaron regarding the Harmon mute style of trumpet playing in the famous recording of "Sugar Blues" from the 1930s, Banaron decided to market the wah-wah pedal using Clyde McCoy's name for endorsement.
After the invention of the wah pedal, the prototype was modified by Casher and Plunkett to better accommodate the harmonic qualities of the electric guitar. However, since Vox had no intention of marketing the wah pedal for electric guitar players, the prototype wah-wah pedal was given to Del Casher for performances at Vox press conferences and film scores for Universal Pictures. The un-modified version of the Vox wah pedal was released to the public in February 1967 with an image of Clyde McCoy on the bottom of the pedal.
Warwick Electronics Inc. assigned Lester L. Kushner, an engineer with the Thomas Organ Company, and Brad Plunkett to write and submit the documentation for the wah-wah pedal patent. The patent application was submitted on 24 February 1967, which included technical diagrams of the pedal being connected to a four-stringed "guitar" (as noted from the "Description of the Preferred Embodiment"). Warwick Electronics Inc. was granted ("foot-controlled continuously variable preference circuit for musical instruments") on 22 September 1970.
Early versions of the Clyde McCoy featured an image of McCoy on the bottom panel, which soon gave way to only his signature. Thomas Organ then wanted the effect branded as their own for the American market, changing it to Cry Baby which was sold in parallel to the Italian Vox V846. Thomas Organ's failure to trademark the Cry Baby name soon led to the market being flooded with Cry Baby imitations from various parts of the world, including Italy, where all of the original Vox and Cry Babys were made. JEN, who had been responsible for the manufacture of Thomas Organ and Vox wah pedals, also made rebranded pedals for companies such as Fender and Gretsch and under their own JEN brand. When Thomas Organ moved production completely to Sepulveda, California and Chicago, Illinois these Italian models continued to be made and are among the more collectible wah pedals today.
Some of the most famous electric guitarists of the day were keen to adopt the wah-wah pedal soon after its release. Among the first recordings featuring wah-wah pedal were "Tales of Brave Ulysses" by Cream with Eric Clapton on guitar and "Burning of the Midnight Lamp" by the Jimi Hendrix Experience, both released in 1967. Hendrix also used wah wah on his famous song "Voodoo Child", in intro and in soloing. According to Del Casher, Hendrix learned about the pedal from Frank Zappa, another well-known early user. Clapton, in particular, used the device on many of the Cream songs included on their second and third albums, Disraeli Gears (1967) and Wheels of Fire (1968) respectively. Clapton would subsequently employ it again on "Wah-Wah", from his good friend George Harrison's solo album All Things Must Pass, upon the dissolution of The Beatles in 1970.
The wah-wah pedal increased in popularity in the following years, and was employed by guitarists such as Terry Kath of Chicago, Martin Barre of Jethro Tull, Jimmy Page of Led Zeppelin, and Tony Iommi of Black Sabbath. Kirk Hammett of Metallica would later use the pedal on many Metallica songs, most notably the guitar solo of Enter Sandman, earning him the nickname "Kirk Wahmett". David Gilmour of Pink Floyd used the pedal to create the "whale" effect during Echoes. He discovered this effect as a result of a roadie accidentally plugging his guitar into the output of the pedal and the input being plugged into his amp. The effect was first used during live performances of The Embryo during 1970 but was then switched into Echoes as it was being developed before being released on the Meddle album on 31 October 1971. Mick Ronson used a Cry Baby while recording The Rise and Fall of Ziggy Stardust and the Spiders from Mars. Michael Schenker also utilized the pedal in his work.
One of the most famous uses of this effect is heard on Isaac Hayes's "Theme from Shaft" (1971), with Charles Pitts (credited as Charles 'Skip' Pitts) playing the guitar.
In addition to rock music, many R&B artists have also used the wah-wah effect, including Lalo Schifrin on "Enter the Dragon" (1973), Johnny Pate on "Shaft in Africa" (1973) and James Brown on "Funky President" (1974). Funk band Kool & the Gang, B. T. Express, and Jimmy Castor Bunch used the wah-wah pedal also. Melvin Ragin, better known by the nickname Wah Wah Watson, was a member of the Motown Records studio band, The Funk Brothers, where he recorded with artists such as The Temptations on "Papa Was a Rollin' Stone", Marvin Gaye on "Let's Get It On", The Four Tops, Gladys Knight & the Pips, The Supremes, and The Undisputed Truth on "Smiling Faces Sometimes".
In the late 1980s, the wah-wah pedal was revived in the British music industry by John Squire of The Stone Roses, who bought a wah-wah pedal to differentiate his sound from other contemporary acts of the time. Afterwards, the wah-wah pedal would also be used by bands such as the Happy Mondays and the Charlatans, and became one of the defining sounds of British guitar music in the late '80s and early '90s.
Uses
The wah-wah pedal is mainly used by rocking the pedal up and down. This motion sweeps the peak response of a frequency filter up and down in frequency, creating a spectral glide.
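A very rough digital sketch of this behaviour is shown below: a resonant band-pass filter whose centre frequency tracks the pedal position. The frequency range, Q, and all names are illustrative assumptions, not measurements of any real pedal.

```python
import numpy as np

def wah(audio, pedal, fs=44100.0, f_min=400.0, f_max=2000.0, q=3.0):
    """Sweep a resonant band-pass over the signal as the 'pedal' moves.

    pedal : array of pedal positions per sample, 0.0 (heel down) to 1.0 (toe down)
    """
    low = band = 0.0
    out = np.zeros(len(audio))
    for n, x in enumerate(audio):
        fc = f_min + (f_max - f_min) * pedal[n]   # swept centre frequency
        f = 2.0 * np.sin(np.pi * fc / fs)         # state-variable filter coefficient
        low += f * band
        high = x - low - band / q                 # 1/q sets the resonance (peakiness)
        band += f * high
        out[n] = band                             # band-pass output = "wah" sound
    return out

# Example: rock the pedal twice per second over one second of noise.
fs = 44100
t = np.arange(fs)
pedal = 0.5 * (1.0 + np.sin(2.0 * np.pi * 2.0 * t / fs))
wahed = wah(np.random.randn(fs) * 0.1, pedal, fs)
```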
A different function of the pedal is to use it in a fixed position, which changes how an instrument sounds by selecting a certain frequency range. A guitarist using the wah in this way selects a position on the pedal and leaves the pedal there. Depending on the position of the pedal, this will boost or cut a specific frequency. This can be used for emphasizing the "sweet spot" in the tonal spectrum of a particular instrument. One electric guitar player to use the pedal in this way was Jimi Hendrix, who revolutionized its application by combining a Fender Stratocaster with stacked Marshall Amplifiers (in both static and modulated mode) for lead and rhythm guitar applications unheard of before then.
Another famous style of wah-wah playing is utilizing it for a percussive "wacka-wacka" effect during rhythm guitar parts. This is done by muting strings, holding down a chord and moving the pedal at the same time. This was first heard on the song "Little Miss Lover" (1967) on "Axis: Bold as Love," by the Jimi Hendrix Experience.
The "wah-wah" and "wacka-wacka" effects are often associated with the bands on 1970s TV variety shows, like those of Sonny and Cher, Flip Wilson, or Donny and Marie Osmond; or with the soundtracks of pornographic films, the sound referenced in TV commercials for Axe body spray as "bow chicka wow wow."
Other instruments
Bass - A wah bass solo appears on "Mommy, What's a Funkadelic?", and all through "What Is Soul?" on Funkadelic's self-titled debut album (1970). Geezer Butler made extensive use of a wah pedal in the bass solo "Bassically" on Black Sabbath's debut self-titled album (1970). Michael Henderson used a wah wah pedal on Miles Davis album On the Corner (1972). Chris Squire of Yes used a wah-wah pedal on his solo piece "The Fish" on the album Fragile.
Trumpet - jazz/crossover records feature wind and brass instruments with the effect – Miles Davis's trumpet with a wah pedal was a well-known example.
Sax - Several of Frank Zappa’s sax players such as Bunk Gardner, Ian Underwood, and Napoleon Murphy Brock played saxophones amplified through a wah-wah pedal on some of Zappa's albums like Uncle Meat, Chunga's Revenge, and The Dub Room Special. David Sanborn can be heard playing an alto saxophone modified by a wah-wah pedal on the David Bowie album Young Americans (1975).
See also
Talk box
EQ
Auto-wah
References
Further information
Cry Baby: The Pedal that Rocks the World (documentary, 2011)
Audio engineering
Effects units
Tone, EQ and filter | Wah-wah pedal | [
"Engineering"
] | 2,804 | [
"Electrical engineering",
"Audio engineering"
] |
313,530 | https://en.wikipedia.org/wiki/Sperm%20whale | The sperm whale or cachalot (Physeter macrocephalus) is the largest of the toothed whales and the largest toothed predator. It is the only living member of the genus Physeter and one of three extant species in the sperm whale family, along with the pygmy sperm whale and dwarf sperm whale of the genus Kogia.
The sperm whale is a pelagic mammal with a worldwide range, and will migrate seasonally for feeding and breeding. Females and young males live together in groups, while mature males (bulls) live solitary lives outside of the mating season. The females cooperate to protect and nurse their young. Females give birth every four to twenty years, and care for the calves for more than a decade. A mature, healthy sperm whale has no natural predators, although calves and weakened adults are sometimes killed by pods of killer whales (orcas).
Mature males average in length, with the head representing up to one-third of the animal's length. Plunging to , it is the third deepest diving mammal, exceeded only by the southern elephant seal and Cuvier's beaked whale. The sperm whale uses echolocation and vocalization with source level as loud as 236 decibels (re 1 μPa m) underwater, the loudest of any animal. It has the largest brain on Earth, more than five times heavier than a human's. Sperm whales can live 70 years or more.
Sperm whales' heads are filled with a waxy substance called "spermaceti" (sperm oil), from which the whale derives its name. Spermaceti was a prime target of the whaling industry and was sought after for use in oil lamps, lubricants, and candles. Ambergris, a solid waxy waste product sometimes present in its digestive system, is still highly valued as a fixative in perfumes, among other uses. Beachcombers look out for ambergris as flotsam. Sperm whaling was a major industry in the 19th century, depicted in the novel Moby-Dick. The species is protected by the International Whaling Commission moratorium, and is listed as vulnerable by the International Union for Conservation of Nature.
Taxonomy and naming
Etymology
The name "sperm whale" is a clipping of "spermaceti whale". Spermaceti, originally mistakenly identified as the whales' semen, is the semi-liquid, waxy substance found within the whale's head.
(See "Spermaceti organ and melon" below.)
The sperm whale is also known as the "cachalot", which is thought to derive from the archaic French for 'tooth' or 'big teeth', as preserved for example in the word in the Gascon dialect (a word of either Romance or Basque origin).
The etymological dictionary of Corominas says the origin is uncertain, but it suggests that it comes from the Vulgar Latin 'sword hilts'. The word cachalot came to English via French from Spanish or Portuguese , perhaps from Galician/Portuguese 'big head'.
The term is retained in the Russian word for the animal, (), as well as in many other languages.
The scientific genus name Physeter comes from the Greek (), meaning 'blowpipe, blowhole (of a whale)', or – as a pars pro toto – 'whale'.
The specific name macrocephalus is Latinized from the Greek ( 'big-headed'), from () + ().
Its synonymous specific name catodon means 'down-tooth', from the Greek elements ('below') and ('tooth'); so named because it has visible teeth only in its lower jaw. (See "Jaws and teeth" below.)
Another synonym australasianus ('Australasian') was applied to sperm whales in the Southern Hemisphere.
Taxonomy
The sperm whale belongs to the order Cetartiodactyla, the order containing all cetaceans and even-toed ungulates. It is a member of the unranked clade Cetacea, with all the whales, dolphins, and porpoises, and further classified into Odontoceti, containing all the toothed whales and dolphins. It is the sole extant species of its genus, Physeter, in the family Physeteridae. Two species of the related extant genus Kogia, the pygmy sperm whale Kogia breviceps and the dwarf sperm whale K. sima, are placed either in this family or in the family Kogiidae. In some taxonomic schemes the families Kogiidae and Physeteridae are combined as the superfamily Physeteroidea (see the separate entry on the sperm whale family).
Swedish ichthyologist Peter Artedi described it as Physeter catodon in his 1738 work Genera piscium, from the report of a beached specimen in Orkney in 1693 and two beached in the Netherlands in 1598 and 1601. The 1598 specimen was near Berkhey.
The sperm whale is one of the species originally described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae. He recognised four species in the genus Physeter. Experts soon realised that just one such species exists, although there has been debate about whether this should be named P. catodon or P. macrocephalus, two of the names used by Linnaeus. Both names are still used, although most recent authors now accept macrocephalus as the valid name, limiting catodon status to a lesser synonym. Until 1974, the species was generally known as P. catodon. In that year, however, Dutch zoologists Antonius M. Husson and Lipke Holthuis proposed that the correct name should be P. macrocephalus, the second name in the genus Physeter published by Linnaeus concurrently with P. catodon.
This proposition was based on the grounds that the names were synonyms published simultaneously, and, therefore, the ICZN Principle of the First Reviser should apply. In this instance, it led to the choice of P. macrocephalus over P. catodon, a view re-stated in Holthuis, 1987. This has been adopted by most subsequent authors, although Schevill (1986 and 1987) argued that macrocephalus was published with an inaccurate description and that therefore only the species catodon was valid, rendering the principle of "First Reviser" inapplicable. The most recent version of ITIS has altered its usage from P. catodon to P. macrocephalus, following L. B. Holthuis and more recent (2008) discussions with relevant experts. Furthermore, The Taxonomy Committee of the Society for Marine Mammalogy, the largest international association of marine mammal scientists in the world, officially uses Physeter macrocephalus when publishing their definitive list of marine mammal species.
Biology
External appearance
The sperm whale is the largest toothed whale and is among the most sexually dimorphic of all cetaceans. Both sexes are about the same size at birth, but mature males are typically 30% to 50% longer and three times as massive as females.
Newborn sperm whales are usually between long. Female sperm whales are sexually mature at in length, whilst males are sexually mature at . Female sperm whales are physically mature at about in length and generally do not achieve lengths greater than . The largest female sperm whale measured up to long, and an individual of such size would have weighed about . Male sperm whales are physically mature at about in length, and larger males can generally achieve . An long male sperm whale is estimated to have weighed . By contrast, the second largest toothed whale (Baird's beaked whale) measures up to and weighs up to .
There are occasional reports of individual sperm whales achieving even greater lengths, with some historical claims reaching or exceeding . One example is the whale that sank the Essex (one of the incidents behind Moby-Dick), which was claimed to be . However, there is disagreement as to the accuracy of some of these claims, which are often considered exaggerations or as being measured along the curves of the body.
An individual measuring was reported from a Soviet whaling fleet near the Kuril Islands in 1950 and is cited by some authors as the largest accurately measured. It has been estimated to weigh . In a review of size variation in marine megafauna, McClain and colleagues noted that the International Whaling Commission's data contained eight individuals larger than . The authors supported a male from the South Pacific in 1933 as the largest recorded. However, sizes like these are rare, with 95% of recorded sperm whales below 15.85 metres (52.0 ft).
In 1853, one sperm whale was reported at in length, with a head measuring . Large lower jawbones are held in the British Natural History Museum and the Oxford University Museum of Natural History, measuring and , respectively.
The average size of sperm whales has decreased over the years, probably due to pressure from whaling. Another view holds that exploitation by overwhaling had virtually no effect on the size of the bull sperm whales, and that their size may actually have increased in recent times as a result of density-dependent effects. Old males taken at Solander Islands were recorded to be extremely large and unusually rich in blubber.
The sperm whale's unique body is unlikely to be confused with any other species. The sperm whale's distinctive shape comes from its very large, block-shaped head, which can be one-quarter to one-third of the animal's length. The S-shaped blowhole is located very close to the front of the head and shifted to the whale's left. This gives rise to a distinctive bushy, forward-angled spray.
The sperm whale's flukes (tail lobes) are triangular and very thick. Proportionally, they are larger than that of any other cetacean, and are very flexible. The whale lifts its flukes high out of the water as it begins a feeding dive. It has a series of ridges on the back's caudal third instead of a dorsal fin. The largest ridge was called the 'hump' by whalers, and can be mistaken for a dorsal fin because of its shape and size.
In contrast to the smooth skin of most large whales, its back skin is usually wrinkly and has been likened to a prune by whale-watching enthusiasts. Albinos have been reported.
Skeleton
The ribs are bound to the spine by flexible cartilage, which allows the ribcage to collapse rather than snap under high pressure. While sperm whales are well adapted to diving, repeated dives to great depths have long-term effects. Bones show the same avascular necrosis that signals decompression sickness in humans. Older skeletons showed the most extensive damage, whereas calves showed no damage. This damage may indicate that sperm whales are susceptible to decompression sickness, and sudden surfacing could be lethal to them.
Like that of all cetaceans, the spine of the sperm whale has reduced zygapophysial joints, of which the remnants are modified and are positioned higher on the vertebral dorsal spinous process, hugging it laterally, to prevent extensive lateral bending and facilitate more dorso-ventral bending. These evolutionary modifications make the spine more flexible but weaker than the spines of terrestrial vertebrates.
Like many cetaceans, the sperm whale has a vestigial pelvis that is not connected to the spine.
Like that of other toothed whales, the skull of the sperm whale is asymmetrical so as to aid echolocation. Sound waves that strike the whale from different directions will not be channeled in the same way. Within the basin of the cranium, the openings of the bony narial tubes (from which the nasal passages spring) are skewed towards the left side of the skull.
Jaws and teeth
The sperm whale's lower jaw is very narrow and underslung. The sperm whale has 18 to 26 teeth on each side of its lower jaw which fit into sockets in the upper jaw. The teeth are cone-shaped and weigh up to each. The teeth are functional, but do not appear to be necessary for capturing or eating squid, as well-fed animals have been found without teeth or even with deformed jaws. One hypothesis is that the teeth are used in aggression between males. Mature males often show scars which seem to be caused by the teeth. Rudimentary teeth are also present in the upper jaw, but these rarely emerge into the mouth. Analyzing the teeth is the preferred method for determining a whale's age. Like the age-rings in a tree, the teeth build distinct layers of cementum and dentine as they grow.
Brain
The sperm whale brain is the largest known of any modern or extinct animal, weighing on average about (with the smallest known weighing and the largest known weighing ), more than five times heavier than a human brain, and has a volume of about 8,000 cm3. Although larger brains generally correlate with higher intelligence, it is not the only factor. Elephants and dolphins also have larger brains than humans. The sperm whale has a lower encephalization quotient than many other whale and dolphin species, lower than that of non-human anthropoid apes, and much lower than that of humans.
The sperm whale's cerebrum is the largest in all mammalia, both in absolute and relative terms. The olfactory system is reduced, suggesting that the sperm whale has a poor sense of taste and smell. By contrast, the auditory system is enlarged. The pyramidal tract is poorly developed, reflecting the reduction of its limbs.
Biological systems
The sperm whale respiratory system has adapted to cope with drastic pressure changes when diving. The flexible ribcage allows lung collapse, reducing nitrogen intake, and metabolism can decrease to conserve oxygen. Between dives, the sperm whale surfaces to breathe for about eight minutes before diving again. Odontoceti (toothed whales) breathe air at the surface through a single, S-shaped blowhole, which is extremely skewed to the left. Sperm whales spout (breathe) 3–5 times per minute at rest, increasing to 6–7 times per minute after a dive. The blow is a noisy, single stream that rises up to or more above the surface and points forward and left at a 45° angle. On average, females and juveniles blow every 12.5 seconds before dives, while large males blow every 17.5 seconds before dives. A sperm whale killed south of Durban, South Africa, after a 1-hour, 50-minute dive was found with two dogfish (Scymnodon sp.), usually found at the sea floor, in its belly.
The sperm whale has the longest intestinal system in the world, exceeding 300 m in larger specimens. The sperm whale has a four-chambered stomach that is similar to ruminants. The first secretes no gastric juices and has very thick muscular walls to crush the food (since whales cannot chew) and resist the claw and sucker attacks of swallowed squid. The second chamber is larger and is where digestion takes place. Undigested squid beaks accumulate in the second chamber – as many as 18,000 have been found in some dissected specimens. Most squid beaks are vomited by the whale, but some occasionally make it to the hindgut. Such beaks precipitate the formation of ambergris.
In 1959, the heart of a 22 metric-ton (24 short-ton) male taken by whalers was measured to be , about 0.5% of its total mass. The circulatory system has a number of specific adaptations for the aquatic environment. The diameter of the aortic arch increases as it leaves the heart. This bulbous expansion acts as a windkessel, ensuring a steady blood flow as the heart rate slows during diving. The arteries that leave the aortic arch are positioned symmetrically. There is no costocervical artery. There is no direct connection between the internal carotid artery and the vessels of the brain. Their circulatory system has adapted to dive at great depths, as much as for up to 120 minutes. More typical dives are around and 35 minutes in duration. Myoglobin, which stores oxygen in muscle tissue, is much more abundant than in terrestrial animals. The blood has a high density of red blood cells, which contain oxygen-carrying haemoglobin. The oxygenated blood can be directed towards only the brain and other essential organs when oxygen levels deplete. The spermaceti organ may also play a role by adjusting buoyancy (see below). The arterial retia mirabilia are extraordinarily well-developed. The complex arterial retia mirabilia of the sperm whale are more extensive and larger than those of any other cetacean.
Senses
Spermaceti organ and melon
Atop the whale's skull is positioned a large complex of organs filled with a liquid mixture of fats and waxes called spermaceti. The purpose of this complex is to generate powerful and focused clicking sounds, the existence of which was proven by Valentine Worthington and William Schevill when a recording was produced on a research vessel in May 1959. The sperm whale uses these sounds for echolocation and communication.
The spermaceti organ is like a large barrel of spermaceti. Its surrounding wall, known as the case, is extremely tough and fibrous. The case can hold within it up to 1,900 litres of spermaceti. It is proportionately larger in males. This oil is a mixture of triglycerides and wax esters. It has been suggested that it is homologous to the dorsal bursa organ found in dolphins. The proportion of wax esters in the spermaceti organ increases with the age of the whale: 38–51% in calves, 58–87% in adult females, and 71–94% in adult males. The spermaceti at the core of the organ has a higher wax content than the outer areas. The speed of sound in spermaceti is 2,684 m/s (at 40 kHz, 36 °C), making it nearly twice as fast as in the oil in a dolphin's melon.
Below the spermaceti organ lies the "junk" which consists of compartments of spermaceti separated by cartilage. It is analogous to the melon found in other toothed whales. The structure of the junk redistributes physical stress across the skull and may have evolved to protect the head during ramming.
Running through the head are two air passages. The left passage runs alongside the spermaceti organ and goes directly to the blowhole, whilst the right passage runs underneath the spermaceti organ and passes air through a pair of phonic lips and into the distal sac at the very front of the nose. The distal sac is connected to the blowhole and the terminus of the left passage. When the whale is submerged, it can close the blowhole, and air that passes through the phonic lips can circulate back to the lungs. The sperm whale, unlike other odontocetes, has only one pair of phonic lips, whereas all other toothed whales have two, and it is located at the front of the nose instead of behind the melon.
At the posterior end of this spermaceti complex is the frontal sac, which covers the concave surface of the cranium. The posterior wall of the frontal sac is covered with fluid-filled knobs, which are about 4–13 mm in diameter and separated by narrow grooves. The anterior wall is smooth. The knobbly surface reflects sound waves that come through the spermaceti organ from the phonic lips. The grooves between the knobs trap a film of air that is consistent whatever the orientation or depth of the whale, making it an excellent sound mirror.
The spermaceti organs may also help adjust the whale's buoyancy. It is hypothesized that before the whale dives, cold water enters the organ, and it is likely that the blood vessels constrict, reducing blood flow, and, hence, temperature. The wax therefore solidifies and reduces in volume. The increase in specific density generates a down force of about and allows the whale to dive with less effort. During the hunt, oxygen consumption, together with blood vessel dilation, produces heat and melts the spermaceti, increasing its buoyancy and enabling easy surfacing. However, more recent work has found many problems with this theory including the lack of anatomical structures for the actual heat exchange. Another issue is that if the spermaceti does indeed cool and solidify, it would affect the whale's echolocation ability just when it needs it to hunt in the depths.
Herman Melville's fictional story Moby-Dick suggests that the "case" containing the spermaceti serves as a battering ram for use in fights between males. A few famous instances include the well-documented sinking of the ships Essex and Ann Alexander by attackers estimated to weigh only one-fifth as much as the ships.
Eyes and vision
The sperm whale's eye does not differ greatly from those of other toothed whales except in size. It is the largest among the toothed whales, weighing about 170 g. It is overall ellipsoid in shape, compressed along the visual axis, measuring about 7×7×3 cm. The cornea is elliptical and the lens is spherical. The sclera is very hard and thick, roughly 1 cm anteriorly and 3 cm posteriorly. There are no ciliary muscles. The choroid is very thick and contains a fibrous tapetum lucidum. Like other toothed whales, the sperm whale can retract and protrude its eyes, thanks to a 2-cm-thick retractor muscle attached around the eye at the equator, but are unable to roll the eyes in their sockets.
According to Fristrup and Harbison (2002), the sperm whale's eyes afford good vision and sensitivity to light. They conjectured that sperm whales use vision to hunt squid, either by detecting silhouettes from below or by detecting bioluminescence. If sperm whales detect silhouettes, Fristrup and Harbison suggested that they hunt upside down, allowing them to use the forward parts of the ventral visual fields for binocular vision.
Sleeping
For some time researchers have been aware that pods of sperm whales may sleep for short periods, assuming a vertical position with their heads just below or at the surface, or head down. A 2008 study published in Current Biology recorded evidence that whales may sleep with both sides of the brain. It appears that some whales may fall into a deep sleep for about 7 percent of the time, most often between 6 p.m. and midnight.
Genetics
Sperm whales have 21 pairs of chromosomes (2n=42). The genome of live whales can be examined by recovering shed skin.
Vocalization complex
After Valentine Worthington and William E. Schevill confirmed the existence of sperm whale vocalization, further studies found that sperm whales are capable of emitting sounds at a source level of 230 decibels–making the sperm whale the loudest animal in the world.
Mechanism
When echolocating, the sperm whale emits a directionally focused beam of broadband clicks. Clicks are generated by forcing air through a pair of phonic lips (also known as "monkey lips" or "") at the front end of the nose, just below the blowhole. The sound then travels backwards along the length of the nose through the spermaceti organ. Most of the sound energy is then reflected off the frontal sac at the cranium and into the melon, whose lens-like structure focuses it. Some of the sound will reflect back into the spermaceti organ and back towards the front of the whale's nose, where it will be reflected through the spermaceti organ a third time. This back and forth reflection which happens on the scale of a few milliseconds creates a multi-pulse click structure.
This multi-pulse click structure allows researchers to measure the whale's spermaceti organ using only the sound of its clicks. Because the interval between pulses of a sperm whale's click is related to the length of the sound producing organ, an individual whale's click is unique to that individual. However, if the whale matures and the size of the spermaceti organ increases, the tone of the whale's click will also change. The lower jaw is the primary reception path for the echoes. A continuous fat-filled canal transmits received sounds to the inner ear.
The source of the air forced through the phonic lips is the right nasal passage. While the left nasal passage opens to the blow hole, the right nasal passage has evolved to supply air to the phonic lips. It is thought that the nostrils of the land-based ancestor of the sperm whale migrated through evolution to their current functions, the left nostril becoming the blowhole and the right nostril becoming the phonic lips.
Air that passes through the phonic lips passes into the distal sac, then back down through the left nasal passage. This recycling of air allows the whale to continuously generate clicks for as long as it is submerged.
Vocalization types
The sperm whale's vocalizations are all based on clicking, described in four types: the usual echolocation, creaks, codas, and slow clicks.
The usual echolocation click type is used in searching for prey. A creak is a rapid series of high-frequency clicks that sounds somewhat like a creaky door hinge. It is typically used when homing in on prey.
Slow clicks are heard only in the presence of males (it is not certain whether females occasionally make them). Males make a lot of slow clicks in breeding grounds (74% of the time), both near the surface and at depth, which suggests they are primarily mating signals. Outside breeding grounds, slow clicks are rarely heard, and usually near the surface.
Codas
The most distinctive vocalizations are codas, which are short rhythmic sequences of clicks, mostly numbering 3–12 clicks, in stereotyped patterns. They are classified using variations in the number of clicks, rhythm, and tempo.
Codas are the result of vocal learning within a stable social group, and are made in the context of the whales' social unit. "The foundation of sperm whale society is the matrilineally based social unit of ten or so females and their offspring. The members of the unit travel together, suckle each others' infants, and babysit them while mothers make long deep dives to feed." Over 70% of a sperm whale's time is spent independently foraging; codas "could help whales reunite and reaffirm their social ties in between long foraging dives."
While nonidentity codas are commonly used in multiple different clans, some codas express clan identity, and denote different patterns of travel, foraging, and socializing or avoidance among clans. In particular, whales will not group with whales of another clan even though they share the same geographical area. Statistically, as the clans' ranges become more overlapped, the distinction in clan identity coda usage becomes more pronounced. Distinctive codas identify seven clans described among the approximately 150,000 female sperm whales in the Pacific Ocean, and there are another four clans in the Atlantic. As "arbitrary traits that function as reliable indicators of cultural group membership," clan identity codas act as symbolic markers that modulate interactions between individuals.
Individual identity in sperm whale vocalizations is an ongoing scientific issue, however. A distinction needs to be made between cues and signals. Human acoustic tools can distinguish individual whales by analyzing micro-characteristics of their vocalizations, and the whales can probably do the same. This does not prove that the whales deliberately use some vocalizations to signal individual identity in the manner of the signature whistles that bottlenose dolphins use as individual labels.
Ecology
Distribution
Sperm whales are among the most cosmopolitan species. They prefer ice-free waters over deep. Although both sexes range through temperate and tropical oceans and seas, only adult males populate higher latitudes. Among several regions, such as along coastal waters of southern Australia, sperm whales have been considered to be locally extinct.
They are relatively abundant from the poles to the equator and are found in all the oceans. They inhabit the Mediterranean Sea, but not the Black Sea, while their presence in the Red Sea is uncertain. The shallow entrances to both the Black Sea and the Red Sea may account for their absence. The Black Sea's lower layers are also anoxic and contain high concentrations of sulphur compounds such as hydrogen sulphide. The first ever sighting off the coast of Pakistan was made in 2017. The first ever record off the west coast of the Korean Peninsula (Yellow Sea) was made in 2005, followed by one near Ganghwa Island in 2009.
Populations are denser close to continental shelves and canyons. Sperm whales are usually found in deep, off-shore waters, but may be seen closer to shore in areas where the continental shelf is small and drops quickly to depths of . Coastal areas with significant sperm whale populations include the Azores and Dominica. In east Asian waters, whales are also observed regularly in coastal waters in places such as the Commander and Kuril Islands, the Shiretoko Peninsula (one of the few locations where sperm whales can be observed from shore), off Kinkasan, the vicinity of Tokyo Bay and the Bōsō Peninsula to the Izu and the Izu Islands, the Volcano Islands, Yakushima and the Tokara Islands to the Ryukyu Islands, Taiwan, the Northern Mariana Islands, and so forth. Historical catch records suggest there could have been smaller aggregation grounds in the Sea of Japan as well. Along the Korean Peninsula, the first confirmed observation within the Sea of Japan since the last catches of five whales off Ulsan in 1911 was of eight animals off Guryongpo in 2004, while nine whales were observed on the East China Sea side of the peninsula in 1999.
Grown males are known to enter surprisingly shallow bays to rest (whales will be in a state of rest during these occasions). Unique, coastal groups have been reported from various areas around the globe, such as near Scotland's coastal waters, and the Shiretoko Peninsula, off Kaikōura, in Davao Gulf. Such coastal groups were more abundant in pre-whaling days.
Genetic analysis indicates that the world population of sperm whales originated in the Pacific Ocean from a population of about 10,000 animals around 100,000 years ago, when expanding ice caps blocked off their access to other seas. In particular, colonization of the Atlantic was revealed to have occurred multiple times during this expansion of their range.
Diet
Sperm whales usually dive between , and sometimes , in search of food. Such dives can last more than an hour. They feed on several species, notably the giant squid, but also the colossal squid, octopuses, and fish such as demersal rays and sharks, but their diet is mainly medium-sized squid. Sperm whales may also possibly prey upon swordfish on rare occasions. Some prey may be taken accidentally while eating other items. Most of what is known about deep-sea squid has been learned from specimens in captured sperm whale stomachs, although more recent studies analysed faeces.
One study, carried out around the Galápagos, found that squid from the genera Histioteuthis (62%), Ancistrocheirus (16%), and Octopoteuthis (7%) weighing between were the most commonly taken. Battles between sperm whales and giant squid or colossal squid have never been observed by humans; however, white scars are believed to be caused by the large squid. One study published in 2010 collected evidence that suggests that female sperm whales may collaborate when hunting Humboldt squid. Tagging studies have shown that sperm whales hunt upside down at the bottom of their deep dives. It is suggested that the whales can see the squid silhouetted above them against the dim surface light.
An older study, examining whales captured by the New Zealand whaling fleet in the Cook Strait region, found a 1.69:1 ratio of squid to fish by weight. Sperm whales sometimes take sablefish and toothfish from long lines. Long-line fishing operations in the Gulf of Alaska complain that sperm whales take advantage of their fishing operations to eat desirable species straight off the line, sparing the whales the need to hunt. However, the amount of fish taken is very little compared to what the sperm whale needs per day. Video footage has been captured of a large male sperm whale "bouncing" a long line, to gain the fish. Sperm whales are believed to prey on the megamouth shark, a rare and large deep-sea species discovered in the 1970s. In one case, three sperm whales were observed attacking or playing with a megamouth.
Sperm whales have also been noted to feed on bioluminescent pyrosomes such as Pyrosoma atlanticum. It is thought that the foraging strategy of sperm whales for bioluminescent squids may also explain the presence of these light-emitting pyrosomes in the diet of the sperm whale.
The sharp beak of a consumed squid lodged in the whale's intestine may lead to the production of ambergris, analogous to the production of pearls in oysters. The irritation of the intestines caused by squid beaks stimulates the secretion of this lubricant-like substance. Sperm whales are prodigious feeders and eat around 3% of their body weight per day. The total annual consumption of prey by sperm whales worldwide is estimated to be about . In comparison, human consumption of seafood is estimated to be .
Sperm whales hunt through echolocation. Their clicks are among the most powerful sounds in the animal kingdom (see above). It has been hypothesised that the whale can stun prey with its clicks. Experimental studies attempting to duplicate this effect have been unable to replicate the supposed injuries, casting doubt on this idea. One study showed that the sound pressure levels experienced by squid are more than an order of magnitude below those required for debilitation, thereby precluding acoustic stunning as a means of facilitating prey capture.
Sperm whales, as well as other large cetaceans, help fertilise the surface of the ocean by consuming nutrients in the depths and transporting those nutrients to the oceans' surface when they defecate, an effect known as the whale pump. This fertilises phytoplankton and other plants on the surface of the ocean and contributes to ocean productivity and the drawdown of atmospheric carbon.
Life cycle
Sperm whales can live 70 years or more. They are a prime example of a species that has been K-selected, meaning their reproductive strategy is associated with stable environmental conditions and comprises a low birth rate, significant parental aid to offspring, slow maturation, and high longevity.
How they choose mates has not been definitively determined. Bulls will fight with each other over females, and males will mate with multiple females, making them polygynous, but they do not dominate the group as in a harem. Bulls do not provide paternal care to their offspring but rather play a fatherly role to younger bulls to show dominance.
Females become fertile at around 9 years of age. The oldest pregnant female ever recorded was 41 years old. Gestation requires 14 to 16 months, producing a single calf. Sexually mature females give birth once every 4 to 20 years (pregnancy rates were higher during the whaling era). Birth is a social event, as the mother and calf need others to protect them from predators. The other adults may jostle and bite the newborn in its first hours.
Lactation proceeds for 19 to 42 months, but calves may, in rare cases, suckle for up to 13 years. Like that of other whales, the sperm whale's milk has a higher fat content than that of terrestrial mammals: about 36%, compared to 4% in cow milk. This gives it a consistency similar to cottage cheese, which prevents it from dissolving in the water before the calf can drink it. It has an energy content of roughly 3,840 kcal/kg, compared to just 640 kcal/kg in cow milk. Calves may be allowed to suckle from females other than their mothers.
Males become sexually mature at 18 years. Upon reaching sexual maturity, males move to higher latitudes, where the water is colder and feeding is more productive. Females remain at lower latitudes. Males reach their full size at about age 50.
Social behaviour
Relations within the species
Like elephants, females and their young live in matriarchal groups called pods, while bulls live apart. Bulls sometimes form loose bachelor groups with other males of similar age and size. As they grow older, they typically live solitary lives, only returning to the pod to socialize or to breed. Bulls have beached themselves together, suggesting a degree of cooperation which is not yet fully understood. The whales rarely, if ever, leave their group.
A social unit is a group of sperm whales who live and travel together over a period of years. Individuals rarely, if ever, join or leave a social unit. There is a huge variance in the size of social units. They are most commonly between six and nine individuals in size but can have more than twenty. Unlike orcas, sperm whales within a social unit show no significant tendency to associate with their genetic relatives. Females and calves spend about three-quarters of their time foraging and a quarter of their time socializing. Socializing usually takes place in the afternoon.
When sperm whales socialize, they emit complex patterns of clicks called codas. They will spend much of the time rubbing against each other. Tracking of diving whales suggests that groups engage in herding of prey, similar to bait balls created by other species, though the research needs to be confirmed by tracking the prey.
Relations with other species
The most common natural predator of sperm whales is the orca (killer whale), but pilot whales and false killer whales sometimes harass them. Orcas prey on target groups of females with young, usually making an effort to extract and kill a calf. The females will protect their calves or an injured adult by encircling them. They may face inwards with their tails out (the 'marguerite formation', named after the flower). The heavy and powerful tail of an adult whale is potentially capable of delivering lethal blows. Alternatively, they may face outwards (the 'heads-out formation'). Other than sperm whales, southern right whales have been observed to perform similar formations. However, formations in non-dangerous situations have been recorded as well. Early whalers exploited this behaviour, attracting a whole unit by injuring one of its members. Such a tactic is described in Moby-Dick: "Say you strike a Forty-barrel-bull—poor devil! all his comrades quit him. But strike a member of the harem school, and her companions swim around her with every token of concern, sometimes lingering so near her and so long, as themselves to fall a prey."
If the killer whale pod is large, its members may sometimes be able to kill adult female sperm whales and can at least injure an entire pod of sperm whales. Bulls have no predators, and are believed to be too large, powerful and aggressive to be threatened by killer whales. Solitary bulls are known to interfere and come to the aid of vulnerable groups nearby. However, a bull sperm whale accompanying a pod of female sperm whales and their calves may reportedly be unable to effectively dissuade killer whales from their attacks on the group, although the killer whales may end the attack sooner when a bull is present.
However, male sperm whales have been observed to attack and intimidate killer whale pods in competitive feeding instances. An incident was filmed from a long-line trawler: a killer whale pod was systematically taking fish caught on the trawler's long lines (as the lines were being pulled into the ship) when a male sperm whale appeared to repeatedly charge the killer whale pod in an attempt to drive them away; it was speculated by the film crew that the sperm whale was attempting to access the same fish. The killer whales employed a tail outward and tail-slapping defensive position against the bull sperm whale similar to that used by female sperm whales against attacking killer whales. However, at some potential feeding sites, the killer whales may prevail over sperm whales even when outnumbered by the sperm whales. Some authors consider the killer whales "usually" behaviorally dominant over sperm whales but express that the two species are "fairly evenly matched", with the killer whales' greater aggression, more considerable biting force for their size and predatory prowess more than compensating for their smaller size.
Sperm whales are not known for forging bonds with other species, but it was observed that a bottlenose dolphin with a spinal deformity had been accepted into a pod of sperm whales. They are known to swim alongside other cetaceans such as humpback, fin, minke, pilot, and killer whales on occasion.
Parasites
Sperm whales can suffer from parasites.
Out of 35 sperm whales caught during the 1976–1977 Antarctic whaling season, all of them were infected by Anisakis physeteris (in their stomachs) and Phyllobothrium delphini (in their blubber).
Both whales with a placenta were infected with Placentonema gigantissima, potentially the largest nematode worm ever described.
Evolutionary history
Fossil record
Although the fossil record is poor, several extinct genera have been assigned to the clade Physeteroidea, which includes the last common ancestor of the modern sperm whale, pygmy sperm whales, dwarf sperm whales, and extinct physeteroids. These fossils include Ferecetotherium, Idiorophus, Diaphorocetus, Aulophyseter, Orycterocetus, Scaldicetus, Placoziphius, Zygophyseter and Acrophyseter. Ferecetotherium, found in Azerbaijan and dated to the late Oligocene (about ), is the most primitive fossil that has been found, which possesses sperm whale-specific features, such as an asymmetric rostrum ("beak" or "snout"). Most sperm whale fossils date from the Miocene period, . Diaphorocetus, from Argentina, has been dated to the early Miocene. Fossil sperm whales from the Middle Miocene include Aulophyseter, Idiorophus and Orycterocetus, all of which were found on the West Coast of the United States, and Scaldicetus, found in Europe and Japan. Orycterocetus fossils have also been found in the North Atlantic Ocean and the Mediterranean Sea, in addition to the west coast of the United States. Placoziphius, found in Europe, and Acrophyseter, from Peru, are dated to the late Miocene.
Fossil sperm whales differ from modern sperm whales in tooth count and the shape of the face and jaws. For example, Scaldicetus had a tapered rostrum. Genera from the Oligocene and early and middle Miocene, with the possible exception of Aulophyseter, had teeth in their upper jaws. Acrophyseter, from the late Miocene, also had teeth in both the upper and lower jaws as well as a short rostrum and an upward curving mandible (lower jaw). These anatomical differences suggest that fossil species may not have necessarily been deep-sea squid eaters such as the modern sperm whale, but that some genera mainly ate fish. Zygophyseter, dated from the middle to late Miocene and found in southern Italy, had teeth in both jaws and appears to have been adapted to feed on large prey, rather like the modern killer whale (orca). Other fossil sperm whales with adaptations similar to this are collectively known as killer sperm whales.
Two poorly known fossil species belonging to the modern genus Physeter have been recognized so far: P. antiquus (Neogene of France) and P. vetus (Neogene of eastern North America). Physeter vetus is very likely an invalid species, as the few teeth that were used to identify this species appear to be identical to those of another toothed whale, Orycterocetus quadratidens.
Phylogeny
The traditional view has been that Mysticeti (baleen whales) and Odontoceti (toothed whales) arose from more primitive whales early in the Oligocene period, and that the super-family Physeteroidea, which contains the sperm whale, dwarf sperm whale, and pygmy sperm whale, diverged from other toothed whales soon after that, over . From 1993 to 1996, molecular phylogenetics analyses by Milinkovitch and colleagues, based on comparing the genes of various modern whales, suggested that the sperm whales are more closely related to the baleen whales than they are to other toothed whales, which would have meant that Odontoceti were not monophyletic; in other words, it did not consist of a single ancestral toothed whale species and all its descendants. However, more recent studies, based on various combinations of comparative anatomy and molecular phylogenetics, criticised Milinkovitch's analysis on technical grounds and reaffirmed that the Odontoceti are monophyletic.
These analyses also confirm that there was a rapid evolutionary radiation (diversification) of the Physeteroidea in the Miocene period. The Kogiidae (dwarf and pygmy sperm whales) diverged from the Physeteridae (true sperm whales) at least .
Usage by humans
Sperm whaling
Spermaceti, obtained primarily from the spermaceti organ, and sperm oil, obtained primarily from the blubber in the body, were much sought after by 18th, 19th, and 20th century whalers. These substances found a variety of commercial applications, such as candles, soap, cosmetics, machine oil, other specialised lubricants, lamp oil, pencils, crayons, leather waterproofing, rust-proofing materials and many pharmaceutical compounds. Ambergris, a highly expensive, solid, waxy, flammable substance produced in the digestive system of sperm whales, was also sought as a fixative in perfumery.
Prior to the early eighteenth century, hunting was mostly by indigenous Indonesians. Legend has it that sometime in the early 18th century, around 1712, Captain Christopher Hussey, while cruising for right whales near shore, was blown offshore by a northerly wind, where he encountered a sperm whale pod and killed one. Although the story may not be true, sperm whales were indeed soon exploited by American whalers. Judge Paul Dudley, in his Essay upon the Natural History of Whales (1725), states that a certain Atkins, 10 or 12 years in the trade, was among the first to catch sperm whales sometime around 1720 off the New England coast.
There were only a few recorded instances of offshore sperm whaling during its first few decades (1709–1730s). Instead, sloops concentrated on the Nantucket Shoals, where they took right whales, or went to the Davis Strait region to catch bowhead whales. By the early 1740s, with the advent of spermaceti candles (before 1743), American vessels began to focus on sperm whales. The diary of Benjamin Bangs (1721–1769) shows that, along with the bumpkin sloop he sailed, he found three other sloops flensing sperm whales off the coast of North Carolina in late May 1743. On returning to Nantucket in the summer of 1744 on a subsequent voyage, he noted that "45 spermacetes are brought in here this day," another indication that American sperm whaling was in full swing.
American sperm whaling soon spread from the east coast of the American colonies to the Gulf Stream, the Grand Banks, West Africa (1763), the Azores (1765), and the South Atlantic (1770s). From 1770 to 1775, Massachusetts, New York, Connecticut, and Rhode Island ports produced 45,000 barrels of sperm oil annually, compared to 8,500 of whale oil. In the same decade, the British began sperm whaling, employing American ships and personnel. By the following decade, the French had entered the trade, also employing American expertise. Sperm whaling increased until the mid-nineteenth century. Spermaceti oil was important in public lighting (for example, in lighthouses, where it was used in the United States until 1862, when it was replaced by lard oil, in turn replaced by petroleum) and for lubricating the machines (such as those used in cotton mills) of the Industrial Revolution. Sperm whaling declined in the second half of the nineteenth century, as petroleum came into broader use. In that sense, petroleum use may be said to have protected whale populations from even greater exploitation.
Sperm whaling in the 18th century began with small sloops carrying only one or two whaleboats. The fleet's scope and size increased over time, and larger ships entered the fishery. In the late 18th century and early 19th century, sperm whaling ships sailed to the equatorial Pacific, the Indian Ocean, Japan, the coast of Arabia, Australia and New Zealand. Hunting could be dangerous to the crew, since sperm whales (especially bulls) will readily fight to defend themselves against attack, unlike most baleen whales. When dealing with a threat, sperm whales will use their huge head effectively as a battering ram. Arguably the most famous sperm whale counter-attack occurred on 20 November 1820, when a whale claimed to be about long rammed and sank the Nantucket whaleship Essex. Only 8 out of 21 sailors survived to be rescued by other ships.
The sperm whale's ivory-like teeth were often sought by 18th- and 19th-century whalers, who used them to produce inked carvings known as scrimshaw. Thirty of the sperm whale's teeth can be used for ivory. Each of these teeth, up to and across, is hollow for the first half of its length. Like walrus ivory, sperm whale ivory has two distinct layers. However, sperm whale ivory contains a much thicker inner layer. Though a widely practised art in the 19th century, scrimshaw using genuine sperm whale ivory declined substantially after the retirement of the whaling fleets in the 1880s.
Modern whaling was more efficient than open-boat whaling, employing steam-powered ships and exploding harpoons. Initially, modern whaling activity focused on large baleen whales, but as these populations were taken, sperm whaling increased. Spermaceti, the fine waxy oil produced by sperm whales, was in high demand. In both the 1941–1942 and 1942–1943 seasons, Norwegian expeditions took over 3,000 sperm whales off the coast of Peru alone. After World War II, whaling continued unabated to obtain oil for cosmetics and high-performance machinery, such as automobile transmissions.
The hunting led to the near-extinction of large whales, including sperm whales, until bans on whale oil use were instituted in 1972. The International Whaling Commission gave the species full protection in 1985, but hunting by Japan in the northern Pacific Ocean continued until 1988.
It is estimated that the historic worldwide population numbered 1,100,000 before commercial sperm whaling began in the early 18th century. By 1880, it had declined by an estimated 29 percent. From that date until 1946, the population appears to have partially recovered as whaling activity decreased, but after the Second World War, the population declined even further, to 33 percent of the pre-whaling population. Between 184,000 and 236,000 sperm whales were killed by the various whaling nations in the 19th century, while in the 20th century, at least 770,000 were taken, the majority between 1946 and 1980.
Sperm whales increase levels of primary production and carbon export by depositing iron-rich faeces into surface waters of the Southern Ocean. The iron-rich faeces cause phytoplankton to grow and take up more carbon from the atmosphere. When the phytoplankton dies, it sinks to the deep ocean and takes the atmospheric carbon with it. By reducing the abundance of sperm whales in the Southern Ocean, whaling has resulted in an extra 2 million tonnes of carbon remaining in the atmosphere each year.
Remaining sperm whale populations are large enough that the species' conservation status is rated as vulnerable rather than endangered. However, the recovery from centuries of commercial whaling is a slow process, particularly in the South Pacific, where the toll on breeding-age males was severe.
Conservation status
The total number of sperm whales in the world is unknown, but is thought to be in the hundreds of thousands. The conservation outlook is brighter than for many other whales. Commercial whaling has ceased, and the species is protected almost worldwide, though records indicate that in the 11-year period starting in 2000, Japanese vessels caught 51 sperm whales. Fishermen do not target sperm whales to eat, but long-line fishing operations in the Gulf of Alaska have complained about sperm whales "stealing" fish from their lines.
Since the 2000s, entanglement in fishing nets and collisions with ships have represented the greatest threats to the sperm whale population. Other threats include ingestion of marine debris, ocean noise, and chemical pollution. The International Union for Conservation of Nature (IUCN) regards the sperm whale as being "vulnerable". The species is listed as endangered under the United States Endangered Species Act.
Sperm whales are listed on Appendix I and Appendix II of the Convention on the Conservation of Migratory Species of Wild Animals (CMS). It is listed on Appendix I as this species has been categorized as being in danger of extinction throughout all or a significant proportion of their range and CMS Parties strive towards strictly protecting these animals, conserving or restoring the places where they live, mitigating obstacles to migration and controlling other factors that might endanger them. It is listed on Appendix II as it has an unfavourable conservation status or would benefit significantly from international co-operation organised by tailored agreements. It is also covered by the Agreement on the Conservation of Cetaceans in the Black Sea, Mediterranean Sea and Contiguous Atlantic Area (ACCOBAMS) and the Memorandum of Understanding for the Conservation of Cetaceans and Their Habitats in the Pacific Islands Region (Pacific Cetaceans MOU).
The species is protected under Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). This makes commercial international trade (including in parts and derivatives) prohibited, with all other international trade strictly regulated through a system of permits and certificates.
Cultural importance
Rope-mounted teeth are important cultural objects throughout the Pacific. In New Zealand, the Māori know them as "rei puta"; such whale tooth pendants were rare objects because sperm whales were not actively hunted in traditional Māori society. Whale ivory and bone were taken from beached whales. In Fiji the teeth are known as tabua, traditionally given as gifts for atonement or esteem (called sevusevu), and were important in negotiations between rival chiefs. Friedrich Ratzel in The History of Mankind reported in 1896 that, in Fiji, whales' or cachalots' teeth were the most-demanded article of ornament or value. They occurred often in necklaces. Today the tabua remains an important item in Fijian life. The teeth were originally rare in Fiji and Tonga, which exported teeth, but with the Europeans' arrival, teeth flooded the market and this "currency" collapsed. The oversupply led in turn to the development of the European art of scrimshaw.
Herman Melville's novel Moby-Dick is based on a true story about a sperm whale that attacked and sank the whaleship Essex. Melville associated the sperm whale with the Bible's Leviathan. The fearsome reputation perpetuated by Melville was based on bull whales' ability to fiercely defend themselves from attacks by early whalers, smashing whaling boats and, occasionally, attacking and destroying whaling ships.
In Jules Verne's 1870 novel Twenty Thousand Leagues Under the Seas, the Nautilus fights a group of "cachalots" (sperm whales) to protect a pod of southern right whales from their attacks. Verne portrays them as being savage hunters ("nothing but mouth and teeth").
The sperm whale was designated as the Connecticut state animal by the General Assembly in 1975. It was selected because of its specific contribution to the state's history and because of its present-day plight as an endangered species.
Watching sperm whales
Sperm whales are not the easiest of whales to watch, due to their long dive times and ability to travel long distances underwater. However, due to the distinctive look and large size of the whale, watching is increasingly popular. Sperm whale watchers often use hydrophones to listen to the clicks of the whales and locate them before they surface. Popular locations for sperm whale watching include the town of Kaikōura on New Zealand's South Island, Andenes and Tromsø in Arctic Norway; as well as the Azores, where the continental shelf is so narrow that whales can be observed from the shore, and Dominica where a long-term scientific research program, The Dominica Sperm Whale Project, has been in operation since 2005.
Plastic waste
The introduction of plastic waste to the ocean environment by humans is relatively new. From the 1970s, sperm whales have occasionally been found with pieces of plastic in their stomachs.
See also
List of sperm whale strandings
List of cetaceans
List of individual cetaceans
Marine biology
Livyatan
Notes
References
Further reading
External links
The Dominica Sperm Whale Project – a long-term scientific research program focusing on the behaviour of sperm whale units.
Spermaceti in candles 22 July 2007
Society for Marine Mammalogy Sperm Whale Fact Sheet
US National Marine Fisheries Service Sperm Whale web page
70South—information on the sperm whale
"Physty"-stranded sperm whale nursed back to health and released in 1981
ARKive—Photographs, video.
Whale Trackers—An online documentary film exploring the sperm whales in the Mediterranean Sea.
Convention on Migratory Species page on the sperm whale
Website of the Memorandum of Understanding for the Conservation of Cetaceans and Their Habitats in the Pacific Islands Region
Official website of the Agreement on the Conservation of Cetaceans in the Black Sea, Mediterranean Sea and Contiguous Atlantic Area
Retroposon analysis of major cetacean lineages: The monophyly of toothed whales and the paraphyly of river dolphins 19 June 2001
Voices in the Sea – sounds of the sperm whale
Sperm whales quickly learned to avoid humans who were hunting them in the 19th century, scientists say. ABC News. 16 March 2021.
Apex predators
Mammals described in 1758
EDGE species
Sperm whales
Symbols of Connecticut
Cosmopolitan mammals
Articles containing video clips
Animals that use echolocation
Taxa named by Carl Linnaeus
ESA endangered species | Sperm whale | [
"Biology"
] | 12,139 | [
"EDGE species",
"Biodiversity"
] |
313,565 | https://en.wikipedia.org/wiki/Cognitivism%20%28psychology%29 | In psychology, cognitivism is a theoretical framework for understanding the mind that gained credence in the 1950s. The movement was a response to behaviorism, which cognitivists said neglected to explain cognition. Cognitive psychology derived its name from the Latin cognoscere, referring to knowing and information, thus cognitive psychology is an information-processing psychology derived in part from earlier traditions of the investigation of thought and problem solving.
Behaviorists acknowledged the existence of thinking but identified it as a behavior. Cognitivists argued that the way people think impacts their behavior and therefore cannot be a behavior in and of itself. Cognitivists later claimed that thinking is so essential to psychology that the study of thinking should become its own field. However, cognitivists typically presuppose a specific form of mental activity, of the kind advanced by computationalism.
Cognitivism has more recently been challenged by postcognitivism.
Cognitive development
The process of assimilating knowledge and expanding our intellectual horizons is termed cognitive development. Humans have a complex physiological structure that absorbs a variety of stimuli from the environment, stimuli being the interactions that are able to produce knowledge and skills. Parents impart knowledge informally in the home, while teachers impart knowledge formally in school. Knowledge should be pursued with zest and zeal; if not, learning becomes a burden.
Attention
Attention is the first part of cognitive development. It pertains to a person's ability to focus and sustain concentration, and to how focused an individual is when giving their full concentration to one thing. It is differentiated from other temperamental characteristics, such as persistence and distractibility, in that the latter modulate an individual's daily interaction with the environment, whereas attention involves behavior when performing specific tasks. Learning, for instance, takes place when the student gives attention to the teacher. Interest and effort closely relate to attention. Attention is an active process which involves numerous outside stimuli. The attention of an organism at any point in time involves three concentric circles: beyond awareness, margin, and focus. Individuals have a limited mental capacity; there are only so many things someone can focus on at one time.
A theory of cognitive development called information processing holds that memory and attention are the foundation of cognition. It is suggested that children's attention is initially selective and is based on situations that are important to their goals. This capacity increases as the child grows older since they are more able to absorb stimuli from tasks. Another conceptualization classified attention into mental attention and perceptual attention. The former is described as the executive-driven attentional "brain energy" that activates task-relevant processes in the brain while the latter are immediate or spontaneous attention driven by novel perceptual experiences.
Process of learning
Cognitive theory mainly stresses the acquisition of knowledge and growth of the mental structure. Cognitive theory tends to focus on conceptualizing the student's learning process: how information is received; how information is processed and organized into existing schema; how information is retrieved upon recall. In other words, cognitive theory seeks to explain the process of knowledge acquisition and the subsequent effects on the mental structures within the mind. Learning is not about the mechanics of what a learner does, but rather a process depending on what the learner already knows (existing information) and their method of acquiring new knowledge (how they integrate new information into their existing schemas). Knowledge acquisition is an activity consisting of internal codification of mental structures within the student's mind. Inherent to the theory, the student must be an active participant in their own learning process. Cognitive approaches mainly focus on the mental activities of the learner like mental planning, goal setting, and organizational strategies.
In cognitive theories, environmental factors and instructional components are not the only things that play an important role in learning. There are additional key elements, such as learning to code, transform, rehearse, store, and retrieve information. The learning process also includes the learner's thoughts, beliefs, attitudes, and values.
Role of memory
Memory plays a vital role in the learning process. Information is stored within memory in an organised, meaningful manner. Here, teachers and designers play different roles in the learning process. Teachers supposedly facilitate learning and the organization of information in an optimal way, whereas designers supposedly use advanced techniques (such as analogies, mnemonic devices, and hierarchical relationships) to help learners acquire new information and add it to their prior knowledge. Forgetting is described as an inability to retrieve information from memory. Memory loss may be a mechanism used to discard situationally irrelevant information by assessing the relevance of newly acquired information.
Process of transfer
According to cognitive theory, if a learner knows how to implement knowledge in different contexts and conditions, then we can say that transfer has occurred. Understanding is composed of knowledge - in the form of rules, concepts and discrimination. Knowledge stored in memory is important, but the use of such knowledge is also important. Prior knowledge will be used for identifying similarities and differences between itself and novel information.
Types of learning explained in detail by this position
Cognitive theory mostly explains complex forms of learning in terms of reasoning, problem solving and information processing. Emphasis must be placed on the fact that the goal of all aforementioned viewpoints is considered to be the same - the transfer of knowledge to the student in the most efficient and effective manner possible. Simplification and standardization are two techniques used to enhance the effectiveness and efficiency of knowledge transfer. Knowledge can be analysed, decomposed and simplified into basic building blocks. There is a correlation with the behaviorist model of the knowledge transfer environment. Cognitivists stress the importance of efficient processing strategies.
Basic principles of the cognitive theory and relevance to instructional design
A behaviorist uses feedback (reinforcement) to change behavior in the desired direction, while a cognitivist uses feedback to guide and support accurate mental connections.
For different reasons, analyses of the learner and the task are critical to both cognitivists and behaviorists. Cognitivists look at the learner's predisposition to learning (how does the learner activate, maintain, and direct their learning?). Additionally, cognitivists examine how to design instruction so that it can be assimilated (i.e., what are the learner's existing mental structures?). In contrast, behaviorists look to determine where the lesson should begin (i.e., at what level are the learners performing successfully?) and what the most effective reinforcements are (i.e., what consequences are most desired by the learner?).
There are some specific assumptions or principles that direct the instructional design: active involvement of the learner in the learning process, learner control, metacognitive training (e.g., self-planning, monitoring, and revising techniques), the use of hierarchical analyses to identify and illustrate prerequisite relationships (cognitive task analysis procedure), facilitating optimal processing of structuring, organizing and sequencing information (use of cognitive strategies such as outlining, summaries, synthesizers, advance organizers etc.), encouraging the students to make connections with previously learned material, and creating learning environments (recall of prerequisite skills; use of relevant examples, analogies).
Structuring instruction
Cognitive theories emphasize mainly making knowledge meaningful and helping learners organize and relate new information to existing knowledge in memory. To be effective, instruction should be based on students' existing schemata or mental structures, with new information organized so that it relates to existing knowledge in some meaningful way. Examples of cognitive strategies include the use of analogies and metaphors, framing, outlining, mnemonics, concept mapping, advance organizers, and so forth. Cognitive theory also emphasizes the major tasks of the teacher or designer, which include analyzing the learning experiences that individuals bring to the learning situation, since these can affect learning outcomes.
Organizing and structuring the new information to connect with the learners' previously acquired knowledge, abilities, and experiences.
Ensuring the new information is effectively and efficiently assimilated or accommodated within the learner's cognitive structure.
Theoretical approach
Cognitivism has two major components, one methodological, the other theoretical. Methodologically, cognitivism adopts a positivist approach and holds that psychology can (in principle) be fully explained by the use of the scientific method; there is speculation on whether or not this is true. This is also largely a reductionist goal, with the belief that individual components of mental function (the 'cognitive architecture') can be identified and meaningfully understood. The theoretical component holds that cognition consists of discrete, internal mental states (representations or symbols) that can be manipulated using rules or algorithms.
Cognitivism became the dominant force in psychology in the late-20th century, replacing behaviorism as the most popular paradigm for understanding mental function. Cognitive psychology is not a wholesale refutation of behaviorism, but rather an expansion that accepts that mental states exist. This was due to the increasing criticism towards the end of the 1950s of simplistic learning models. One of the most notable criticisms was Noam Chomsky's argument that language could not be acquired purely through conditioning, and must be at least partly explained by the existence of internal mental states.
The main issues that interest cognitive psychologists are the inner mechanisms of human thought and the processes of knowing. Cognitive psychologists have attempted to shed some light on the alleged mental structures that stand in a causal relationship to our physical actions.
Criticisms of psychological cognitivism
In the 1990s, various new theories emerged that challenged cognitivism and the idea that thought was best described as computation. Some of these new approaches, often influenced by phenomenological and postmodern philosophy, include situated cognition, distributed cognition, dynamicism and embodied cognition. Some thinkers working in the field of artificial life (for example Rodney Brooks) have also produced non-cognitivist models of cognition. On the other hand, much of early cognitive psychology, and the work of many currently active cognitive psychologists, does not treat cognitive processes as computational.
The idea that mental functions can be described as information processing models has been criticised by philosopher John Searle and mathematician Roger Penrose who both argue that computation has some inherent shortcomings which cannot capture the fundamentals of mental processes.
Penrose uses Gödel's incompleteness theorem (which states that there are mathematical truths which can never be proven in a sufficiently strong mathematical system; any sufficiently strong system of axioms will also be incomplete) and Turing's halting problem (which states that there are some things which are inherently non-computable) as evidence for his position.
Searle has developed two arguments. The first (well known through his Chinese room thought experiment) is the 'syntax is not semantics' argument: a program is just syntax, while understanding requires semantics; therefore programs (hence cognitivism) cannot explain understanding. Such an argument presupposes the controversial notion of a private language. The second, which Searle now prefers but which is less well known, is his 'syntax is not physics' argument: nothing in the world is intrinsically a computer program except as applied, described, or interpreted by an observer, so either everything can be described as a computer (and trivially a brain can), in which case nothing specific about mental processes is explained, or there is nothing intrinsic in a brain that makes it a computer (program). Many oppose these views and have criticized his arguments, which have created significant disagreement. Both points, Searle claims, refute cognitivism.
Another argument against cognitivism is the problems of Ryle's Regress or the homunculus fallacy. Cognitivists have offered a number of arguments attempting to refute these attacks.
See also
References
Further reading
Costall, A. and Still, A. (eds) (1987) Cognitive Psychology in Question. Brighton: Harvester Press Ltd.
Searle, J. R. Is the Brain a Digital Computer? APA Presidential Address
Wallace, B., Ross, A., Davies, J. B., and Anderson, T. (eds) (2007) The Mind, the Body and the World: Psychology after Cognitivism. London: Imprint Academic.
Cognitive psychology
Cognitive science
Philosophy of psychology
Psychological concepts
Psychological schools
Psychological theories | Cognitivism (psychology) | [
"Biology"
] | 2,514 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
313,591 | https://en.wikipedia.org/wiki/Spermaceti | Spermaceti is a waxy substance found in the head cavities of the sperm whale (and, in smaller quantities, in the oils of other whales). Spermaceti is created in the spermaceti organ inside the whale's head. This organ may contain as much as of spermaceti. It has been extracted by whalers since the 17th century for human use in cosmetics, textiles, and candles.
Theories for the spermaceti organ's biological function suggest that it may control buoyancy, act as a focusing apparatus for the whale's sense of echolocation, or possibly both. Concrete evidence supports both theories. The buoyancy theory holds that the sperm whale is capable of heating the spermaceti, lowering its density and thus allowing the whale to float; for the whale to sink again, it must take water into its blowhole, which cools the spermaceti into a denser solid. This claim has been called into question by recent research that indicates a lack of biological structures to support this heat exchange, and the fact that the change in density is too small to be meaningful until the organ grows to a huge size. Measurement of the proportion of wax esters retained by a harvested sperm whale accurately described the age and future life expectancy of a given individual. The level of wax esters in the spermaceti organ increases with the age of the whale: 38–51% in calves, 58–87% in adult females, and 71–94% in adult males.
Spermaceti wax is extracted from sperm oil by crystallisation at , when treated by pressure and a chemical solution of caustic alkali. Spermaceti forms brilliant white crystals that are hard but oily to the touch, and are devoid of taste or smell, making it very useful as an ingredient in cosmetics, leatherworking, and lubricants. The substance was also used in making candles of a standard photometric value, in the dressing of fabrics, and as a pharmaceutical excipient, especially in cerates and ointments.
The whaling industry in the 17th and 18th centuries was developed to find, harvest, and refine the contents of the head of a sperm whale. The crews seeking spermaceti routinely left on three-year tours of several oceans. Cetaceous lamp oil was a commodity that created many maritime fortunes. Candlepower, a photometric unit defined in the United Kingdom's Metropolitan Gas Act 1860 and adopted at the International Electrotechnical Conference of 1883, was based on the light produced by a single, pure spermaceti candle.
Etymology
Spermaceti is derived from Medieval Latin sperma ceti, meaning "whale sperm" (from Latin sperma meaning "semen" or "seed", and ceti, the genitive form of "whale"). The substance was initially believed to be whale semen, due to its appearance when fresh. The substance is also the origin of the name of the sperm whale.
Properties
Raw spermaceti is liquid within the head of the sperm whale, and is said to have a smell similar to raw milk. It is composed mostly of wax esters (chiefly cetyl palmitate) and a smaller proportion of triglycerides. Unlike in other toothed whales, most of the carbon chains in the wax esters are relatively long. The blubber oil of the whale is about 66% wax. When it cools to 30 °C or below, the waxes begin to solidify. The speed of sound in spermaceti is 2,684 m/s (at 40 kHz, 36 °C), making it nearly twice as good a conductor of sound as the oil in a dolphin's melon.
Spermaceti is insoluble in water, very slightly soluble in cold ethanol, but easily dissolved in ether, chloroform, carbon disulfide, and boiling ethanol. Spermaceti consists principally of cetyl palmitate (the ester of cetyl alcohol and palmitic acid). Simple triglycerides are seen as well.
A botanical alternative to spermaceti is a derivative of jojoba oil, jojoba esters, a solid wax which is chemically and physically very similar to spermaceti and may be used in many of the same applications.
Biological function
Currently, disagreement exists on what biological purpose or purposes spermaceti serves. The proportion of wax esters retained by an average (living) whale head appears to reflect buoyancy influenced by heat. Changes in density likely enhance echolocation. It might be used as a means of adjusting the whale's buoyancy, since the density of the spermaceti changes with its phase. Another hypothesis has been that it is used as a cushion to protect the sperm whale's delicate snout while diving.
The most likely primary function of the spermaceti organ is to add internal echo or resonator clicks to the sonar echolocation clicks emitted by the respiratory organs. This makes it possible for the whale to sense the motion and position of its prey, since the changing distance to the prey affects the time interval between the returning clicks reflected from it. This would explain the low density and high compressibility of the spermaceti, which enhance the resonance through the contrast between the acoustic properties of the sea water and of the hard tissue surrounding the spermaceti.
Spermaceti processing
After killing a sperm whale, the whalers would pull the carcass alongside the ship, cut off the head and pull it on deck. Then, they would cut a hole in it and bail out the matter inside with a bucket. The harvested matter, raw spermaceti, was stored in casks to be processed back on land. A large whale could yield as much as . The spermaceti was boiled and strained of impurities to prevent it from going rancid. On land, the casks were allowed to chill during the winter, causing the spermaceti to congeal into a spongy and viscous mass. The congealed matter was then loaded into wool sacks and placed in a press to squeeze out the liquid. This liquid was bottled and sold as "winter-strained sperm oil". This was the most valuable product - an oil that remained liquid in freezing winter temperatures.
Later, during the warmer seasons, the leftover solid was allowed to partially melt, and the liquid was strained off to leave a fully solid wax. This wax, brown in color, was then bleached and sold as "spermaceti wax". Spermaceti wax is white and translucent. It melts at about and congeals at .
Gallery
See also
Whale oil
References
Further reading
Waxes
Whale products
Cosmetics chemicals
Articles containing video clips
Animal fats | Spermaceti | [
"Physics"
] | 1,386 | [
"Materials",
"Matter",
"Waxes"
] |
313,670 | https://en.wikipedia.org/wiki/We%20Shall%20Overcome | "We Shall Overcome" is a gospel song that is associated heavily with the U.S. civil rights movement. The origins of the song are unclear; it was thought to have descended from "I'll Overcome Some Day," a hymn by Charles Albert Tindley, while the modern version of the song was first said to have been sung by tobacco workers led by Lucille Simmons during the 1945–1946 Charleston Cigar Factory strike in Charleston, South Carolina.
In 1947, the song was published under the title "We Will Overcome" in an edition of the People's Songs Bulletin, as a contribution of and with an introduction by Zilphia Horton, then-music director of the Highlander Folk School of Monteagle, Tennessee—an adult education school that trained union organizers. She taught it to many others, including People's Songs director Pete Seeger, who included it in his repertoire, as did many other activist singers, such as Frank Hamilton and Joe Glazer.
In 1959, the song began to be associated with the civil rights movement as a protest song, when Guy Carawan stepped in with his and Seeger's version as song leader at Highlander, which was then focused on nonviolent civil rights activism. It quickly became the movement's unofficial anthem. Seeger and other famous folksingers in the early 1960s, such as Joan Baez, sang the song at rallies, folk festivals, and concerts in the North and helped make it widely known. Since its rise to prominence, the song, and songs based on it, have been used in a variety of protests worldwide.
The U.S. copyright of the People's Songs Bulletin issue which contained "We Will Overcome" expired in 1976, but The Richmond Organization (TRO) asserted a copyright on the "We Shall Overcome" lyrics, registered in 1960. In 2017, in response to a lawsuit against TRO over allegations of false copyright claims, a U.S. judge issued an opinion that the registered work was insufficiently different from the "We Will Overcome" lyrics that had fallen into the public domain because of non-renewal. In January 2018, the company agreed to a settlement under which it would no longer assert any copyright claims over the song.
Origins as gospel, folk, and labor song
"I'll Overcome Some Day" was a hymn or gospel music composition by the Reverend Charles Albert Tindley of Philadelphia that was first published in 1901. A noted minister of the Methodist Episcopal Church, Tindley was the author of approximately 50 gospel hymns, of which "We'll Understand It By and By" and "Stand By Me" are among the best known. The published text bore the epigraph, "Ye shall overcome if ye faint not", derived from Galatians 6:9: "And let us not be weary in doing good, for in due season we shall reap, if we faint not." The first stanza began:
Tindley's songs were written in an idiom rooted in African American folk traditions, using pentatonic intervals, with ample space allowed for improvised interpolation, the addition of "blue" thirds and sevenths, and frequently featuring short refrains in which the congregation could join. Tindley's importance, however, was primarily as a lyricist and poet whose words spoke directly to the feelings of his audiences, many of whom had been freed from slavery only 36 years before he first published his songs, and were often impoverished, illiterate, and newly arrived in the North. "Even today," wrote musicologist Horace Boyer in 1983, "ministers quote his texts in the midst of their sermons as if they were poems, as indeed they are."
A letter printed on the front page of the February 1909 United Mine Workers Journal states: "Last year at a strike, we opened every meeting with a prayer, and singing that good old song, 'We Will Overcome'." This statement implied that the song was well known, and it was also the first acknowledgment of such a song having been sung in both a secular context and a mixed-race setting.
Tindley's "I'll Overcome Some Day" was believed to have influenced the structure for "We Shall Overcome", with both the text and the melody having undergone a process of alteration. The tune has been changed so that it now echoes the opening and closing melody of "No More Auction Block For Me", also known from its refrain as "Many Thousands Gone". This was number 35 in Thomas Wentworth Higginson's collection of Negro Spirituals that appeared in the Atlantic Monthly of June 1867, with a comment by Higginson reflecting on how such songs were composed (i.e., whether the work of a single author or through what used to be called "communal composition"):
Bob Dylan used the same melodic motif from "No More Auction Block" for his composition, "Blowin' in the Wind". Thus similarities of melodic and rhythmic patterns imparted cultural and emotional resonance ("the same feeling") towards three different, and historically very significant songs.
Music scholars have also pointed out that the first half of "We Shall Overcome" bears a notable resemblance to the famous lay Catholic hymn "O Sanctissima", also known as "The Sicilian Mariners Hymn", first published by a London magazine in 1792 and then by an American magazine in 1794 and widely circulated in American hymnals. The second half of "We Shall Overcome" is essentially the same music as the 19th-century hymn "I'll Be All Right". As Victor Bobetsky summarized in his 2015 book on the subject: "'We Shall Overcome' owes its existence to many ancestors and to the constant change and adaptation that is typical of the folk music process."
Role of the Highlander Folk School
In October 1945 in Charleston, South Carolina, members of the Food, Tobacco, Agricultural, and Allied Workers union (FTA-CIO), who were mostly female and African American, began a five-month strike against the American Tobacco Company. To keep up their spirits during the cold, wet winter of 1945–1946, one of the strikers, a woman named Lucille Simmons, led a slow "long meter style" version of the gospel hymn, "We'll Overcome (I'll Be All Right)" to end each day's picketing. Union organizer Zilphia Horton, who was the wife of the co-founder of the Highlander Folk School (later Highlander Research and Education Center), said she learned it from Simmons. Horton was Highlander's music director during 1935–1956, and it became her custom to end group meetings each evening by leading this, her favorite song. During the presidential campaign of Henry A. Wallace, "We Will Overcome" was printed in Bulletin No. 3 (September 1948), 8, of People's Songs, with an introduction by Horton saying that she had learned it from the interracial FTA-CIO workers and had found it to be extremely powerful. Pete Seeger, a founding member of People's Songs and its director for three years, learned it from Horton's version in 1947. Seeger writes: "I changed it to 'We shall'... I think I liked a more open sound; 'We will' has alliteration to it, but 'We shall' opens the mouth wider; the 'i' in 'will' is not an easy vowel to sing well ...." Seeger also added some verses ("We'll walk hand in hand" and "The whole wide world around").
In 1950, the CIO's Department of Education and Research released the album, Eight New Songs for Labor, sung by Joe Glazer ("Labor's Troubador"), and the Elm City Four. (Songs on the album were: "I Ain't No Stranger Now", "Too Old to Work", "That's All", "Humblin' Back", "Shine on Me", "Great Day", "The Mill Was Made of Marble", and "We Will Overcome".) During a Southern CIO drive, Glazer taught the song to country singer Texas Bill Strength, who cut a version that was later picked up by 4-Star Records.
The song made its first recorded appearance as "We Shall Overcome" (rather than "We Will Overcome") in 1952 on a disc recorded by Laura Duncan (soloist) and The Jewish Young Singers (chorus), conducted by Robert De Cormier, co-produced by Ernie Lieberman and Irwin Silber on Hootenany Records (Hoot 104-A) (Folkways, FN 2513, BCD15720), where it is identified as a Negro Spiritual.
Frank Hamilton, a folk singer from California who was a member of People's Songs and later The Weavers, picked up Seeger's version. Hamilton's friend and traveling companion, fellow-Californian Guy Carawan, learned the song from Hamilton. Carawan and Hamilton, accompanied by Ramblin Jack Elliot, visited Highlander in the early 1950s where they also would have heard Zilphia Horton sing the song. In 1957, Seeger sang for a Highlander audience that included Dr. Martin Luther King Jr., who remarked on the way to his next stop, in Kentucky, about how much the song had stuck with him. When, in 1959, Guy Carawan succeeded Horton as music director at Highlander, he reintroduced it at the school. It was the young (many of them teenagers) student-activists at Highlander, however, who gave the song the words and rhythms for which it is currently known, when they sang it to keep their spirits up during the frightening police raids on Highlander and their subsequent stays in jail in 1959–1960. Because of this, Carawan has been reluctant to claim credit for the song's widespread popularity. In the PBS video We Shall Overcome, Julian Bond credits Carawan with teaching and singing the song at the founding meeting of the Student Nonviolent Coordinating Committee in Raleigh, North Carolina, in 1960. From there, it spread orally and became an anthem of Southern African American labor union and civil rights activism. Seeger has also publicly, in concert, credited Carawan with the primary role of teaching and popularizing the song within the civil rights movement.
Use in the 1960s civil rights and other protest movements
In August 1963, 22-year-old folk singer Joan Baez led a crowd of 3,000 in singing "We Shall Overcome" at the Lincoln Memorial during A. Philip Randolph's March on Washington. President Lyndon Johnson, himself a Southerner, used the phrase "we shall overcome" in addressing Congress on March 15, 1965, in a speech delivered after the violent "Bloody Sunday" attacks on civil rights demonstrators during the Selma to Montgomery marches, thus legitimizing the protest movement.
Four days before the April 4, 1968 assassination of Martin Luther King Jr., King recited the words from "We Shall Overcome" in his final sermon, delivered in Memphis on Sunday, March 31. He had done so in a similar sermon he gave previously in 1965 to an interfaith congregation at Temple Israel of Hollywood, California:
"We Shall Overcome" was sung days later by over fifty thousand attendees at the funeral of Martin Luther King Jr.
Farmworkers in the United States later sang the song in Spanish during the strikes and grape boycotts of the late 1960s. The song was notably sung by the U.S. Senator for New York Robert F. Kennedy, when he led anti-Apartheid crowds in choruses from the rooftop of his car while touring South Africa in 1966. It was also the song which Abie Nathan chose to broadcast as the anthem of the Voice of Peace radio station on October 1, 1993, and as a result it found its way back to South Africa in the later years of the Anti-Apartheid Movement.
William Bradford Reynolds, facing a mounting torrent of criticism for not moving fast enough on civil rights enforcement in the 1980s, sang "We Shall Overcome" hand in hand with Jesse Jackson on a trip to meet with the black communities of the Mississippi Delta.
The Northern Ireland Civil Rights Association adopted "we shall overcome" as a slogan and used it in the title of its retrospective publication, We Shall Overcome – The History of the Struggle for Civil Rights in Northern Ireland 1968–1978. The film Bloody Sunday depicts march leader and Member of Parliament (MP) Ivan Cooper leading the song shortly before 1972's Bloody Sunday shootings. In 1997, the Christian men's ministry, Promise Keepers featured the song on its worship CD for that year: The Making of a Godly Man, featuring worship leader Donn Thomas and the Maranatha! Promise Band. Bruce Springsteen's re-interpretation of the song was included on the 1998 tribute album Where Have All the Flowers Gone: The Songs of Pete Seeger as well as on Springsteen's 2006 album We Shall Overcome: The Seeger Sessions.
Widespread adaptation
"We Shall Overcome" was adopted by various labor, nationalist, and political movements both during and after the Cold War. In his memoir about his years teaching English in Czechoslovakia after the Velvet Revolution, Mark Allen wrote:
The words "We shall overcome" are sung emphatically at the end of each verse in a song of Northern Ireland's civil rights movement, Free the People, which protested against the internment policy of the British Army. The movement in Northern Ireland was keen to emulate the movement in the US and often sang "We shall overcome".
The melody was also used (crediting it to Tindley) in a symphony by American composer William Rowland. In 1999, National Public Radio included "We Shall Overcome" on the "NPR 100" list of most important American songs of the 20th century. As a reference to the line, in 2009, after the first inauguration of Barack Obama as the 44th President of the United States, a man holding the banner, "WE HAVE OVERCOME" was seen near the Capitol, a day after hundreds of people posed with the sign on Martin Luther King Jr. Day.
In 1992, while the gunman known as "Lasermannen" was shooting several immigrants around Stockholm, Prime Minister Carl Bildt and Immigration Minister Birgit Friggebo attended a meeting in Rinkeby. As the audience became upset, Friggebo tried to calm them down by proposing that everyone sing "We Shall Overcome". This statement is widely regarded as one of the most embarrassing moments in Swedish politics. In 2008, the newspaper Svenska Dagbladet listed the Sveriges Television recording of the event as the best political clip available on YouTube.
On June 7, 2010, Roger Waters of Pink Floyd fame released a new version of the song as a protest against the Israeli blockade of Gaza.
On July 22, 2012, Bruce Springsteen performed the song during the memorial-concert in Oslo after the terrorist attacks in Norway on July 22, 2011.
In India, the renowned poet Girija Kumar Mathur composed a literal translation in Hindi "Hum Honge Kaamyab (हम होंगे कामयाब)" which became a popular patriotic/spiritual song during the 1970s and 1980s, particularly in schools. This song also came to be used by the Blue Pilgrims for motivating the India national football team during international matches.
In Bengali-speaking India and Bangladesh, there are two versions, both of which are popular among schoolchildren and political activists. "Amra Korbo Joy" (আমরা করবো জয়) is a literal translation by Bengali folk singer Hemanga Biswas, re-recorded by Bhupen Hazarika. Hazarika, who had heard the song during his days in the United States, also translated the song to Assamese as "Ami hom xophol" (আমি হ'ম সফল). Another version, translated by Shibdas Bandyopadhyay, "Ek Din Shurjer Bhor" (এক দিন সূর্যের ভোর, literally "One Day The Sun Will Rise"), was arranged by Ruma Guha Thakurta and recorded by the Calcutta Youth Choir during the 1971 Bangladesh War of Independence, becoming one of the bestselling Bengali records. It was a favorite of Prime Minister Sheikh Mujibur Rahman, and it was regularly sung at public events after Bangladesh gained its independence in the early 1970s.
In the Indian state of Kerala, a traditional Communist stronghold, the song became popular on college campuses during the late 1970s. It was the struggle song of the Students Federation of India (SFI), the largest student organisation in the country. The song was translated into the local language, Malayalam, as “Nammal Vijayikkum” by SFI activist N. P. Chandrasekharan, using the same tune as the original. Later, it was published in Student, the monthly magazine of SFI in Malayalam, as well as in Sarvadesheeya Ganangal (Mythri Books, Thiruvananthapuram), a translation of international struggle songs.
"We Shall Overcome" was a prominent song in the 2010 Bollywood film My Name is Khan, which compared the struggle of Muslims in modern America with the struggles of African Americans in the past. The song was sung in both English and Hindi in the film, which starred Kajol and Shahrukh Khan.
In 2014, a recording of "We Shall Overcome" arranged by composer Nolan Williams Jr. and featuring mezzo-soprano Denyce Graves was among several works of art, including the poem A Brave and Startling Truth by Maya Angelou, sent to space on the first test flight of the spacecraft Orion.
The Argentine writer and singer María Elena Walsh wrote a Spanish version called "Venceremos".
Celtic punk band Dropkick Murphys released their version of the song as a single and music video in 2022. Their version can also be found on the expanded edition of their 2021 album, Turn Up That Dial.
Copyright status
The copyright status of "We Shall Overcome" was disputed in the late 2010s. A copyright registration was made for the song in 1960, which is credited as an arrangement by Zilphia Horton, Guy Carawan, Frank Hamilton, and Pete Seeger, of a work entitled "I'll Overcome", with no known original author. Horton's heirs, Carawan, Hamilton, and Seeger share the artists' half of the rights, and The Richmond Organization (TRO), which includes Ludlow Music, Essex, Folkways Music, and Hollis Music, holds the publishers' rights, to 50% of the royalty earnings. Seeger explained that he registered the copyright under the advice of TRO, who showed concern that someone else could register it. "At that time we didn't know Lucille Simmons' name", Seeger said. Their royalties go to the "We Shall Overcome" Fund, administered by Highlander under the trusteeship of the "writers". Such funds are purportedly used to give small grants for cultural expression involving African Americans organizing in the U.S. South.
In April 2016, a lawsuit was filed against TRO and Ludlow by the We Shall Overcome Foundation (WSOF), a group led by producer Isaias Gamboa that had been denied permission by TRO-Ludlow to use the song in a documentary on its history. The suit alleged that the TRO-Ludlow copyright claims were invalid because the copyright had not been renewed as required by United States copyright law at the time, and that the copyright of the 1948 People's Songs publication containing "We Will Overcome" had therefore expired in 1976. Additionally, it was argued that the registered copyrights only covered specific arrangements of the tune and "obscure alternate verses", that the registered works "did not contain original works of authorship, except to the extent of the arrangements themselves", and that no record of a work entitled "I'll Overcome" existed in the database of the United States Copyright Office. The suit sought to have the copyright status of the song clarified and the return of all royalties collected by the companies from its usage.
The suit acknowledged that Seeger himself had not claimed to be an author of the song, stating of the song in his autobiography, "No one is certain who changed 'will' to 'shall.' It could have been me with my Harvard education. But Septima Clarke, a Charleston schoolteacher (who was director of education at Highlander and after the civil rights movement was elected year after year to the Charleston, S.C. Board of Education) always preferred 'shall.' It sings better." He also reaffirmed that the decision to copyright the song was a defensive measure, with his publisher apparently warning him that "if you don't copyright this now, some Hollywood types will have a version out next year like 'Come on Baby, We shall overcome tonight'." Furthermore, the liner notes of Seeger's compilation album If I Had a Hammer: Songs of Hope & Struggle contained a summary on the purported history of the song, stating that "We Shall Overcome" was "probably adapted from the 19th-century hymn, 'I'll Be All Right'", and that "I'll Overcome Some Day" was a "possible source" and may have originally been adapted from "I'll Be All Right".
Gamboa had shown interest in investigating the origins of "We Shall Overcome"; in a book entitled We Shall Overcome: Sacred Song On The Devil's Tongue, he notably disputed the song's claimed origins and copyright registration with an alternate theory, suggesting that "We Shall Overcome" was actually derived from "If My Jesus Wills", a hymn by Louise Shropshire that had been composed in the 1930s and had its copyright registered in 1954. The WSOF lawsuit did not invoke this theory, focusing instead on the original belief that the song stemmed from "We Will Overcome". The lawyer backing Gamboa's suit, Mark C. Rifkin, was previously involved in a case that invalidated copyright claims over the song "Happy Birthday to You".
On September 8, 2017, Judge Denise Cote of the Southern District of New York issued an opinion that there were insufficient differences between the first verse of the "We Shall Overcome" lyrics registered by TRO-Ludlow, and the "We Will Overcome" lyrics from People's Songs (specifically, the aforementioned replacement of "will" with "shall", and changing "down in my heart" to "deep in my heart") for it to qualify as a distinct derivative work eligible for its own copyright.
On January 26, 2018, TRO-Ludlow agreed to a final settlement, under which it would no longer claim copyright over the melody or lyrics to "We Shall Overcome". In addition, TRO-Ludlow agreed that the melody and lyrics were thereafter dedicated to the public domain.
See also
Civil rights movement in popular culture
Timeline of the civil rights movement
Christian child's prayer § Spirituals
Notes
References
Dunaway, David King. How Can I Keep from Singing: Pete Seeger (orig. pub. 1981, reissued 1990). Da Capo, New York.
___, "The We Shall Overcome Fund". Highlander Reports, newsletter of the Highlander Research and Education Center, August–November 2004, p. 3.
We Shall Overcome, PBS Home Video 174, 1990, 58 minutes.
Further reading
Sing for Freedom: The Story of the Civil Rights Movement Through Its Songs: Compiled and edited by Guy and Candie Carawan; foreword by Julian Bond (New South Books, 2007), comprising two classic collections of freedom songs: We Shall Overcome (1963) and Freedom Is A Constant Struggle (1968), reprinted in a single edition. The book includes a major new introduction by Guy and Candie Carawan, words and music to the songs, important documentary photographs, and firsthand accounts by participants in the civil rights movement. Available from Highlander Center.
We Shall Overcome! Songs of the Southern Freedom Movement: Julius Lester, editorial assistant. Ethel Raim, music editor: Additional musical transcriptions: Joseph Byrd [and] Guy Carawan. New York: Oak Publications, 1963.
Freedom is a Constant Struggle, compiled and edited by Guy and Candie Carawan. Oak Publications, 1968.
Alexander Tsesis, We Shall Overcome: A History of Civil Rights and the Law. Yale University Press, 2008.
We Shall Overcome: A Song that Changed the World, by Stuart Stotts, illustrated by Terrance Cummings, foreword by Pete Seeger. New York: Clarion Books, 2010.
Sing for Freedom, Folkways Records, produced by Guy and Candie Carawan, and the Highlander Center. Field recordings from 1960 to 1988, with the Freedom Singers, Birmingham Movement Choir, Georgia Sea Island Singers, Doc Reese, Phil Ochs, Pete Seeger, Len Chandler, and many others. Smithsonian-Folkways CD version 1990.
We Shall Overcome: The Complete Carnegie Hall Concert, June 8, 1963, Historic Live recording June 8, 1963. 2-disc set, includes the full concert, starring Pete Seeger, with the Freedom Singers, Columbia # 45312, 1989. Re-released 1997 by Sony as a box CD set.
Voices Of The Civil Rights Movement: Black American Freedom Songs 1960–1966. Box CD set, with the Freedom Singers, Fannie Lou Hamer, and Bernice Johnson Reagon. Smithsonian-Folkways CD ASIN: B000001DJT (1997).
Durman, C 2015, 'We Shall Overcome: Essays on a Great American Song edited by Victor V. Bobetsky', Music Reference Services Quarterly, vol. 8, iss. 3, pp. 185–187
Graham, D 2016, "Who Owns 'We Shall Overcome'?", The Atlantic, 14 April, accessed 28 April 2017, Who Owns 'We Shall Overcome'?
Clark, B. & Borchert, S 2015, "Pete Seeger, Musical Revolutionary", Monthly Review, vol. 66, no. 8, pp. 20–29
External links
Lyrics
Authorized Profile of Guy Carawan with history of the song, "We Shall Overcome" from the Association of Cultural Equity
Freedom in the Air: Albany Georgia. 1961–62. SNCC #101. Recorded by Guy Carawan, produced for the Student Non-Violent Coordinating Committee by Guy Carawan and Alan Lomax. "Freedom In the Air ... is a record of the 1961 protest in Albany, Georgia, when, two weeks before Christmas, 737 people brought the town nearly to a halt to force its integration. The record's never been reissued and that's a shame, as it's a moving document of a community through its protest songs, church services, and experiences in the thick of the civil rights struggle."—Nathan Salsburg, host, Root Hog or Die, East Village Radio, January 2007.
Susanne's Folksong-Notizen, excerpts from various articles, liner notes, etc. about "We Shall Overcome".
Musical Transcription of "We Shall Overcome," based on a recording of Pete Seeger's version, sung with the SNCC Freedom Singers on the 1963 live Carnegie Hall recording, and the 1988 version by Pete Seeger sung at a reunion concert with Pete and the Freedom Singers on the anthology, Sing for Freedom, recorded in the field 1960–88 and edited and annotated by Guy and Candie Carawan, released in 1990 as Smithsonian-Folkways CD SF 40032.
NPR news article including full streaming versions of Pete Seeger's classic 1963 live Carnegie Hall recording and Bruce Springsteen's tribute version.
"Pete Seeger & the story of 'We Shall Overcome'" from 1968 interview on The Pop Chronicles.
"Something About That Song Haunts You", essay on the history of "We Shall Overcome," Complicated Fun, June 9, 2006.
"Howie Richmond Views Craft Of Song: Publishing Giant Celebrates 50 Years As TRO Founder", by Irv Lichtman, Billboard, 8, 28, 1999. Excerpt: "Key folk songs in the [TRO] catalog, as arranged by a number of folklorists, are 'We Shall Overcome,' 'Kisses Sweeter Than Wine' 'On Top Of Old Smokey,' 'So Long, It's Been Good To Know You,' 'Goodnight Irene,' 'If I Had A Hammer,' 'Tom Dooley,' and 'Rock Island Line.'"
1900 songs
American folk songs
American patriotic songs
Gospel songs
Bluegrass songs
American Christian hymns
Songs about freedom
Songs of the civil rights movement
Pete Seeger songs
Joan Baez songs
Songs about racism and xenophobia
Roger Waters songs
Mahalia Jackson songs
Peter, Paul and Mary songs
Articles containing video clips
Songs involved in royalties controversies
Slogans
American political catchphrases
1900 quotations
Motivation
Quotations from music
Protest songs | We Shall Overcome | [
"Biology"
] | 5,974 | [
"Ethology",
"Behavior",
"Motivation",
"Human behavior"
] |
313,690 | https://en.wikipedia.org/wiki/WinGate | WinGate is an integrated multi-protocol proxy server, email server and internet gateway from Qbik New Zealand Limited in Auckland. It was first released in October 1995, and began as a re-write of SocketSet, a product that had been previously released in prototype form by Adrien de Croy.
WinGate proved popular, and by the mid- to late 1990s, WinGate was used in homes and small businesses that needed to share a single Internet connection between multiple networked computers. The introduction of Internet Connection Sharing in Windows 98, combined with increasing availability of cheap NAT-enabled routers, forced WinGate to evolve to provide more than just internet connection sharing features. Today, focus for WinGate is primarily access control, email server, caching, reporting, bandwidth management and content filtering.
WinGate comes in three versions, Standard, Professional and Enterprise. The Enterprise edition also provides an easily configured virtual private network system, which is also available separately as WinGate VPN. Licensing is based on the number of concurrently connected users, and a range of license sizes are available. Multiple licenses can also be aggregated.
The current version of WinGate is version 9.4.5, released in October 2022.
Notoriety
Versions of WinGate prior to 2.1d (1997) shipped with an insecure default configuration that - if not secured by the network administrator - allowed untrusted third parties to proxy network traffic through the WinGate server. This made open WinGate servers common targets of crackers looking for anonymous redirectors through which to attack other systems. While WinGate was by no means the only exploited proxy server, its wide popularity amongst users with little experience administering networks made it almost synonymous with open SOCKS proxies in the late 1990s. Furthermore, since a restricted (two users) version of the product was freely available without registration, contacting all WinGate users to notify of security issues was impossible, and therefore even long after the security problems were resolved there were still many insecure installations in use.
Some versions of the Sobig worm installed an unlicensed copy of WinGate 5 in a deliberately insecure configuration to be used by spammers. These installations used non-standard ports for SOCKS and WinGate remote control and so in general did not interfere with other software running on the infected host computer. This resulted in some antivirus software incorrectly identifying WinGate as malware and removing it.
Version history
See also
Internet security
References
External links
WinGate Proxy Server in Italy
Proxy servers
Computer networking
Virtual private networks
Reverse proxy
1995 software
Companies based in Auckland
New Zealand companies established in 1995
Software companies of New Zealand | WinGate | [
"Technology",
"Engineering"
] | 531 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
313,735 | https://en.wikipedia.org/wiki/Knot%20polynomial | In the mathematical field of knot theory, a knot polynomial is a knot invariant in the form of a polynomial whose coefficients encode some of the properties of a given knot.
History
The first knot polynomial, the Alexander polynomial, was introduced by James Waddell Alexander II in 1923. Other knot polynomials were not found until almost 60 years later.
In the 1960s, John Conway came up with a skein relation for a version of the Alexander polynomial, usually referred to as the Alexander–Conway polynomial. The significance of this skein relation was not realized until the early 1980s, when Vaughan Jones discovered the Jones polynomial. This led to the discovery of more knot polynomials, such as the so-called HOMFLY polynomial.
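As a short worked illustration (not taken from the article's cited sources; standard sign conventions for the Conway polynomial are assumed here), the Alexander–Conway polynomial $\nabla(L)$ is determined by the normalization $\nabla(\mathrm{unknot}) = 1$ together with the skein relation
$\nabla(L_+) - \nabla(L_-) = z\,\nabla(L_0),$
where $L_+$, $L_-$ and $L_0$ differ only at one crossing. Changing a positive crossing of the right-handed trefoil gives the unknot, and smoothing that crossing gives a Hopf link, so
$\nabla(\mathrm{trefoil}) = \nabla(\mathrm{unknot}) + z\,\nabla(\mathrm{Hopf}) = 1 + z \cdot z = z^2 + 1,$
using $\nabla(\mathrm{Hopf}) = z$, which itself follows from one further application of the relation.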
Soon after Jones' discovery, Louis Kauffman noticed the Jones polynomial could be computed by means of a partition function (state-sum model), which involved the bracket polynomial, an invariant of framed knots. This opened up avenues of research linking knot theory and statistical mechanics.
In the late 1980s, two related breakthroughs were made. Edward Witten demonstrated that the Jones polynomial, and similar Jones-type invariants, had an interpretation in Chern–Simons theory. Viktor Vasilyev and Mikhail Goussarov started the theory of finite type invariants of knots. The coefficients of the previously named polynomials are known to be of finite type (after perhaps a suitable "change of variables").
In recent years, the Alexander polynomial has been shown to be related to Floer homology. The graded Euler characteristic of the knot Floer homology of Peter Ozsváth and Zoltán Szabó is the Alexander polynomial.
Examples
Alexander–Briggs notation organizes knots by their crossing number.
Alexander polynomials and Conway polynomials cannot distinguish the left-handed trefoil knot from the right-handed trefoil knot.
The same situation holds for the granny knot and the square knot, since taking the connected sum of knots corresponds to taking the product of their knot polynomials.
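For instance (a standard computation included here for illustration), the trefoil has Alexander polynomial $\Delta(t) = t - 1 + t^{-1}$, and the Alexander polynomial is multiplicative under connected sum, so both the granny knot (the sum of two trefoils of the same handedness) and the square knot (the sum of a left-handed and a right-handed trefoil) have
$\Delta(t) = (t - 1 + t^{-1})^2,$
which is why these polynomials cannot tell the two apart.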
See also
Specific knot polynomials
Alexander polynomial
Bracket polynomial
HOMFLY polynomial
Jones polynomial
Kauffman polynomial
Related topics
Graph polynomial, a similar class of polynomial invariants in graph theory
Tutte polynomial, a special type of graph polynomial related to the Jones polynomial
Skein relation for a formal definition of the Alexander polynomial, with a worked-out example.
Further reading
Knot invariants
Polynomials | Knot polynomial | [
"Mathematics"
] | 477 | [
"Polynomials",
"Algebra"
] |
313,739 | https://en.wikipedia.org/wiki/Current%20solar%20income | The current solar income of the Earth, or any ecoregion of the earth, is the amount of solar energy that falls on it as sunlight. This is thought important in some branches of green economics, as the ultimate measure of renewable energy.
Buckminster Fuller first described the concept in his 1970 paper Cosmic Costing, contrasting the photosynthesis on which natural capital and sustainable infrastructural capital depend, with the chemosynthesis of extracting and using fossil fuels.
Paul Hawken is a more recent advocate of the concept, and views it as central to his notion of a restorative economy. It remains a popular notion among those who believe that toxic waste and maintenance problems of direct solar energy devices can ultimately be overcome, or that yields of passive or biological means of gathering and using this energy as biofuels can be made to approximate those of fossil fuels.
See also
Howard Odum
Vladimir Ivanovich Vernadsky
solar constant
Buckminster Fuller
Renewable energy economics
Systems ecology
Energy economics | Current solar income | [
"Environmental_science"
] | 203 | [
"Environmental social science stubs",
"Energy economics",
"Environmental social science",
"Systems ecology"
] |
313,741 | https://en.wikipedia.org/wiki/3D%20projection | A 3D projection (or graphical projection) is a design technique used to display a three-dimensional (3D) object on a two-dimensional (2D) surface. These projections rely on visual perspective and aspect analysis to project a complex object for viewing capability on a simpler plane.
3D projections use the primary qualities of an object's basic shape to create a map of points, that are then connected to one another to create a visual element. The result is a graphic that contains conceptual properties to interpret the figure or image as not actually flat (2D), but rather, as a solid object (3D) being viewed on a 2D display.
3D objects are largely displayed on two-dimensional mediums (such as paper and computer monitors). As such, graphical projections are a commonly used design element; notably, in engineering drawing, drafting, and computer graphics. Projections can be calculated through employment of mathematical analysis and formulae, or by using various geometric and optical techniques.
Overview
Projection is achieved by the use of imaginary "projectors"; the projected, mental image becomes the technician's vision of the desired, finished picture. Methods provide a uniform imaging procedure among people trained in technical graphics (mechanical drawing, computer aided design, etc.). By following a method, the technician may produce the envisioned picture on a planar surface such as drawing paper.
There are two graphical projection categories, each with its own method:
parallel projection
perspective projection
Parallel projection
In parallel projection, the lines of sight from the object to the projection plane are parallel to each other. Thus, lines that are parallel in three-dimensional space remain parallel in the two-dimensional projected image. Parallel projection also corresponds to a perspective projection with an infinite focal length (the distance between a camera's lens and its focal point), or "zoom".
Images drawn in parallel projection rely upon the technique of axonometry ("to measure along axes"), as described in Pohlke's theorem. In general, the resulting image is oblique (the rays are not perpendicular to the image plane); but in special cases the result is orthographic (the rays are perpendicular to the image plane). Axonometry should not be confused with axonometric projection, as in English literature the latter usually refers only to a specific class of pictorials (see below).
Orthographic projection
The orthographic projection is derived from the principles of descriptive geometry and is a two-dimensional representation of a three-dimensional object. It is a parallel projection (the lines of projection are parallel both in reality and in the projection plane). It is the projection type of choice for working drawings.
If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z axis), the mathematical transformation is as follows.
To project the 3D point $a_x$, $a_y$, $a_z$ onto the 2D point $b_x$, $b_y$ using an orthographic projection parallel to the y axis (where positive y represents forward direction - profile view), the following equations can be used:
$b_x = s_x a_x + c_x$
$b_y = s_z a_z + c_z$
where the vector s is an arbitrary scale factor, and c is an arbitrary offset. These constants are optional, and can be used to properly align the viewport. Using matrix multiplication, the equations become:
$\begin{bmatrix} b_x \\ b_y \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & 0 & s_z \end{bmatrix} \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} + \begin{bmatrix} c_x \\ c_z \end{bmatrix}$
While orthographically projected images represent the three dimensional nature of the object projected, they do not represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of whether they are far away or near to the virtual viewer. As a result, lengths are not foreshortened as they would be in a perspective projection.
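A minimal sketch of the orthographic projection described above, written in Python with NumPy (the scale factors and offsets below are arbitrary illustrative values, not taken from the article):

import numpy as np

def orthographic_project(points, s=(1.0, 1.0), c=(0.0, 0.0)):
    # Project Nx3 points onto the 2D plane, viewing along the y axis:
    # b_x = s_x * a_x + c_x,  b_y = s_z * a_z + c_z
    points = np.asarray(points, dtype=float)
    proj = np.array([[s[0], 0.0, 0.0],
                     [0.0,  0.0, s[1]]])   # the depth coordinate (y) is simply dropped
    return points @ proj.T + np.array(c)

# A cube's corners keep the same x/z extents no matter how far away (y) they are.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
print(orthographic_project(cube))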
Multiview projection
With multiview projections, up to six pictures (called primary views) of an object are produced, with each projection plane parallel to one of the coordinate axes of the object. The views are positioned relative to each other according to either of two schemes: first-angle or third-angle projection. In each, the appearances of views may be thought of as being projected onto planes that form a 6-sided box around the object. Although six different sides can be drawn, usually three views of a drawing give enough information to make a 3D object. These views are known as front view, top view, and end view. The terms elevation, plan and section are also used.
Oblique projection
In oblique projections the parallel projection rays are not perpendicular to the viewing plane as with orthographic projection, but strike the projection plane at an angle other than ninety degrees. In both orthographic and oblique projection, parallel lines in space appear parallel on the projected image. Because of its simplicity, oblique projection is used exclusively for pictorial purposes rather than for formal, working drawings. In an oblique pictorial drawing, the displayed angles among the axes as well as the foreshortening factors (scale) are arbitrary. The distortion created thereby is usually attenuated by aligning one plane of the imaged object to be parallel with the plane of projection thereby creating a true shape, full-size image of the chosen plane. Special types of oblique projections are:
Cavalier projection (45°)
In cavalier projection (sometimes cavalier perspective or high view point) a point of the object is represented by three coordinates, x, y and z. On the drawing, it is represented by only two coordinates, x″ and y″. On the flat drawing, two axes, x and z on the figure, are perpendicular and the length on these axes are drawn with a 1:1 scale; it is thus similar to the dimetric projections, although it is not an axonometric projection, as the third axis, here y, is drawn in diagonal, making an arbitrary angle with the x″ axis, usually 30 or 45°. The length of the third axis is not scaled.
Cabinet projection
The term cabinet projection (sometimes cabinet perspective) stems from its use in illustrations by the furniture industry. Like cavalier perspective, one face of the projected object is parallel to the viewing plane, and the third axis is projected as going off in an angle (typically 30° or 45° or arctan(2) = 63.4°). Unlike cavalier projection, where the third axis keeps its length, with cabinet projection the length of the receding lines is cut in half.
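A small sketch of the cavalier and cabinet projections in Python/NumPy (the 45° receding angle and the one-half foreshortening factor for the cabinet case follow the description above; everything else is an arbitrary illustrative choice):

import numpy as np

def oblique_project(points, angle_deg=45.0, foreshorten=1.0):
    # Cavalier projection: foreshorten = 1.0; cabinet projection: foreshorten = 0.5.
    # x and z map onto the drawing plane at full scale; the receding y axis is
    # drawn at angle_deg with its lengths multiplied by foreshorten.
    a = np.radians(angle_deg)
    proj = np.array([[1.0, foreshorten * np.cos(a), 0.0],
                     [0.0, foreshorten * np.sin(a), 1.0]])
    return np.asarray(points, dtype=float) @ proj.T

unit_axes = np.eye(3)
print(oblique_project(unit_axes))                    # cavalier: receding axis keeps its length
print(oblique_project(unit_axes, foreshorten=0.5))   # cabinet: receding axis drawn at half length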
Military projection
A variant of oblique projection is called military projection. In this case, the horizontal sections are isometrically drawn so that the floor plans are not distorted and the verticals are drawn at an angle. The military projection is given by a rotation in the xy-plane and a vertical translation by an amount z.
Axonometric projection
Axonometric projections show an image of an object as viewed from a skew direction in order to reveal all three directions (axes) of space in one picture. Axonometric projections may be either orthographic or oblique. Axonometric instrument drawings are often used to approximate graphical perspective projections, but there is attendant distortion in the approximation. Because pictorial projections innately contain this distortion, in instrument drawings of pictorials great liberties may then be taken for economy of effort and best effect.
Axonometric projection is further subdivided into three categories: isometric projection, dimetric projection, and trimetric projection, depending on the exact angle at which the view deviates from the orthogonal. A typical characteristic of orthographic pictorials is that one axis of space is usually displayed as vertical.
Isometric projection
In isometric pictorials (for methods, see Isometric projection), the direction of viewing is such that the three axes of space appear equally foreshortened, and there is a common angle of 120° between them. The distortion caused by foreshortening is uniform, therefore the proportionality of all sides and lengths are preserved, and the axes share a common scale. This enables measurements to be read or taken directly from the drawing.
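One common construction of an isometric view, sketched below in Python/NumPy (the particular sign conventions are one possible choice, noted in the comments), is to rotate by 45° about the vertical axis and then by arcsin(tan 30°) ≈ 35.26° about the horizontal axis, so that the viewing direction runs along a cube diagonal, and then project orthographically; the three world axes then project with equal lengths, 120° apart.

import numpy as np

def isometric_project(points):
    # Rotate about the vertical (y) axis, then about the horizontal (x) axis by
    # arcsin(tan 30 deg) ~ 35.26 deg, so the view direction lines up with the
    # (1, 1, 1) cube diagonal; finally drop the depth coordinate.
    b = np.radians(-45.0)                      # sign chosen to view the (+, +, +) octant
    a = np.arcsin(np.tan(np.radians(30.0)))
    rot_y = np.array([[ np.cos(b), 0.0, np.sin(b)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(b), 0.0, np.cos(b)]])
    rot_x = np.array([[1.0, 0.0,        0.0       ],
                      [0.0, np.cos(a), -np.sin(a)],
                      [0.0, np.sin(a),  np.cos(a)]])
    camera = np.asarray(points, dtype=float) @ (rot_x @ rot_y).T
    return camera[:, :2]                       # screen x (right) and y (up)

print(isometric_project(np.eye(3)))            # three equal-length directions, 120 degrees apart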
Dimetric projection
In dimetric pictorials (for methods, see Dimetric projection), the direction of viewing is such that two of the three axes of space appear equally foreshortened, of which the attendant scale and angles of presentation are determined according to the angle of viewing; the scale of the third direction (vertical) is determined separately. Approximations are common in dimetric drawings.
Trimetric projection
In trimetric pictorials (for methods, see Trimetric projection), the direction of viewing is such that all of the three axes of space appear unequally foreshortened. The scale along each of the three axes and the angles among them are determined separately as dictated by the angle of viewing. Approximations in Trimetric drawings are common.
Limitations of parallel projection
Objects drawn with parallel projection do not appear larger or smaller as they extend closer to or away from the viewer. While advantageous for architectural drawings, where measurements must be taken directly from the image, the result is a perceived distortion, since unlike perspective projection, this is not how our eyes or photography normally work. It also can easily result in situations where depth and altitude are difficult to gauge, as is shown in the illustration to the right.
In this isometric drawing, the blue sphere is two units higher than the red one. However, this difference in elevation is not apparent if one covers the right half of the picture, as the boxes (which serve as clues suggesting height) are then obscured.
This visual ambiguity has been exploited in op art, as well as "impossible object" drawings. M. C. Escher's Waterfall (1961), while not strictly utilizing parallel projection, is a well-known example, in which a channel of water seems to travel unaided along a downward path, only to then paradoxically fall once again as it returns to its source. The water thus appears to disobey the law of conservation of energy. An extreme example is depicted in the film Inception, where by a forced perspective trick an immobile stairway changes its connectivity. The video game Fez uses tricks of perspective to determine where a player can and cannot move in a puzzle-like fashion.
Perspective projection
Perspective projection or perspective transformation is a projection where three-dimensional objects are projected on a picture plane. This has the effect that distant objects appear smaller than nearer objects.
It also means that lines which are parallel in nature (that is, meet at the point at infinity) appear to intersect in the projected image. For example, if railways are pictured with perspective projection, they appear to converge towards a single point, called the vanishing point. Photographic lenses and the human eye work in the same way, therefore the perspective projection looks the most realistic. Perspective projection is usually categorized into one-point, two-point and three-point perspective, depending on the orientation of the projection plane towards the axes of the depicted object.
Graphical projection methods rely on the duality between lines and points, whereby two straight lines determine a point while two points determine a straight line. The orthogonal projection of the eye point onto the picture plane is called the principal vanishing point (P.P. in the scheme on the right, from the Italian term punto principale, coined during the renaissance).
Two relevant points of a line are:
its intersection with the picture plane, and
its vanishing point, found at the intersection between the parallel line from the eye point and the picture plane.
The principal vanishing point is the vanishing point of all horizontal lines perpendicular to the picture plane. The vanishing points of all horizontal lines lie on the horizon line. If, as is often the case, the picture plane is vertical, all vertical lines are drawn vertically, and have no finite vanishing point on the picture plane. Various graphical methods can be easily envisaged for projecting geometrical scenes. For example, lines traced from the eye point at 45° to the picture plane intersect the latter along a circle whose radius is the distance of the eye point from the plane, thus tracing that circle aids the construction of all the vanishing points of 45° lines; in particular, the intersection of that circle with the horizon line consists of two distance points. They are useful for drawing chessboard floors which, in turn, serve for locating the base of objects on the scene. In the perspective of a geometric solid on the right, after choosing the principal vanishing point —which determines the horizon line— the 45° vanishing point on the left side of the drawing completes the characterization of the (equally distant) point of view. Two lines are drawn from the orthogonal projection of each vertex, one at 45° and one at 90° to the picture plane. After intersecting the ground line, those lines go toward the distance point (for 45°) or the principal point (for 90°). Their new intersection locates the projection of the map. Natural heights are measured above the ground line and then projected in the same way until they meet the vertical from the map.
While orthographic projection ignores perspective to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism.
Mathematical formula
The perspective projection requires a more involved definition as compared to orthographic projections. A conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. The following variables are defined to describe this transformation:
$\mathbf{a}_{x,y,z}$ – the 3D position of a point A that is to be projected.
$\mathbf{c}_{x,y,z}$ – the 3D position of a point C representing the camera.
$\boldsymbol{\theta}_{x,y,z}$ – the orientation of the camera (represented by Tait–Bryan angles).
$\mathbf{e}_{x,y,z}$ – the display surface's position relative to the camera pinhole C.
Most conventions use positive z values (the plane being in front of the pinhole); however, negative z values are physically more correct, though the image will then be inverted both horizontally and vertically.
This results in:
$\mathbf{b}_{x,y}$ – the 2D projection of $\mathbf{a}$.
When $\mathbf{c}_{x,y,z} = \langle 0,0,0 \rangle$ and $\boldsymbol{\theta}_{x,y,z} = \langle 0,0,0 \rangle$, the 3D vector $\langle 1,2,0 \rangle$ is projected to the 2D vector $\langle 1,2 \rangle$.
Otherwise, to compute $\mathbf{b}_{x,y}$ we first define a vector $\mathbf{d}_{x,y,z}$ as the position of point A with respect to a coordinate system defined by the camera, with origin in C and rotated by $\boldsymbol{\theta}$ with respect to the initial coordinate system. This is achieved by subtracting $\mathbf{c}$ from $\mathbf{a}$ and then applying a rotation by $-\boldsymbol{\theta}$ to the result. This transformation is often called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):
$\begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & \sin\theta_x \\ 0 & -\sin\theta_x & \cos\theta_x \end{bmatrix} \begin{bmatrix} \cos\theta_y & 0 & -\sin\theta_y \\ 0 & 1 & 0 \\ \sin\theta_y & 0 & \cos\theta_y \end{bmatrix} \begin{bmatrix} \cos\theta_z & \sin\theta_z & 0 \\ -\sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} - \begin{bmatrix} c_x \\ c_y \\ c_z \end{bmatrix} \right)$
This representation corresponds to rotating by three Euler angles (more properly, Tait–Bryan angles), using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". If the camera is not rotated ($\boldsymbol{\theta}_{x,y,z} = \langle 0,0,0 \rangle$), then the matrices drop out (as identities), and this reduces to simply a shift: $\mathbf{d} = \mathbf{a} - \mathbf{c}.$
Alternatively, without using matrices (let us replace $a_x - c_x$ with $x$ and so on, and abbreviate $\cos\theta$ to $c$ and $\sin\theta$ to $s$):
$d_x = c_y (s_z y + c_z x) - s_y z$
$d_y = s_x (c_y z + s_y (s_z y + c_z x)) + c_x (c_z y - s_z x)$
$d_z = c_x (c_y z + s_y (s_z y + c_z x)) - s_x (c_z y - s_z x)$
This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; literature also may use x/z):
$b_x = \frac{e_z}{d_z} d_x + e_x$
$b_y = \frac{e_z}{d_z} d_y + e_y$
Or, in matrix form using homogeneous coordinates, the system
$\begin{bmatrix} f_x \\ f_y \\ f_z \\ f_w \end{bmatrix} = \begin{bmatrix} 1 & 0 & \frac{e_x}{e_z} & 0 \\ 0 & 1 & \frac{e_y}{e_z} & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & \frac{1}{e_z} & 0 \end{bmatrix} \begin{bmatrix} d_x \\ d_y \\ d_z \\ 1 \end{bmatrix}$
in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving
$b_x = f_x / f_w, \qquad b_y = f_y / f_w.$
The distance of the viewer from the display surface, $e_z$, directly relates to the field of view, where $\alpha = 2 \cdot \arctan(1/e_z)$ is the viewed angle. (Note: This assumes that you map the points (-1,-1) and (1,1) to the corners of your viewing surface.)
The above equations can also be rewritten as:
In which is the display size, is the recording surface size (CCD or Photographic film), is the distance from the recording surface to the entrance pupil (camera center), and is the distance, from the 3D point being projected, to the entrance pupil.
Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.
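A compact sketch of this perspective pipeline in Python/NumPy (the rotation matrices follow the x-y-z convention written out above; the camera position, angles, and display-surface offset in the example are arbitrary illustrative values):

import numpy as np

def perspective_project(a, c, theta, e):
    # Project 3D world point(s) a to 2D, given camera position c, camera
    # orientation theta = (tx, ty, tz) in radians, and display surface position
    # e = (ex, ey, ez) relative to the camera pinhole.
    a = np.atleast_2d(np.asarray(a, dtype=float))
    tx, ty, tz = theta
    rx = np.array([[1, 0, 0],
                   [0, np.cos(tx), np.sin(tx)],
                   [0, -np.sin(tx), np.cos(tx)]])
    ry = np.array([[np.cos(ty), 0, -np.sin(ty)],
                   [0, 1, 0],
                   [np.sin(ty), 0, np.cos(ty)]])
    rz = np.array([[np.cos(tz), np.sin(tz), 0],
                   [-np.sin(tz), np.cos(tz), 0],
                   [0, 0, 1]])
    d = (a - np.asarray(c, dtype=float)) @ (rx @ ry @ rz).T   # camera transform
    ex, ey, ez = e
    bx = ez / d[:, 2] * d[:, 0] + ex        # perspective divide by the depth d_z
    by = ez / d[:, 2] * d[:, 1] + ey
    return np.column_stack([bx, by])

# A point two units in front of an unrotated camera at the origin, display plane at ez = 1.
print(perspective_project([0.5, 0.25, 2.0], c=[0, 0, 0], theta=[0, 0, 0], e=[0, 0, 1]))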
Weak perspective projection
A "weak" perspective projection uses the same principles of an orthographic projection, but requires the scaling factor to be specified, thus ensuring that closer objects appear bigger in the projection, and vice versa. It can be seen as a hybrid between an orthographic and a perspective projection, and described either as a perspective projection with individual point depths replaced by an average constant depth , or simply as an orthographic projection plus a scaling.
The weak-perspective model thus approximates perspective projection while using a simpler model, similar to the pure (unscaled) orthographic perspective.
It is a reasonable approximation when the depth of the object along the line of sight is small compared to the distance from the camera, and the field of view is small. With these conditions, it can be assumed that all points on a 3D object are at the same distance from the camera without significant errors in the projection (compared to the full perspective model).
Equation
$x = \frac{X}{Z_{\mathrm{ave}}}, \qquad y = \frac{Y}{Z_{\mathrm{ave}}},$
assuming focal length $f = 1$.
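A minimal sketch of weak perspective in Python/NumPy (the average depth is simply taken over the supplied points, and the focal length is 1 as in the equation above):

import numpy as np

def weak_perspective(points):
    # Orthographic projection scaled by one common factor 1 / Z_ave.
    points = np.asarray(points, dtype=float)
    z_ave = points[:, 2].mean()            # single representative depth for the whole object
    return points[:, :2] / z_ave

# A shallow object far from the camera: the per-point depths barely matter.
obj = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 10.2], [1.0, 1.0, 9.8]])
print(weak_perspective(obj))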
Diagram
To determine which screen x-coordinate corresponds to a point at $A_x, A_z$, multiply the point coordinates by:
$B_x = A_x \frac{B_z}{A_z}$
where
$B_x$ is the screen x coordinate
$A_x$ is the model x coordinate
$B_z$ is the focal length, the axial distance from the camera center to the image plane
$A_z$ is the subject distance.
Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above diagram and equation.
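The same similar-triangles relationship written out for a single point (Python; the focal length and subject distance below are arbitrary illustrative numbers):

def screen_coordinate(model_x, focal_length, subject_distance):
    # B_x = A_x * B_z / A_z : similar triangles through the camera center.
    return model_x * focal_length / subject_distance

print(screen_coordinate(model_x=2.0, focal_length=0.05, subject_distance=4.0))  # 0.025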
Alternatively, clipping techniques can be used, replacing the variables with values of the point that lies outside the FOV angle and the point inside the camera matrix.
This technique, also known as "inverse camera", is a perspective projection calculation with known values, used to calculate the last point on the visible angle, projecting from the invisible point, after all needed transformations are finished.
See also
3D computer graphics
Camera matrix
Computer graphics
Cross section (geometry)
Cross-sectional view
Curvilinear perspective
Cutaway drawing
Descriptive geometry
Engineering drawing
Exploded-view drawing
Homogeneous coordinates
Homography
Map projection (including Cylindrical projection)
Multiview projection
Perspective (graphical)
Plan (drawing)
Technical drawing
Tesseract
Texture mapping
Transform, clipping, and lighting
Video card
Viewing frustum
Virtual globe
References
Further reading
External links
Creating 3D Environments from Digital Photographs
3D computer graphics
3D imaging
Display devices
Euclidean solid geometry
Functions and mappings
Graphical projections
Linear algebra
Projective geometry | 3D projection | [
"Physics",
"Mathematics",
"Engineering"
] | 3,753 | [
"Functions and mappings",
"Graphical projections",
"Mathematical analysis",
"Euclidean solid geometry",
"Mathematical objects",
"Space",
"Mathematical relations",
"Human–machine interaction",
"Display devices",
"Linear algebra",
"Spacetime",
"Algebra"
] |
313,779 | https://en.wikipedia.org/wiki/Quantum%20chaos | Quantum chaos is a branch of physics focused on how chaotic classical dynamical systems can be described in terms of quantum theory. The primary question that quantum chaos seeks to answer is: "What is the relationship between quantum mechanics and classical chaos?" The correspondence principle states that classical mechanics is the classical limit of quantum mechanics, specifically in the limit as the ratio of the Planck constant to the action of the system tends to zero. If this is true, then there must be quantum mechanisms underlying classical chaos (although this may not be a fruitful way of examining classical chaos). If quantum mechanics does not demonstrate an exponential sensitivity to initial conditions, how can exponential sensitivity to initial conditions arise in classical chaos, which must be the correspondence principle limit of quantum mechanics?
In seeking to address the basic question of quantum chaos, several approaches have been employed:
Development of methods for solving quantum problems where the perturbation cannot be considered small in perturbation theory and where quantum numbers are large.
Correlating statistical descriptions of eigenvalues (energy levels) with the classical behavior of the same Hamiltonian (system).
Study of probability distribution of individual eigenstates (see scars and quantum ergodicity).
Semiclassical methods such as periodic-orbit theory connecting the classical trajectories of the dynamical system with quantum features.
Direct application of the correspondence principle.
History
During the first half of the twentieth century, chaotic behavior in mechanics was recognized (as in the three-body problem in celestial mechanics), but not well understood. The foundations of modern quantum mechanics were laid in that period, essentially leaving aside the issue of the quantum-classical correspondence in systems whose classical limit exhibit chaos.
Approaches
Questions related to the correspondence principle arise in many different branches of physics, ranging from nuclear to atomic, molecular and solid-state physics, and even to acoustics, microwaves and optics. However, classical-quantum correspondence in chaos theory is not always possible. Thus, some versions of the classical butterfly effect do not have counterparts in quantum mechanics.
Important observations often associated with classically chaotic quantum systems are spectral level repulsion, dynamical localization in time evolution (e.g. ionization rates of atoms), and enhanced stationary wave intensities in regions of space where classical dynamics exhibits only unstable trajectories (as in scattering). In the semiclassical approach of quantum chaos, phenomena are identified in spectroscopy by analyzing the statistical distribution of spectral lines and by connecting spectral periodicities with classical orbits. Other phenomena show up in the time evolution of a quantum system, or in its response to various types of external forces. In some contexts, such as acoustics or microwaves, wave patterns are directly observable and exhibit irregular amplitude distributions.
Quantum chaos typically deals with systems whose properties need to be calculated using either numerical techniques or approximation schemes (see e.g. Dyson series). Simple and exact solutions are precluded by the fact that the system's constituents either influence each other in a complex way, or depend on temporally varying external forces.
Quantum mechanics in non-perturbative regimes
For conservative systems, the goal of quantum mechanics in non-perturbative regimes is to find
the eigenvalues and eigenvectors of a Hamiltonian of the form
$H = H_s + \varepsilon H_{ns},$
where $H_s$ is separable in some coordinate system, $H_{ns}$ is non-separable in the coordinate system in which $H_s$ is separated, and $\varepsilon$ is a parameter which cannot be considered small. Physicists have historically approached problems of this nature by trying to find the coordinate system in which the non-separable Hamiltonian is smallest and then treating the non-separable Hamiltonian as a perturbation.
Finding constants of motion so that this separation can be performed can be a difficult (sometimes impossible) analytical task. Solving the classical problem can give valuable insight into solving the quantum problem. If there are regular classical solutions of
the same Hamiltonian, then there are (at least) approximate constants of motion, and by solving the classical problem, we gain clues how to find them.
Other approaches have been developed in recent years. One is to express the Hamiltonian in
different coordinate systems in different regions of space, minimizing the non-separable part of the Hamiltonian in each region. Wavefunctions are obtained in these regions, and eigenvalues are obtained by matching boundary conditions.
Another approach is numerical matrix diagonalization. If the Hamiltonian matrix is computed in any complete basis, eigenvalues and eigenvectors are obtained by diagonalizing the matrix. However, all complete basis sets are infinite, and we need to truncate the basis and still obtain accurate results. These techniques boil down to choosing a truncated basis from which accurate wavefunctions can be constructed. The computational time required to diagonalize a matrix scales as $N^3$, where $N$ is the dimension of the matrix, so it is important to choose the smallest basis possible from which the relevant wavefunctions can be constructed. It is also convenient to choose a basis in which the matrix is sparse and/or the matrix elements are given by simple algebraic expressions because computing matrix elements can also be a computational burden.
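As a toy illustration of such a truncated-basis calculation (Python/NumPy; the Hamiltonian, a quartic perturbation of a harmonic oscillator, and the basis sizes are arbitrary choices, and convergence is checked simply by enlarging the basis):

import numpy as np

def perturbed_oscillator_levels(n_basis, lam=0.1, n_levels=4):
    # Diagonalize H = p^2/2 + x^2/2 + lam * x^4 in a truncated
    # harmonic-oscillator number basis (hbar = m = omega = 1).
    n = np.arange(n_basis)
    a = np.diag(np.sqrt(n[1:]), k=1)               # annihilation operator
    x = (a + a.T) / np.sqrt(2.0)                   # position operator
    h = np.diag(n + 0.5) + lam * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(h)[:n_levels]

# Enlarge the truncated basis until the lowest eigenvalues stop moving.
print(perturbed_oscillator_levels(20))
print(perturbed_oscillator_levels(60))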
A given Hamiltonian shares the same constants of motion for both classical and quantum dynamics. Quantum systems can also have additional quantum numbers corresponding to discrete symmetries (such as parity conservation from reflection symmetry). However, if we merely find quantum solutions of a Hamiltonian which is not approachable by perturbation theory, we may learn a great deal about quantum solutions, but we have learned little about quantum chaos. Nevertheless, learning how to solve such quantum problems is an important part of answering the question of quantum chaos.
Correlating statistical descriptions of quantum mechanics with classical behaviour
Statistical measures of quantum chaos were born out of a desire to quantify spectral features of complex systems. Random matrix theory was developed in an attempt to characterize spectra of complex nuclei. The remarkable result is that the statistical properties of many systems with unknown Hamiltonians can be predicted using random matrices of the proper symmetry class. Furthermore, random matrix theory also correctly predicts statistical properties of the eigenvalues of many chaotic systems with known Hamiltonians. This makes it useful as a tool for characterizing spectra which require large numerical efforts to compute.
A number of statistical measures are available for quantifying spectral features in a simple way. It is of great interest whether or not there are universal statistical behaviors of classically chaotic systems. The statistical tests mentioned here are universal, at least to systems with few degrees of freedom (Berry and Tabor have put forward strong arguments for a Poisson distribution in the case of regular motion and Heusler et al. present a semiclassical explanation of the so-called Bohigas–Giannoni–Schmit conjecture which asserts universality of spectral fluctuations in chaotic dynamics). The nearest-neighbor distribution (NND) of energy levels is relatively simple to interpret and it has been widely used to describe quantum chaos.
Qualitative observations of level repulsions can be quantified and related to the classical dynamics using the NND, which is believed to be an important signature of classical dynamics in quantum systems. It is thought that regular classical dynamics is manifested by a Poisson distribution of energy levels:
$P(s) = e^{-s}.$
In addition, systems which display chaotic classical motion are expected to be characterized by the statistics of random matrix eigenvalue ensembles. For systems invariant under time reversal, the energy-level statistics of a number of chaotic systems have been shown to be in good agreement with the predictions of the Gaussian orthogonal ensemble (GOE) of random matrices, and it has been suggested that this phenomenon is generic for all chaotic systems with this symmetry. If the normalized spacing between two energy levels is $s$, the normalized distribution of spacings is well approximated by
$P(s) = \frac{\pi}{2}\, s\, e^{-\pi s^2 / 4}.$
Many Hamiltonian systems which are classically integrable (non-chaotic) have been found to have quantum solutions that yield nearest neighbor distributions which follow the Poisson distributions. Similarly, many systems which exhibit classical chaos have been found with quantum solutions yielding a Wigner-Dyson distribution, thus supporting the ideas above. One notable exception is diamagnetic lithium which, though exhibiting classical chaos, demonstrates Wigner (chaotic) statistics for the even-parity energy levels and nearly Poisson (regular) statistics for the odd-parity energy level distribution.
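A small numerical check of these predictions (Python/NumPy; the matrix size and the crude unfolding, which simply uses the central part of the spectrum where the level density is roughly constant, are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(0)

def goe_spacings(dim=1000):
    # Nearest-neighbour spacings of a GOE random matrix, normalized to unit mean.
    m = rng.normal(size=(dim, dim))
    h = (m + m.T) / 2.0                              # real symmetric (GOE) matrix
    levels = np.sort(np.linalg.eigvalsh(h))
    bulk = levels[3 * dim // 8 : 5 * dim // 8]       # central quarter of the spectrum
    s = np.diff(bulk)
    return s / s.mean()

s = goe_spacings()
print("fraction of spacings below 0.25:", np.mean(s < 0.25))
# Poisson statistics would give about 1 - exp(-0.25) ~ 0.22, while the Wigner
# surmise predicts roughly 0.05: small spacings are suppressed by level repulsion.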
Semiclassical methods
Periodic orbit theory
Periodic-orbit theory gives a recipe for computing spectra from the periodic orbits of a system. In contrast to the Einstein–Brillouin–Keller method of action quantization, which applies only to integrable or near-integrable systems and computes individual eigenvalues from each trajectory, periodic-orbit theory is applicable to both integrable and non-integrable systems and asserts that each periodic orbit produces a sinusoidal fluctuation in the density of states.
The principal result of this development is an expression for the density of states which is the trace of the semiclassical Green's function and is given by the Gutzwiller trace formula:
$g_c(E) = \sum_k T_k \sum_{n=1}^{\infty} \frac{1}{2\sinh\left(\chi_{nk}/2\right)}\, e^{i\left(nS_k/\hbar - \alpha_{nk}\pi/2\right)}.$
Recently there was a generalization of this formula for arbitrary matrix Hamiltonians that involves a Berry phase-like term stemming from spin or other internal degrees of freedom. The index $k$ distinguishes the primitive periodic orbits: the shortest period orbits of a given set of initial conditions. $T_k$ is the period of the primitive periodic orbit and $S_k$ is its classical action. Each primitive orbit retraces itself, leading to a new orbit with action $nS_k$ and a period which is an integral multiple $nT_k$ of the primitive period. Hence, every repetition of a periodic orbit is another periodic orbit. These repetitions are separately classified by the intermediate sum over the indices $n$. $\alpha_{nk}$ is the orbit's Maslov index.
The amplitude factor, $1/\left(2\sinh(\chi_{nk}/2)\right)$, represents the square root of the density of neighboring orbits. Neighboring trajectories of an unstable periodic orbit diverge exponentially in time from the periodic orbit. The quantity $\chi_{nk}$ characterizes the instability of the orbit. A stable orbit moves on a torus in phase space, and neighboring trajectories wind around it. For stable orbits, $\sinh(\chi_{nk}/2)$ becomes $\sin(\chi_{nk}/2)$, where $\chi_{nk}$ is determined by the winding number of the periodic orbit, and $\alpha_{nk} = n\alpha_k$, where $\alpha_k$ is the number of times that neighboring orbits intersect the periodic orbit in one period. This presents a difficulty because $\chi_{nk} \to 0$ at a classical bifurcation. This causes that orbit's contribution to the energy density to diverge. This also occurs in the context of photo-absorption spectrum.
Using the trace formula to compute a spectrum requires summing over all of the periodic orbits of a system. This presents several difficulties for chaotic systems: 1) The number of periodic orbits proliferates exponentially as a function of action. 2) There are an infinite number of periodic orbits, and the convergence properties of periodic-orbit theory are unknown. This difficulty is also present when applying periodic-orbit theory to regular systems. 3) Long-period orbits are difficult to compute because most trajectories are unstable and sensitive to roundoff errors and details of the numerical integration.
Gutzwiller applied the trace formula to approach the anisotropic Kepler problem (a single particle in a potential with an anisotropic mass tensor) semiclassically. He found agreement with quantum computations for low-lying states for small anisotropies by using only a small set of easily computed periodic orbits, but the agreement was poor for large anisotropies.
The figures above use an inverted approach to testing periodic-orbit theory. The trace formula asserts that each periodic orbit contributes a sinusoidal term to the spectrum. Rather than dealing with the computational difficulties surrounding long-period orbits to try to find the density of states (energy levels), one can use standard quantum mechanical perturbation theory to compute eigenvalues (energy levels) and use the Fourier transform to look for the periodic modulations of the spectrum which are the signature of periodic orbits. Interpreting the spectrum then amounts to finding the orbits which correspond to peaks in the Fourier transform.
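A schematic version of that inverted approach (Python/NumPy; the "spectrum" is synthetic, built from two sinusoidal modulations standing in for periodic-orbit contributions with actions 3.0 and 7.5 in units of ħ, so the peaks recovered by the Fourier transform are known in advance):

import numpy as np

energy = np.linspace(0.0, 40.0, 4000)
actions = [3.0, 7.5]                       # "periodic orbit" actions used to build the signal
signal = sum(np.cos(a * energy) for a in actions)

# Fourier transform of the fluctuating part of the spectrum: peaks appear at the
# actions of the contributing orbits.
freqs = np.fft.rfftfreq(energy.size, d=energy[1] - energy[0]) * 2 * np.pi
power = np.abs(np.fft.rfft(signal * np.hanning(energy.size)))
peak_bins = [i for i in range(1, len(power) - 1)
             if power[i] > power[i - 1] and power[i] > power[i + 1]
             and power[i] > 0.5 * power.max()]
print(np.round(freqs[peak_bins], 2))       # peaks near 3.0 and 7.5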
Rough sketch of how to arrive at the Gutzwiller trace formula
1. Start with the semiclassical approximation of the time-dependent Green's function (the Van Vleck propagator); a schematic form of this propagator, and of the trace relation used in step 4, is quoted after the note below.
2. Realize that the description diverges at caustics, and use Maslov's insight: approximately Fourier transforming to momentum space (a stationary phase approximation with $\hbar$ as the small parameter) to avoid such points, and afterwards transforming back to position space, cures the divergence at the cost of an extra phase factor.
3. Transform the Green's function to energy space to obtain the energy-dependent Green's function (again an approximate Fourier transform, using the stationary phase approximation). New divergences may appear; they are cured with the same Maslov procedure as in the previous step.
4. Take the trace over positions, $g(E) = \int dx\, G(x, x; E)$, and calculate it again in the stationary phase approximation to obtain an approximation for the density of states $\rho(E)$.
Note: taking the trace tells you that only closed orbits contribute, and the stationary phase approximation gives restrictive conditions each time it is made. In step 4 it restricts you to orbits where the initial and final momenta are the same, i.e. periodic orbits. Often it is convenient to choose a coordinate system parallel to the direction of movement, as is done in many books.
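For reference, step 1 starts from the Van Vleck (Van Vleck–Gutzwiller) propagator, and step 4 uses the trace relation between the retarded Green's function and the density of states; both are standard results and are quoted here only as a sketch, with $\nu$ the number of caustics (Morse index) along each classical path and $S(x, x', t)$ Hamilton's principal function:

$$
K_{\mathrm{sc}}(x, x', t) \;=\; \sum_{\text{cl. paths}}
\left(\frac{1}{2\pi i\hbar}\right)^{d/2}
\left|\det\frac{\partial^2 S}{\partial x\,\partial x'}\right|^{1/2}
e^{\, i S(x, x', t)/\hbar \;-\; i\nu\pi/2},
\qquad
\rho(E) \;=\; -\frac{1}{\pi}\,\operatorname{Im}\int dx\; G^{+}(x, x; E).
$$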
Closed-orbit theory
Closed-orbit theory was developed by J. B. Delos, M. L. Du, J. Gao, and J. Shaw. It is similar to periodic-orbit theory, except that closed-orbit theory is applicable only to atomic and molecular spectra and yields the oscillator-strength density (the observable photo-absorption spectrum) from a specified initial state, whereas periodic-orbit theory yields the density of states.
Only orbits that begin and end at the nucleus are important in closed-orbit theory. Physically, these are associated with the outgoing waves that are generated when a tightly bound electron is excited to a high-lying state. For Rydberg atoms and molecules, every orbit which is closed at the nucleus is also a periodic orbit whose period is equal to either the closure time or twice the closure time.
According to closed-orbit theory, the average oscillator-strength density at constant scaled energy $\epsilon$ is given by a smooth background plus an oscillatory sum of the form
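A representative form of this oscillatory sum, with the notation $D_{nk}$, $\phi_{nk}$, $\tilde S_k$, and a scaled field variable $w$ adopted here for concreteness (details of the conventions vary between treatments), is

$$
f(w) \;=\; f_0(w) \;+\; \sum_{k}\sum_{n=1}^{\infty} D_{nk}\,
\sin\!\left(2\pi\, n\, w\, \tilde S_k \;-\; \phi_{nk}\right),
$$

where $f_0(w)$ is the smooth background, $k$ labels the closed orbits, $n$ their repetitions, and $\tilde S_k$ the scaled action of orbit $k$.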
Here $\phi_{nk}$ is a phase that depends on the Maslov index and other details of the orbits. $D_{nk}$ is the recurrence amplitude of a closed orbit for a given initial state. It contains information about the stability of the orbit, its initial and final directions, and the matrix element of the dipole operator between the initial state and a zero-energy Coulomb wave. For scaling systems such as Rydberg atoms in strong fields, the Fourier transform of an oscillator-strength spectrum computed at fixed $\epsilon$ as a function of $w$ is called a recurrence spectrum, because it gives peaks which correspond to the scaled actions of closed orbits and whose heights correspond to $D_{nk}$.
Closed-orbit theory has found broad agreement with a number of chaotic systems, including diamagnetic hydrogen, hydrogen in parallel electric and magnetic fields, diamagnetic lithium, lithium in an electric field, the ion in crossed and parallel electric and magnetic fields, barium in an electric field, and helium in an electric field.
One-dimensional systems and potential
For the case of a one-dimensional system (with the appropriate boundary condition), the density of states obtained from the Gutzwiller formula is related to the inverse of the classical potential: the half-derivative of the inverse of the potential is related to the density of states, as in the Wu–Sprung potential. Here $\rho(E)$ denotes the density of states and $V(x)$ the classical potential of the particle.
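For context, the underlying semiclassical relation is the WKB counting function of a one-dimensional potential well, quoted here as a sketch (the Maslov correction of $1/2$ and the exact form of the inversion depend on the boundary conditions):

$$
N(E) \;\approx\; \frac{1}{\pi\hbar}\int_{x_-(E)}^{x_+(E)}
\sqrt{2m\left(E - V(x)\right)}\; dx,
\qquad
\rho(E) \;=\; \frac{dN}{dE},
$$

where $x_\pm(E)$ are the classical turning points. Inverting this Abel-type integral for a monotonic $V^{-1}$ is what yields the half-derivative relation between $V^{-1}$ and the density of states exploited in the Wu–Sprung construction.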
Recent directions
An open question is the understanding of quantum chaos in systems that have finite-dimensional local Hilbert spaces, for which the standard semiclassical limits do not apply. Recent works have made it possible to study such quantum many-body systems analytically.
The traditional topics in quantum chaos concern spectral statistics (universal and non-universal features) and the study of the eigenfunctions of various chaotic Hamiltonians. For example, before the existence of scars was reported, the eigenstates of a classically chaotic system were conjectured to fill the available phase space evenly, up to random fluctuations and energy conservation (quantum ergodicity). However, a quantum eigenstate of a classically chaotic system can be scarred: the probability density of the eigenstate is enhanced in the neighborhood of a periodic orbit, above the classical, statistically expected density along the orbit (scars). In particular, scars are both a striking visual example of classical–quantum correspondence away from the usual classical limit and a useful example of the quantum suppression of chaos. For example, this is evident in perturbation-induced quantum scarring: in quantum dots perturbed by local potential bumps (impurities), some of the eigenstates are strongly scarred along the periodic orbits of the unperturbed classical counterpart.
Further studies concern the parametric dependence of the Hamiltonian, as reflected in, e.g., the statistics of avoided crossings, and the associated mixing as reflected in the (parametric) local density of states (LDOS). There is a vast literature on wave-packet dynamics, including the study of fluctuations, recurrences, and quantum irreversibility issues. A special place is reserved for the study of the dynamics of quantized maps: the standard map and the kicked rotator are considered to be prototype problems; a schematic one-period evolution of the quantum kicked rotator is sketched below.
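As a concrete illustration of a quantized map, the sketch below evolves the quantum kicked rotator for one Floquet period at a time using the split-operator method. It assumes $\hbar = 1$, a $2\pi$-periodic angle variable, and an illustrative kick strength; the parameter values and the observable printed at the end are chosen only for demonstration.

```python
import numpy as np

# Minimal sketch of the quantum kicked rotator, assuming hbar = 1 and a
# 2*pi-periodic angle variable theta on N grid points. One Floquet period
# consists of an instantaneous kick V = K*cos(theta) followed by free rotation.
N = 256                      # basis size (angular-momentum states)
K = 5.0                      # kick strength (illustrative value)
T = 1.0                      # kicking period

theta = 2 * np.pi * np.arange(N) / N
m = np.fft.fftfreq(N, d=1.0 / N)              # integer angular-momentum quantum numbers

kick_phase = np.exp(-1j * K * np.cos(theta))  # exp(-i V) in the angle representation
free_phase = np.exp(-1j * T * m**2 / 2.0)     # exp(-i p^2 T / 2) in the momentum rep.

def one_period(psi_theta):
    """Apply one Floquet period to the wavefunction given on the angle grid."""
    psi_theta = kick_phase * psi_theta        # kick in the angle representation
    psi_m = np.fft.fft(psi_theta)             # go to the momentum representation
    psi_m *= free_phase                       # free rotation between kicks
    return np.fft.ifft(psi_m)                 # back to the angle representation

# Start in a uniform (m = 0) state and watch the kinetic energy grow and then
# saturate (dynamical localization), in contrast to classical diffusive growth.
psi = np.ones(N, dtype=complex) / np.sqrt(N)
for step in range(200):
    psi = one_period(psi)

prob = np.abs(np.fft.fft(psi))**2
energy = np.sum(prob * m**2 / 2.0) / np.sum(prob)
print(f"<p^2/2> after 200 kicks: {energy:.3f}")
```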
Work also focuses on the study of driven chaotic systems, where the Hamiltonian is time dependent, in particular in the adiabatic and in the linear-response regimes. There is also a significant effort focused on formulating ideas of quantum chaos for strongly interacting many-body quantum systems far from semiclassical regimes, as well as a large effort in quantum chaotic scattering.
Berry–Tabor conjecture
In 1977, Berry and Tabor made a still open "generic" mathematical conjecture which, stated roughly, is: In the "generic" case for the quantum dynamics of a geodesic flow on a compact Riemann surface, the quantum energy eigenvalues behave like a sequence of independent random variables provided that the underlying classical dynamics is completely integrable.
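A simple numerical illustration of the statistical statement behind the conjecture is the nearest-neighbour level-spacing distribution of an integrable system, which is expected to be close to the Poisson law $P(s) = e^{-s}$. The sketch below uses a rectangular billiard with an incommensurate aspect ratio; the side lengths, the number of levels, and the crude Weyl-law unfolding are illustrative assumptions, and this is only a numerical illustration, not a statement about the conjecture itself.

```python
import numpy as np

# Numerical illustration of Poisson-like level statistics for an integrable
# system: a particle in a 2D rectangular box with incommensurate side ratio.
# Eigenvalues (hbar = 2m = 1): E_{jk} = (j*pi/a)^2 + (k*pi/b)^2.
a, b = 1.0, np.pi / 3.0          # incommensurate aspect ratio (illustrative choice)
nmax = 120

j = np.arange(1, nmax + 1)
E = ((j[:, None] * np.pi / a) ** 2 + (j[None, :] * np.pi / b) ** 2).ravel()
E = np.sort(E)[:4000]            # keep the lowest levels (all complete for this nmax)

# "Unfold" the spectrum so that the mean spacing is about 1; for a 2D billiard
# the smooth counting function is approximately the Weyl law N(E) ~ Area*E/(4*pi).
area = a * b
unfolded = area * E / (4 * np.pi)
s = np.diff(unfolded)

# Compare the nearest-neighbour spacing histogram with the Poisson law P(s)=exp(-s)
# expected generically for integrable dynamics in the Berry-Tabor picture.
hist, edges = np.histogram(s, bins=30, range=(0, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers[:8], hist[:8]):
    print(f"s = {c:.2f}: observed {h:.2f}, Poisson {np.exp(-c):.2f}")
```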
See also
Scar (physics)
Statistical mechanics
References
Further resources
External links
Quantum Chaos by Martin Gutzwiller (1992 and 2008, Scientific American)
Quantum Chaos Martin Gutzwiller Scholarpedia 2(12):3146. doi:10.4249/scholarpedia.3146
Category:Quantum Chaos Scholarpedia
What is... Quantum Chaos by Ze'ev Rudnick (January 2008, Notices of the American Mathematical Society)
Brian Hayes, "The Spectrum of Riemannium"; American Scientist Volume 91, Number 4, July–August, 2003 pp. 296–300. Discusses relation to the Riemann zeta function.
Eigenfunctions in chaotic quantum systems by Arnd Bäcker.
ChaosBook.org
Chaos theory
Quantum mechanics
Quantum chaos theory
"Physics"
] | 3,845 | [
"Theoretical physics",
"Quantum mechanics"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.