id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64)
|---|---|---|---|---|---|
42,021,809 | https://en.wikipedia.org/wiki/Samsung%20Gear%202 | The Samsung Gear 2 and Samsung Gear 2 Neo are smartwatches produced by Samsung Electronics. Unveiled on February 22, 2014 at Mobile World Congress, the Gear 2 line is a successor to the Samsung Galaxy Gear.
In comparison to the Galaxy Gear, the most significant change made to the Gear 2 line was the replacement of Android with the Samsung-developed Tizen operating system, promising improved functionality (such as Samsung's S Health software and an integrated music player) and battery life. The design of the device itself was also refreshed with the move of its camera from the watchband to the watch itself (allowing users to replace their own bands), along with the addition of an infrared blaster and optical heart rate sensor.
Its successor, the Samsung Gear S, was released on November 7, 2014.
Specifications
Hardware
The Gear 2 retains a similar hardware design to the original Galaxy Gear, although a Home button has been added below the screen, and the device's 2-megapixel camera was moved from the strap to the top of the watch itself, alongside a newly added infrared blaster. This particular design change allows the strap to be user-replaceable. Two models of the Gear 2 were released, the Gear 2 and Gear 2 Neo; the Gear 2 has a steel exterior and includes a camera, while the Gear 2 Neo is made from plastic and excludes the camera. They are otherwise identical.
The device's processor was upgraded to a 1.0 GHz dual-core Exynos 3250 system-on-chip. As with the Galaxy Gear, the Gear 2 has a 1.63-inch, 320 pixel-wide square-shaped Super AMOLED touchscreen, 512 MB of RAM, and 4 GB of internal storage. An optical heart rate monitor is located on the bottom of the device. Despite having a smaller, 300 mAh battery, the Gear 2 has increased battery life over its predecessor, with Samsung rating it for 2–3 days of normal use. As with the previous model, the device itself does not contain a charging port and must be placed inside a special Micro USB-equipped charging case.
Software
Unlike the original Galaxy Gear, which ran Android, the Gear 2 runs Tizen, a Linux-based operating system co-developed by Samsung. The Gear 2 uses a similar user interface to the Galaxy Gear, allowing users to synchronize notifications from a host device and display them on the Gear's screen when received, use Smart Relay to automatically open the relevant app for a notification on their smartphone or tablet, use S Voice for dictation and voice commands, place and answer phone calls, and locate the host phone or tablet (or vice versa) with the "Find My Device" tool. Notable new apps added on the Gear 2 include the fitness app S Health, WatchOn (a remote control app which integrates with the infrared blaster), and an integrated music player which can store songs on the device's internal storage. As with the Galaxy Gear, an updated Gear Manager app is installed on the paired Galaxy device to coordinate communications, customize the watch and download apps from Samsung Apps. The Gear 2's Gear Manager adds the ability to customize which apps the watch displays notifications from, upload a custom wallpaper, and perform backups.
Reception
The Gear 2 and Gear 2 Neo received mixed reviews. TechRadar considered the device to be a "much, much better attempt at making the smartwatch more relevant", with particular praise towards the streamlined design and ability to use custom watch straps, and its improved battery life, but panned the high cost of the device and its "convoluted" user interface.
References
External links
Samsung wearable devices
Products introduced in 2014
Smartwatches
Tizen-based devices | Samsung Gear 2 | Technology | 766 |
35,477,085 | https://en.wikipedia.org/wiki/Belinski%E2%80%93Zakharov%20transform | The Belinski–Zakharov (inverse) transform is a nonlinear transformation that generates new exact solutions of Einstein's vacuum field equations. It was developed by Vladimir Belinski and Vladimir Zakharov in 1978. The Belinski–Zakharov transform is a generalization of the inverse scattering transform. The solutions produced by this transform are called gravitational solitons (gravisolitons). Despite the term 'soliton' being used to describe gravitational solitons, their behavior is very different from that of other (classical) solitons. In particular, gravitational solitons do not preserve their amplitude and shape in time, and as of June 2012 their general interpretation remained unknown. What is known, however, is that most black holes (and particularly the Schwarzschild metric and the Kerr metric) are special cases of gravitational solitons.
Introduction
The Belinski–Zakharov transform works for spacetime intervals of the form

$$ds^2 = f\left(-(dx^0)^2 + (dx^1)^2\right) + g_{ab}\,dx^a\,dx^b$$

where we use the Einstein summation convention for $a, b = 2, 3$. It is assumed that both the function $f$ and the matrix $g = (g_{ab})$ depend on the coordinates $x^0$ and $x^1$ only. Despite being a specific form of the spacetime interval that depends only on two variables, it includes a great number of interesting solutions as special cases, such as the Schwarzschild metric, the Kerr metric, the Einstein–Rosen metric, and many others.
In this case, Einstein's vacuum equation $R_{\mu\nu} = 0$ decomposes into two sets of equations for the matrix $g$ and the function $f$. Using light-cone coordinates $\zeta = x^0 + x^1$, $\eta = x^0 - x^1$, the first equation for the matrix $g$ is

$$\left(\alpha\, g_{,\zeta}\, g^{-1}\right)_{,\eta} + \left(\alpha\, g_{,\eta}\, g^{-1}\right)_{,\zeta} = 0$$

where $\alpha$ is the square root of the determinant of $g$, namely

$$\det g = \alpha^2.$$
The second set of equations determines $f$ in terms of $g$:

$$(\ln f)_{,\zeta} = \frac{\alpha_{,\zeta\zeta}}{\alpha_{,\zeta}} + \frac{\alpha}{4\,\alpha_{,\zeta}}\operatorname{tr}\!\left(g_{,\zeta}\, g^{-1}\, g_{,\zeta}\, g^{-1}\right), \qquad
(\ln f)_{,\eta} = \frac{\alpha_{,\eta\eta}}{\alpha_{,\eta}} + \frac{\alpha}{4\,\alpha_{,\eta}}\operatorname{tr}\!\left(g_{,\eta}\, g^{-1}\, g_{,\eta}\, g^{-1}\right).$$
Taking the trace of the matrix equation for $g$ reveals that $\alpha$ in fact satisfies the wave equation

$$\alpha_{,\zeta\eta} = 0.$$
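The trace computation behind this statement is short; a sketch, using the standard identity $\operatorname{tr}(g_{,\mu}\, g^{-1}) = (\ln\det g)_{,\mu}$:

```latex
% Trace of each term of the matrix equation:
\operatorname{tr}\left(\alpha\, g_{,\zeta}\, g^{-1}\right)
  = \alpha\,\left(\ln \alpha^{2}\right)_{,\zeta} = 2\,\alpha_{,\zeta},
\qquad
\operatorname{tr}\left(\alpha\, g_{,\eta}\, g^{-1}\right) = 2\,\alpha_{,\eta}.
% Since the trace commutes with differentiation, tracing
% (\alpha g_{,\zeta} g^{-1})_{,\eta} + (\alpha g_{,\eta} g^{-1})_{,\zeta} = 0 gives
0 = \left(2\,\alpha_{,\zeta}\right)_{,\eta} + \left(2\,\alpha_{,\eta}\right)_{,\zeta}
  = 4\,\alpha_{,\zeta\eta}.
```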
Lax pair
Consider the linear operators $D_1$ and $D_2$ defined by

$$D_1 = \partial_\zeta - \frac{2\,\alpha_{,\zeta}\,\lambda}{\lambda - \alpha}\,\partial_\lambda, \qquad
D_2 = \partial_\eta + \frac{2\,\alpha_{,\eta}\,\lambda}{\lambda + \alpha}\,\partial_\lambda,$$

where $\lambda$ is an auxiliary complex spectral parameter. A simple computation shows that, since $\alpha$ satisfies the wave equation, $[D_1, D_2] = 0$. This commuting pair of operators is the Lax pair.
The gist behind the inverse scattering transform is rewriting the nonlinear Einstein equation as an overdetermined linear system of equations for a new matrix function $\psi = \psi(\lambda, \zeta, \eta)$. Consider the Belinski–Zakharov equations:

$$D_1 \psi = \frac{A}{\lambda - \alpha}\,\psi, \qquad D_2 \psi = \frac{B}{\lambda + \alpha}\,\psi,$$

where $A = -\alpha\, g_{,\zeta}\, g^{-1}$ and $B = \alpha\, g_{,\eta}\, g^{-1}$.
By operating on the first equation with $D_2$ and on the second equation with $D_1$, and subtracting the results, the left-hand side vanishes as a result of the commutativity of $D_1$ and $D_2$. As for the right-hand side, a short computation shows that it indeed vanishes as well precisely when $g$ satisfies the nonlinear matrix Einstein equation.
This means that the overdetermined linear Belinski–Zakharov equations are simultaneously solvable exactly when $g$ solves the nonlinear matrix equation. One can easily restore $g$ from the matrix-valued function $\psi$ by a simple limiting process. Taking the limit $\lambda \to 0$ in the Belinski–Zakharov equations and multiplying by $\psi^{-1}$ from the right gives

$$\psi_{,\zeta}\,\psi^{-1} = g_{,\zeta}\, g^{-1}, \qquad \psi_{,\eta}\,\psi^{-1} = g_{,\eta}\, g^{-1}.$$

Thus a solution $g$ of the nonlinear equation is obtained from a solution $\psi$ of the linear Belinski–Zakharov equations by the simple evaluation

$$g(\zeta, \eta) = \psi(\lambda = 0, \zeta, \eta).$$
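As a consistency check of the scheme, here is a standard diagonal (Kasner-like) seed solution worked in these variables; this is a textbook example supplied for illustration, not one stated in the text above:

```latex
% Diagonal seed: g_0 = diag(t^{1+d}, t^{1-d}), with t = (\zeta + \eta)/2.
\det g_0 = t^{2}
  \;\Rightarrow\; \alpha = t = \tfrac{1}{2}(\zeta + \eta),
% alpha is linear in the light-cone coordinates, so the wave equation holds:
\alpha_{,\zeta\eta} = 0.
% Since g_0 depends on t only, g_{0,\zeta} = \tfrac{1}{2}\, g_{0,t}, and
\alpha\, g_{0,\zeta}\, g_0^{-1}
  = \tfrac{t}{2}\,\operatorname{diag}\!\left(\tfrac{1+d}{t},\, \tfrac{1-d}{t}\right)
  = \tfrac{1}{2}\operatorname{diag}(1+d,\, 1-d),
% a constant matrix, so both terms of the matrix equation vanish identically.
```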
References
Exact solutions in general relativity | Belinski–Zakharov transform | Mathematics | 608 |
24,450,278 | https://en.wikipedia.org/wiki/Gerhart%20Jander | Gerhart Jander (26 October 1892 – 8 December 1961) was a German inorganic chemist. His book on analytical chemistry, now normally referred to simply as "Jander-Blasius", is still used in German universities. His involvement in chemical weapons research and his close relations to the NSDAP have been uncovered by recent research.
Life and work
Jander was born in Altdöbern, Oberspreewald-Lausitz. He studied at the Technical University of Munich and at the University of Berlin, where he received his Ph.D. in 1917 for work with Arthur Rosenheim. He then joined Richard Zsigmondy at the University of Göttingen and became professor in 1925. After a two-year period as temporary director of the Kaiser Wilhelm Institute for Physical Chemistry and Electrochemistry from 1933 to 1935, he became professor of inorganic chemistry at the University of Greifswald. In 1951 he moved to the Technische Universität Berlin. Jander died in Berlin in 1961.
Critical reviews
Jander's involvement in chemical warfare research, and his influence on the Kaiser Wilhelm Institute for Physical Chemistry and Electrochemistry after he succeeded Fritz Haber as director (Haber was forced to resign due to the Law for the Restoration of the Professional Civil Service), have been a subject of research by the Max Planck Society, the successor organisation of the Kaiser Wilhelm Society.
References
1892 births
1961 deaths
People from Altdöbern
Scientists from the Province of Brandenburg
20th-century German chemists
Inorganic chemists
Technical University of Munich alumni
Humboldt University of Berlin alumni
Academic staff of the University of Göttingen
Academic staff of the University of Greifswald
Academic staff of Technische Universität Berlin
Max Planck Institute directors | Gerhart Jander | Chemistry | 360 |
21,630,086 | https://en.wikipedia.org/wiki/Integrated%20High%20Performance%20Turbine%20Engine%20Technology | The Integrated High Performance Turbine Engine Technology program was a project of the United States military, DARPA, and NASA. Its objective was to conduct science and technology research that would secure advancements in the engineering of the gas turbine engines used in military aircraft. It ran from 1987 until 2005.
IHPTET designated goals in each of three engine classes: turbofan/turbojet, turboprop/turboshaft, and expendable engines. For the turbofan class the primary goal was to double the engine thrust-to-weight ratio.
The program made many significant developments which have been employed in aircraft such as the F-35 / Joint Strike Fighter. It was widely regarded as successful although it did not fully achieve its explicit goals. It was succeeded by the Versatile Affordable Advanced Turbine Engines (VAATE) program.
See also
Advanced Affordable Turbine Engine (AATE)
Adaptive Versatile Engine Technology (ADVENT)
References
External links
Further reading
Aircraft engines
Jet engines | Integrated High Performance Turbine Engine Technology | Technology | 191 |
12,194,878 | https://en.wikipedia.org/wiki/C3H6O3 | The molecular formula C3H6O3 may refer to:
Dihydroxyacetone
Dimethyl carbonate
Glyceraldehyde
3-Hydroxypropionic acid
Lactic acid
Trioxanes
1,2,4-Trioxane
1,3,5-Trioxane | C3H6O3 | Chemistry | 77 |
34,794,178 | https://en.wikipedia.org/wiki/Derek%20Corneil | Derek Gordon Corneil is a Canadian mathematician and computer scientist, a professor emeritus of computer science at the University of Toronto, and an expert in graph algorithms and graph theory.
Life
When he was leaving high school, Corneil was told by his English teacher that doing a degree in mathematics and physics was a bad idea, and that the best he could hope for was to go to a technical college. His interest in computer science began when, as an undergraduate student at Queen's University, he heard that a computer had been purchased by the London Life insurance company in London, Ontario, where his father worked. As a freshman, he took a summer job operating the UNIVAC Mark II at the company; one of his main responsibilities was to operate a printer. An opportunity for a programming job with the company sponsoring his college scholarship appeared soon after, a chance that Corneil jumped at after being denied a similar position at London Life. There was an initial mix-up at his job, as his supervisor thought that he knew how to program the UNIVAC Mark II and would therefore easily transition to doing the same for the company's newly acquired IBM 1401 machine. However, Corneil did not have the assumed programming background. Thus, in the two-week window he had been given to learn to program the IBM 1401, he taught himself to write code from scratch, relying heavily on the instruction manual. This experience pushed him further on his way, as did a number of projects he worked on in that position later on.
Corneil went on to earn a bachelor's degree in mathematics and physics from Queen's University in 1964. Initially he had planned to do his graduate studies before becoming a high school teacher, but his acceptance into the brand new graduate program in computer science at the University of Toronto changed that. At the University of Toronto, Corneil earned a master's degree and then in 1968 a doctorate in computer science under the supervision of Calvin Gotlieb. (His post-doctoral supervisor was Jaap Seidel.) It was during this time that Corneil became interested in graph theory. He and Gotlieb eventually became good friends. After postdoctoral studies at the Eindhoven University of Technology, Corneil returned to Toronto as a faculty member in 1970. Before his retirement in 2010, Corneil held many positions at the University of Toronto, including Department Chair of the Computer Science department (July 1985 to June 1990), Director of Research Initiatives of the Faculty of Arts and Science (July 1991 to March 1998), and Acting Vice President of Research and International Relations (September to December 1993). During his time as a professor, he was also a visiting professor at universities such as the University of British Columbia, Simon Fraser University, the Université de Grenoble and the Université de Montpellier.
Work
Corneil did his research in algorithmic graph theory and graph theory in general. He has overseen 49 theses and published over 100 papers on his own or with co-authors. These papers include:
A proof that recognizing graphs of small treewidth is NP-complete.
The discovery of the cotree representation for cographs and of fast recognition algorithms for cographs (a recognition sketch follows this list).
Generating algorithms for graph isomorphism.
Algorithmic and structural properties of complement reducible graphs.
Properties of asteroidal triple-free graphs.
An algorithm to solve the problem of determining whether a graph is a partial graph of a k-tree.
Results addressing graph theoretic, algorithmic, and complexity issues with regard to tree spanners.
An explanation of the relationship between treewidth and clique-width.
Determining the diameter of restricted graph families.
Outlining the structure of trapezoid graphs.
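For the cograph recognition work mentioned in the list above, the following is a minimal sketch of the recognition idea only: the naive recursive test derived from the complement-reducible definition, not Corneil's fast (linear-time) algorithm. Function names and the graph encoding are illustrative.

```python
def components(vertices, adj):
    """Connected components of the subgraph induced by `vertices`."""
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend((adj[u] & vertices) - comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_cograph(vertices, adj):
    """Naive test: a graph is a cograph iff every induced subgraph with
    at least two vertices is disconnected in the graph or its complement."""
    if len(vertices) <= 1:
        return True
    comps = components(vertices, adj)
    if len(comps) == 1:  # connected, so try the complement
        co_adj = {v: (vertices - {v}) - adj[v] for v in vertices}
        comps = components(vertices, co_adj)
        if len(comps) == 1:
            return False  # G and its complement both connected: not a cograph
        adj = co_adj
    return all(is_cograph(c, adj) for c in comps)

# P4, the path on four vertices, is the canonical non-cograph.
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_cograph(set(p4), p4))                            # False
print(is_cograph({0, 1, 2}, {0: {1}, 1: {0}, 2: set()}))  # True (K2 + K1)
```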
As a professor emeritus, Corneil still does research and is also an editor of several publications such as Ars Combinatoria and SIAM Monographs on Discrete Mathematics and Applications.
Awards
He was inducted as a Fields Institute Fellow in 2004.
References
External links
Interview with Corneil, Stephen Ibaraki, 13 June 2011
List of publications at DBLP
1942 births
Living people
Canadian mathematicians
Canadian computer scientists
Graph theorists
Queen's University at Kingston alumni
University of Toronto alumni
Academic staff of the University of Toronto | Derek Corneil | Mathematics | 846 |
4,525,060 | https://en.wikipedia.org/wiki/Porcine%20circovirus | Porcine circovirus (PCV) is a group of four single-stranded DNA viruses that are non-enveloped with an unsegmented circular genome. They are members of the genus Circovirus that can infect pigs. The viral capsid is icosahedral and approximately 17 nm in diameter.
PCVs are the smallest viruses replicating autonomously in eukaryotic cells. They replicate in the nucleus of infected cells, using the host polymerase for genome amplification.
PCV-2 causes Porcine circovirus associated disease or postweaning multisystemic wasting syndrome (PMWS). An effective vaccination is now available. Fort Dodge Animal Health (Wyeth) launched the first USDA approved vaccine in 2006, containing an inactivated virus (ATCvet code: ).
Classification
Three strains of PCV are known as of 2018:
PCV-1 (first identified in 1974) readily infects, but is not known to cause disease in swine.
PCV-2 (first isolated in 1997) causes PMWS, which over time results in significant depletion of lymphocytes; postmortem examination of diseased animals reveals enlarged lymph nodes and abnormal lung tissue. However, viral infection by itself tends to cause only mild disease, and co-factors such as other infections or immunostimulation seem necessary for the development of severe disease. For example, concurrent infection with porcine parvovirus or PRRS virus, or immunostimulation, leads to increased replication of PCV-2 and more severe disease in PCV-2-infected pigs.
PCV-3 (first described in 2015) causes a wide range of problems, and may be widespread among pigs.
PCV-1 and PCV-2 show a high degree of sequence identity and a similar genomic organisation; nevertheless, the basis of the distinct pathogenicity has not yet been unravelled. The organization for PCV-3 is similar, but the sequence identity is much lower.
Genome
PCV's genome is one of the simplest of all viruses, requiring only a capsid protein (ORF2) and two replicase proteins (ORF1) in order to replicate and produce a functional virus. Due to its simplicity, PCV must rely heavily on the host's cellular machinery to replicate. The origin of replication is located on a small octanucleotide stem-loop that is flanked by palindromic repeats, with the ORFs located head-to-head on both sides of the Ori. Specifically, ORF1 is located clockwise and ORF2 counterclockwise of the Ori.
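As a toy illustration of how an inverted repeat (the kind of motif that folds into a stem-loop) can be located in a circular genome, here is a minimal sketch; the stem and loop lengths and the example sequence are hypothetical, not the actual PCV origin.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(COMPLEMENT)[::-1]

def find_stem_loops(genome, stem=6, loop_min=4, loop_max=12):
    """Scan a circular genome for short inverted repeats, i.e. candidate
    stem-loops. Returns (position, stem, loop) tuples. The genome is
    'unrolled' by doubling so motifs crossing the origin are also found."""
    n = len(genome)
    doubled = genome + genome
    hits = []
    for i in range(n):
        left = doubled[i:i + stem]
        for gap in range(loop_min, loop_max + 1):
            j = i + stem + gap
            right = doubled[j:j + stem]
            if len(right) == stem and right == revcomp(left):
                hits.append((i, left, doubled[i + stem:j]))
    return hits

# Hypothetical circular sequence containing one hairpin (GCATGC is its
# own reverse complement, so it pairs with itself across the loop).
toy = "TTGCATGCAAAAAAGCATGCTT"
print(find_stem_loops(toy))
```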
The two replicase enzymes that are created from ORF1, Rep and Rep', are conserved between the two types of PCV and are part of the early phase of the virus. The replicases differ in that Rep is the full ORF1 transcript of 312 amino acids, whereas Rep' is a truncated form of ORF1 produced by splicing and is only 168 amino acids in length. The promoter for rep (Prep) contains an interferon-stimulated response element (ISRE), which suggests that Rep and Rep' are regulated by cytokine involvement and is probably a means for the virus to overcome the host's immune responses to infection. Rep and Rep' form a dimer that binds to two hexameric regions adjacent to the stem-loop, H1 and H2, which is required for replication. When the dimer binds to this region, the replicases cleave the loop region of the stem-loop and remain covalently bound to the H1 and H2 regions of the DNA, which becomes the 5' end of the DNA. The newly formed 3'OH end forms a primer using host RNA polymerase, which is then used by the host's DNA polymerase to begin transcription of the viral DNA via rolling circle replication. After the complementary DNA strand has been created, the stem region of the stem-loop forms a loose, non-hydrogen-bonded quadruplet DNA structure. This loosely associated structure can form short-lived DNA trimers, which provide two templates for replication while maintaining the nucleic integrity of the stem region of the stem-loop. The termination of the replication sequence has not yet been identified, though there is evidence supporting that Rep also represses its own promoter, Prep.
The ORF2 region encodes the capsid protein Cap (also known as CP), which differs slightly between PCV-1 and PCV-2. This variation within PCV may explain why PCV-1 is non-pathogenic while PCV-2 is pathogenic. The promoter for this protein is located within ORF1, at the site where Rep' is truncated; the transcript is spliced from the same exon to the starting point of the ORF2 coding region and is expressed during both the early and late phases. This is the immunogenic region of the virus and is the primary area of research for creating a vaccine against PMWS.
There is a third gene encoded in the opposite orientation to ORF1 in the genome. This gene is transcribed and is an essential gene involved in viral replication.
Size
Porcine circovirus has one of the smallest genomes of any autonomously replicating entity, consisting of a simple loop of DNA.
The DNA sequence for Porcine circovirus type 2 strain MLP-22 is 1726 base pairs long.
Entry
PCV infects a wide variety of cell types, including hepatocytes, cardiomyocytes, and macrophages. However, until recently, it was unknown exactly how attachment and entry into these cells was achieved. Research has shown that PCV utilizes clathrin-mediated endocytosis to enter the cell, though it is speculated that there may be other factors that have not yet been identified. Once endocytosed, endosome and lysosome formation causes an acidic pH shift, which allows ATP-driven uncoating of the virus and lets it escape the endosomes and lysosomes. After escaping, the virus travels to the nucleus through unknown means.
Escape
Besides ORF1 and ORF2, there is also an ORF3, which is not strictly required for PCV to survive within the host. Research has shown that the protein encoded in ORF3 can modulate the host cell's division cycle and cause cell-mediated, virus-induced apoptosis. Yeast two-hybrid screening of ORF3 against a porcine cDNA library indicated that the ORF3 protein interacts with porcine pPirh2, an E3 ubiquitin ligase. This E3 ubiquitin ligase normally interacts with p53 during the cell division cycle and prevents it from halting the cycle at S-phase. However, ORF3 also interacts with pPirh2 at the same region as p53 and causes an upregulation of p53 expression. This increase in p53 stops the cell division cycle, and the result is p53-mediated apoptosis, which releases PCV into the extracellular environment.
Contamination in human vaccine
On March 22, 2010, the U.S. Food and Drug Administration (FDA) recommended suspending the use of Rotarix, one of two vaccines licensed in the United States against rotavirus, due to findings of viral DNA contamination. Follow-up work by GlaxoSmithKline confirmed the contamination in working cells and the viral "seed" used in Rotarix production, also confirming the material was likely present since the early stages of product development, including the clinical trials for FDA approval.
Testing of the other licensed vaccine against rotavirus infection, RotaTeq, also detected some components of both PCV-1 and PCV-2. Porcine circovirus 1 is not known to cause disease in humans or other animals.
As of June 8, 2010, the FDA has, based on a careful review of a variety of scientific information, determined it is appropriate for clinicians and public health professionals in the United States to use both Rotarix and RotaTeq vaccine.
See also
Animal virology
References
External links
The Control of Porcine Circovirus Diseases (PCVDs): Towards Improved Food Quality and Safety
Animal Disease Diagnostic Laboratory
Porcine Circovirus Type 2
The Economics of PMWS - Porc Quebec Magazine Article
Animal viruses
Stopcircovirus.com
Viralzone: Circovirus
Articles by Quim Segalés on PCV2 - pig333.com
Circoviridae
Animal viral diseases
Unaccepted virus taxa
Swine diseases | Porcine circovirus | Biology | 1,814 |
2,437,377 | https://en.wikipedia.org/wiki/Architectural%20technologist | The architectural technologist, also known as a building technologist, provides technical building design services and is trained in architectural technology, building technical design and construction.
Architectural technologists apply the science of architecture and typically concentrate on the technology of building, design technology and construction. The training of an architectural technologist concentrates on the increasingly complex technical aspects of a building project, but matters of aesthetics, space, light and circulation are also involved in the technical design, leading the professional to make decisions which are also non-technical. They may negotiate the construction project and manage the process from conception through to completion, typically focusing on the technical aspects of a building project.
Most architectural technologists are employed in architectural and engineering firms, or with municipal authorities; but many provide independent professional services directly to clients, although restricted by law in some countries. Others work in product development or sales with manufacturers.
In Britain, Ireland, Sweden, Denmark, Hong Kong (Chartered Architectural Technologist), Canada (Architectural Technologist or Registered Building Technologist), Argentina (M.M.O Maestro Mayor de Obras / Chartered Architecture & Building Science Technologist) and other nations, architectural technologists are trained to work alongside architects, engineers and other professionals; their training provides skills in building and architectural technology, an important role in the current building climate. Architectural technologists may be directors or shareholders of an architectural firm (where permitted by the jurisdiction and legal structure). To become an architectural technologist, a four-year degree (or equivalent) in architectural technology (in Canada, normally a three-year diploma) is required, which can be followed by a master's degree, with structured professional and occupational experience.
By country
Canada
Most provinces in Canada have an association representing Architectural Technologists and Technicians.
In the province of Ontario, the Association of Architectural Technologists of Ontario (AATO) was founded in 1969. The Association holds four titles, Architectural Technologist, Registered Building Technologist, Architectural Technician and Registered Building Technician, along with the French equivalent of each title. The Association recognizes students and has an internship process that incorporates both education and work experience for members. Its membership is involved in all aspects of the construction industry and often forms part of the team of professionals on all types of projects.
Republic of Ireland
In the Republic of Ireland, the Royal Institute of the Architects of Ireland (RIAI) describes itself as the leading professional body for architectural technologists in Ireland. The RIAI recognises the professional architectural technologist as a technical designer, skilled in the application and integration of construction technologies in the building design process. RIAI architectural technologists are recognised as professional partners to architects in the delivery of exemplary buildings in the Republic of Ireland and worldwide. However, the RIAI has always prevented its technician members from providing a full architectural service. Many qualified architectural technologists believe that a conflict of interest exists and that the RIAI, which represents architects, cannot adequately defend the interests of architectural technologists: "The RIAI acts as the Registration Body and Competent Authority for 'Architects' in Ireland and only provides support services for Irish ATs".
Another representative body is the Chartered Institute of Architectural Technologists (CIAT). The technologist membership of the RIAI (RIAI Tech) is equivalent to the associate membership of CIAT (ACIAT). Chartered members of CIAT (MCIAT) are qualified and recognised to lead a project from inception through to completion. The RIAI and the CIAT were represented within the Building Regulations Advisory Body (BRAB), which advised the Minister for the Environment on matters relating to the Building Regulations; BRAB is no longer active. CIAT is now challenging the Building Control Regulations 2014, which deprive its members of the ability to provide full architectural services in the Republic of Ireland. The Irish Government appears to have no valid reason to prevent CIAT members from practising in the Republic of Ireland. The restrictions imposed on members of the CIAT are viewed as anti-competitive and in breach of European law on the free movement of services. The CIAT is awaiting an opinion from the European Commission on this issue.
South Africa
In South Africa the profession is represented by the South African Institute of Architectural Technologists (SAIAT). Senior architectural technologists (with 10 years or more in practice) enjoy the same status as architects. The South African Institute of Architects (SAIA) explains that: "Architecture can be practiced in one of four categories of registered person, namely professional architect, professional senior architectural technologist, professional technologist or professional draughtsperson. The possibility of progression from one category to the next has been provided for in the Regulations."
United Kingdom
In the United Kingdom, chartered architectural technologists enjoy the same status as architects, delivering similar services with a different orientation. The Chartered Institute of Architectural Technologists (CIAT) regulates the profession and defines chartered architectural technologists as follows: Chartered Architectural Technologists provide architectural design services and solutions. They are specialists in the science of architecture, building design and construction and form the link between concept and construction. They negotiate the construction project and manage the process from conception through to completion. Chartered Architectural Technologists (MCIAT) may practise on their own account or with fellow chartered architectural technologists, architects, engineers, surveyors and other professionals within the construction industry. As professionals adhering to a code of conduct, they are required to obtain and maintain mandatory professional indemnity insurance (PII) if providing services directly to clients. They specify products with reference to the RIBA Product Selector, Architects Standard Catalogue, Barbour Index and trade literature.
See also
Architect
Architectural drawing
Architectural engineering
Architectural technology
Building engineer
Building engineering
Building services engineering
Project engineering
Construction manager
Construction engineering
Construction engineer
Drafter
Engineering technician
Engineering technologist
References
External links
South African Institute of Architectural Technologist
Architectural Technology Ireland
Architectural Technology Ontario
Architectural and Building Technologists Association of Manitoba
Architectural design
Architecture occupations
Building engineering
Draughtsmen
Technicians | Architectural technologist | Engineering | 1,212 |
41,038 | https://en.wikipedia.org/wiki/Digital%20subscriber%20line | Digital subscriber line (DSL; originally digital subscriber loop) is a family of technologies that are used to transmit digital data over telephone lines. In telecommunications marketing, the term DSL is widely understood to mean asymmetric digital subscriber line (ADSL), the most commonly installed DSL technology, for Internet access.
In ADSL, the data throughput in the upstream direction (the direction to the service provider) is lower, hence the designation of asymmetric service. In symmetric digital subscriber line (SDSL) services, the downstream and upstream data rates are equal.
DSL service can be delivered simultaneously with wired telephone service on the same telephone line since DSL uses higher frequency bands for data transmission. On the customer premises, a DSL filter is installed on each telephone to prevent undesirable interaction between DSL and telephone service.
The bit rate of consumer ADSL services typically ranges from 256 kbit/s up to 25 Mbit/s, while the later VDSL2 technology delivers between 16 Mbit/s and 250 Mbit/s in the direction to the customer (downstream), with up to 40 Mbit/s upstream. The exact performance depends on the technology, line conditions, and service-level implementation. Researchers at Bell Labs have reached symmetric DSL speeds of over 1 Gbit/s using traditional copper telephone lines, though such speeds have not yet been made available to end customers.
History
Initially, it was believed that ordinary phone lines could only be used at modest speeds, usually less than 9600 bits per second. In the 1950s, ordinary twisted-pair telephone cable often carried 4 MHz television signals between studios, suggesting that such lines would allow transmitting many megabits per second. One such circuit in the United Kingdom ran between the BBC studios in Newcastle-upon-Tyne and the Pontop Pike transmitting station. However, these cables had other impairments besides Gaussian noise, preventing such rates from becoming practical in the field. The 1980s saw the development of techniques for broadband communications that allowed the limit to be greatly extended. A patent was filed in 1979 for the use of existing telephone wires for both telephones and data terminals that were connected to a remote computer via a digital data carrier system.
The motivation for digital subscriber line technology was the Integrated Services Digital Network (ISDN) specification proposed in 1984 by the CCITT (now ITU-T) as part of Recommendation I.120, later reused as ISDN digital subscriber line (IDSL). Employees at Bellcore (now Telcordia Technologies) developed asymmetric digital subscriber line (ADSL) by placing wide-band digital signals at frequencies above the existing baseband analog voice signal carried on conventional twisted pair cabling between telephone exchanges and customers. A patent was filed by AT&T Bell Labs on the basic DSL concept in 1988.
Joseph W. Lechleider's contribution to DSL was his insight that an asymmetric arrangement offered more than double the bandwidth capacity of symmetric DSL. This allowed Internet service providers to offer efficient service to consumers, who benefited greatly from the ability to download large amounts of data but rarely needed to upload comparable amounts. ADSL supports two modes of transport: fast channel and interleaved channel. Fast channel is preferred for streaming multimedia, where an occasional dropped bit is acceptable, but lags are less so. Interleaved channel works better for file transfers, where the delivered data must be error-free but latency (time delay) incurred by the retransmission of error-containing packets is acceptable.
Consumer-oriented ADSL was designed to operate on existing lines already conditioned for Basic Rate Interface ISDN services. Engineers developed high speed DSL facilities such as high bit rate digital subscriber line (HDSL) and symmetric digital subscriber line (SDSL) to provision traditional Digital Signal 1 (DS1) services over standard copper pair facilities.
Older ADSL standards delivered 8 Mbit/s to the customer over unshielded twisted-pair copper wire of limited length. Newer variants improved these rates. Greater distances significantly reduce the bandwidth usable on the wires, thus reducing the data rate. ADSL loop extenders increase these distances by repeating the signal, allowing the local exchange carrier (LEC) to deliver DSL speeds at any distance.
Until the late 1990s, the cost of digital signal processors for DSL was prohibitive. All types of DSL employ highly complex digital signal processing algorithms to overcome the inherent limitations of the existing twisted pair wires. Due to the advancements of very-large-scale integration (VLSI) technology, the cost of the equipment associated with a DSL deployment lowered significantly. The two main pieces of equipment are a digital subscriber line access multiplexer (DSLAM) at one end and a DSL modem at the other end.
It is possible to set up a DSL connection over an existing cable. Such deployment, even including equipment, is much cheaper than installing a new, high-bandwidth fiber-optic cable over the same route and distance. This is true both for ADSL and SDSL variations. The commercial success of DSL and similar technologies largely reflects the advances made in electronics over the decades that have increased performance and reduced costs even while digging trenches in the ground for new cables (copper or fiber optic) remains expensive.
These advantages made ADSL a better proposition for customers requiring Internet access than metered dial up, while also allowing voice calls to be received at the same time as a data connection. Telephone companies were also under pressure to move to ADSL owing to competition from cable companies, which use DOCSIS cable modem technology to achieve similar speeds. Demand for high bandwidth applications, such as video and file sharing, also contributed to the popularity of ADSL technology. Some of the first field trials for DSL were carried out in 1996.
Early DSL service required a dedicated dry loop, but when the U.S. Federal Communications Commission (FCC) required incumbent local exchange carriers (ILECs) to lease their lines to competing DSL service providers, shared-line DSL became available. Also known as DSL over unbundled network element, this unbundling of services allows a single subscriber to receive two separate services from two separate providers on one cable pair. The DSL service provider's equipment is co-located in the same telephone exchange as that of the ILEC supplying the customer's pre-existing voice service. The subscriber's circuit is rewired to interface with hardware supplied by the ILEC which combines a DSL frequency and POTS signals on a single copper pair.
Since 1999, certain ISPs have been offering microfilters. These devices are installed indoors and serve the same purpose as DSL splitters, which are deployed outdoors: they divide the frequencies needed for ADSL and POTS phone calls. These filters originated out of a desire to make self-installation of DSL service possible, and eliminate early outdoor DSL splitters which were installed at or near the demarcation point between the customer and the ISP.
By 2012, some carriers in the United States reported that DSL remote terminals with fiber backhaul were replacing older ADSL systems.
Operation
Telephones are connected to the telephone exchange via a local loop, which is a physical pair of wires. The local loop was originally intended mostly for the transmission of speech, encompassing an audio frequency range of 300 to 3400 hertz (commercial bandwidth). However, as long-distance trunks were gradually converted from analog to digital operation, the idea of being able to pass data through the local loop (by using frequencies above the voiceband) took hold, ultimately leading to DSL.
The local loop connecting the telephone exchange to most subscribers has the capability of carrying frequencies well beyond the 3400 Hz upper limit of POTS. Depending on the length and quality of the loop, the upper limit can be tens of megahertz. DSL takes advantage of this unused bandwidth of the local loop by creating 4312.5 Hz wide channels starting between 10 and 100 kHz, depending on how the system is configured. Allocation of channels continues to higher frequencies (up to 1.1 MHz for ADSL) until new channels are deemed unusable. Each channel is evaluated for usability in much the same way an analog modem would on a POTS connection. More usable channels equate to more available bandwidth, which is why distance and line quality are a factor (the higher frequencies used by DSL travel only short distances).
The pool of usable channels is then split into two different frequency bands for upstream and downstream traffic, based on a preconfigured ratio. This segregation reduces interference. Once the channel groups have been established, the individual channels are bonded into a pair of virtual circuits, one in each direction. Like analog modems, DSL transceivers constantly monitor the quality of each channel and will add or remove them from service depending on whether they are usable. Once upstream and downstream circuits are established, a subscriber can connect to a service such as an Internet service provider or other network services, like a corporate MPLS network.
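As a rough sketch of the arithmetic (the tone indices below follow the common ADSL Annex A band plan and are assumptions for illustration; real deployments vary by configuration):

```python
TONE_SPACING_HZ = 4312.5  # DMT subcarrier width used by ADSL

def band_edges(first_tone, last_tone):
    """Frequency span covered by an inclusive range of DMT tones."""
    return first_tone * TONE_SPACING_HZ, (last_tone + 1) * TONE_SPACING_HZ

# Assumed Annex A style split: tones 6-31 upstream, 33-255 downstream;
# tones below ~6 are left free as a guard band above POTS voice.
up_lo, up_hi = band_edges(6, 31)
down_lo, down_hi = band_edges(33, 255)

print(f"upstream:   {up_lo/1e3:7.1f} - {up_hi/1e3:7.1f} kHz")
print(f"downstream: {down_lo/1e3:7.1f} - {down_hi/1e3:7.1f} kHz")
# The downstream edge lands at 256 * 4312.5 Hz = 1104 kHz, i.e. the
# 1.1 MHz ADSL limit mentioned above.
```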
The underlying technology of transport across DSL facilities uses modulation of high-frequency carrier waves, an analog signal transmission. A DSL circuit terminates at each end in a modem which modulates patterns of bits into certain high-frequency impulses for transmission to the opposing modem. Signals received from the far-end modem are demodulated to yield a corresponding bit pattern that the modem passes on, in digital form, to its interfaced equipment, such as a computer, router, switch, etc.
Unlike traditional dial-up modems, which modulate bits into signals in the 300–3400 Hz audio baseband, DSL modems modulate frequencies from 4000 Hz to as high as 4 MHz. This frequency band separation enables DSL service and plain old telephone service (POTS) to coexist on the same cables, known as voice-grade cables. On the subscriber's end of the circuit, inline DSL filters are installed on each telephone to pass voice frequencies but block the high-frequency signals that would otherwise be heard as hiss. Nonlinear elements in the phone could otherwise generate audible intermodulation and impair the operation of the data modem in the absence of these low-pass filters. Because DSL and RADSL modulations do not use the voice-frequency band, high-pass filters are incorporated in the circuitry of DSL modems to filter out voice frequencies.
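For intuition, the corner frequency of even the simplest first-order low-pass stage follows from the standard RC formula; the component values below are illustrative assumptions only, and real DSL microfilters are higher-order LC designs:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner frequency of a first-order RC low-pass filter."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Hypothetical values chosen to place the corner just above the voiceband:
print(f"{rc_cutoff_hz(10e3, 4e-9):.0f} Hz")  # ~3979 Hz: passes POTS, blocks DSL
```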
Because DSL operates above the 3.4 kHz voice limit, it cannot pass through a loading coil, which is an inductive coil that is designed to counteract loss caused by shunt capacitance (capacitance between the two wires of the twisted pair). Loading coils are commonly set at regular intervals in POTS lines. Voice service cannot be maintained past a certain distance without such coils. Therefore, some areas that are within range for DSL service are disqualified from eligibility because of loading coil placement. Because of this, phone companies endeavor to remove loading coils on copper loops that can operate without them. Longer lines that require them can be replaced with fiber to the neighborhood or node (FTTN).
Most residential and small-office DSL implementations reserve low frequencies for POTS, so that (with suitable filters and/or splitters) the existing voice service continues to operate independently of the DSL service. Thus POTS-based communications, including fax machines and dial-up modems, can share the wires with DSL. Only one DSL modem can use the subscriber line at a time. The standard way to let multiple computers share a DSL connection uses a router that establishes a connection between the DSL modem and a local Ethernet, powerline, or Wi-Fi network on the customer's premises.
The theoretical foundations of DSL, like much of communication technology, can be traced back to Claude Shannon's seminal 1948 paper, "A Mathematical Theory of Communication". Generally, higher bit rate transmissions require a wider frequency band, though the ratio of bit rate to symbol rate and thus to bandwidth are not linear due to significant innovations in digital signal processing and digital modulation methods.
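A hedged sketch of how this plays out in DMT bit loading: each tone carries roughly log2(1 + SNR/Γ) bits per symbol, where Γ is an SNR gap accounting for the target error rate and coding. The gap, per-tone cap, tone range, and SNR profile below are illustrative assumptions, and the symbol rate is approximated by the tone spacing (ignoring cyclic-prefix overhead):

```python
import math

TONE_SPACING_HZ = 4312.5

def dmt_rate(snr_db_per_tone, snr_gap_db=9.8, bit_cap=15):
    """Aggregate bit rate from Shannon-style per-tone bit loading."""
    gap = 10 ** (snr_gap_db / 10)
    bits_per_symbol = 0.0
    for snr_db in snr_db_per_tone:
        snr = 10 ** (snr_db / 10)
        bits_per_symbol += min(bit_cap, math.log2(1 + snr / gap))
    # Approximate symbols/s by the tone spacing (real ADSL uses 4000 baud).
    return bits_per_symbol * TONE_SPACING_HZ

# Hypothetical downstream tones 33-255, with SNR falling off at higher
# frequencies as happens on longer loops:
snrs = [55 - 0.15 * tone for tone in range(33, 256)]
print(f"{dmt_rate(snrs) / 1e6:.1f} Mbit/s")
```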
Naked DSL
Naked DSL is a way of providing only DSL services over a local loop. It is useful when the customer does not need traditional telephony voice service, because voice service is received either on top of the DSL service (usually VoIP) or through another network (e.g., mobile telephony). It is also commonly called an unbundled network element (UNE) in the United States; in Australia it is known as an unconditioned local loop (ULL); in Belgium it is known as "raw copper"; and in the UK it is known as Single Order GEA (SoGEA).
It started making a comeback in the United States in 2004 when Qwest started offering it, closely followed by Speakeasy. As a result of AT&T's merger with SBC, and Verizon's merger with MCI, those telephone companies have an obligation to offer naked DSL to consumers.
Typical setup
On the customer side, a DSL modem is hooked up to a phone line. The telephone company connects the other end of the line to a DSLAM, which concentrates a large number of individual DSL connections into a single box. The DSLAM cannot be located too far from the customer because of attenuation between the DSLAM and the user's DSL modem. It is common for a few residential blocks to be connected to one DSLAM.
A simple DSL connection can be pictured as follows: at one end, a DSLAM resides in the telephone company's telephone exchange; at the other end is the customer premises equipment, with an optional router. The router manages a local area network which connects PCs and other local devices. The customer may opt for a modem that contains both a router and wireless access; this option often simplifies the connection.
Exchange equipment
At the exchange, a digital subscriber line access multiplexer (DSLAM) terminates the DSL circuits and aggregates them, where they are handed off to other networking transports. The DSLAM terminates all connections and recovers the original digital information. In the case of ADSL, the voice component is also separated at this step, either by a filter or splitter integrated in the DSLAM or by specialized filtering equipment installed before it. Load coils in phone lines, used for extending their range in rural areas, must be removed to allow DSL to operate as they only allow frequencies of up to 4000 Hz to pass through phone cables.
Customer equipment
The customer end of the connection consists of a DSL modem. This converts data between the digital signals used by computers and the analog voltage signal of a suitable frequency range which is then applied to the phone line.
In some DSL variations (for example, HDSL), the modem connects directly to the computer via a serial interface, using protocols such as Ethernet or V.35. In other cases (particularly ADSL), it is common for the customer equipment to be integrated with higher-level functionality, such as routing, firewalling, or other application-specific hardware and software. In this case, the equipment is referred to as a gateway.
Most DSL technologies require the installation of appropriate DSL filters at the customer's premises to separate the DSL signal from the low-frequency voice signal. The separation can take place either at the demarcation point, or with filters installed at the telephone outlets inside the customer premises. It is possible for a DSL gateway to integrate the filter, and allow telephones to connect through the gateway.
Modern DSL gateways often integrate routing and other functionality. The system boots, synchronizes the DSL connection and finally establishes the internet IP services and connection between the local network and the service provider, using protocols such as DHCP or PPPoE.
Protocols and configurations
Many DSL technologies implement an Asynchronous Transfer Mode (ATM) layer over the low-level bitstream layer to enable the adaptation of a number of different technologies over the same link.
DSL implementations may create bridged or routed networks. In a bridged configuration, the group of subscriber computers effectively connect into a single subnetwork. The earliest implementations used DHCP to provide the IP address to the subscriber equipment, with authentication via MAC address or an assigned hostname. Later implementations often use Point-to-Point Protocol (PPP) to authenticate with a user ID and password.
Transmission modulation methods
Transmission methods vary by market, region, carrier, and equipment.
Discrete multitone modulation (DMT), the most common kind, also known as Orthogonal frequency-division multiplexing (OFDM)
Trellis-coded pulse-amplitude modulation (TC-PAM), used for HDSL2 and SHDSL
Carrierless amplitude phase modulation (CAP), deprecated in 1996 for ADSL, used for HDSL
Two-binary, one-quaternary (2B1Q), used for IDSL and HDSL
DSL technologies
DSL technologies (sometimes collectively summarized as xDSL) include:
Symmetric digital subscriber line (SDSL), umbrella term for xDSL where the bitrate is equal in both directions.
ISDN digital subscriber line (IDSL), ISDN-based technology that provides a bitrate equivalent to two ISDN bearer and one data channel, 144 kbit/s symmetric over one pair
High-bit-rate digital subscriber line (HDSL), ITU-T G.991.1, the first DSL technology that used a higher frequency spectrum than ISDN, 1,544 kbit/s and 2,048 kbit/s symmetric services, either on 2 or 3 pairs at 784 kbit/s each, 2 pairs at 1,168 kbit/s each, or one pair at 2,320 kbit/s
High-bit-rate digital subscriber line 2/4 (HDSL2, HDSL4), ANSI, 1,544 kbit/s symmetric over one pair (HDSL2) or two pairs (HDSL4)
Symmetric digital subscriber line (SDSL), specific proprietary technology, up to 1,544 kbit/s symmetric over one pair
Single-pair high-speed digital subscriber line (G.SHDSL), ITU-T G.991.2, standardized successor of HDSL and proprietary SDSL, up to 5,696 kbit/s per pair, up to four pairs
Asymmetric digital subscriber line (ADSL), umbrella term for xDSL where the bitrate is greater in one direction than the other.
ANSI T1.413 Issue 2, up to 8 Mbit/s and 1 Mbit/s
G.dmt, ITU-T G.992.1, up to 10 Mbit/s and 1 Mbit/s
G.lite, ITU-T G.992.2, more noise and attenuation resistant than G.dmt, up to 1,536 kbit/s and 512 kbit/s
Asymmetric digital subscriber line 2 (ADSL2), ITU-T G.992.3, up to 12 Mbit/s and 3.5 Mbit/s
Asymmetric digital subscriber line 2 plus (ADSL2+), ITU-T G.992.5, up to 24 Mbit/s and 3.5 Mbit/s
Very-high-bit-rate digital subscriber line (VDSL), ITU-T G.993.1, up to 52 Mbit/s and 16 Mbit/s
Very-high-bit-rate digital subscriber line 2 (VDSL2), ITU-T G.993.2, an improved version of VDSL, compatible with ADSL2+, sum of both directions up to 200 Mbit/s. G.vector crosstalk cancelling feature (ITU-T G.993.5) can be used to increase range at a given bitrate, e.g. 100 Mbit/s at up to 500 meters.
G.fast, ITU-T G.9700 and G.9701, up to approximately 1 Gbit/s aggregate uplink and downlink at 100m. Approved in December 2014, deployments planned for 2016.
XG-FAST allows for up to 10 Gbit/s on copper twisted-pair lines, but only for lengths up to 30 meters. Real-world tests have shown 8 Gbit/s on 30-meter twisted-pair lines.
Bonded DSL Rings (DSL Rings), a shared ring topology at 400 Mbit/s
Cable/DSL gateway
Etherloop Ethernet local loop
High-speed voice and data link
Rate-Adaptive Digital Subscriber Line (RADSL), designed to increase range and noise tolerance by sacrificing upstream speed
Uni-DSL (Uni digital subscriber line or UDSL), technology developed by Texas Instruments, backwards compatible with all DMT standards
Hybrid Access Networks combine existing xDSL deployments with a wireless network such as LTE to increase bandwidth and quality of experience by balancing the traffic over the two access networks.
The line-length limitations from telephone exchange to subscriber impose severe limits on data transmission rates. Technologies such as VDSL provide very high-speed but short-range links. VDSL is used as a method of delivering triple play services (typically implemented in fiber to the curb network architectures).
Terabit DSL is a proposed technology that would use the space between the dielectrics (insulators) of copper twisted-pair lines in telephone cables as waveguides for 300 GHz signals, offering speeds of up to 1 terabit per second at distances of up to 100 meters, 100 gigabits per second at 300 meters, and 10 gigabits per second at 500 meters. The first experiment was carried out with copper lines that were parallel to each other, not twisted, inside a metal pipe meant to simulate the metal armoring in large telephone cables.
See also
Dynamic spectrum management (DSM)
John Cioffi – Known as "the father of DSL"
List of countries by number of Internet users
List of interface bit rates
References
External links
ADSL Theory—Information about the background & workings of ADSL, and the factors involved in achieving a good sync between your modem and the DSLAM.
American inventions
Modems
Internet access | Digital subscriber line | Technology | 4,744 |
51,298,870 | https://en.wikipedia.org/wiki/Bolandiol%20dipropionate | Bolandiol dipropionate (USAN; brand names Anabiol, Storinal; former development code SC-7525; also known as bolandiol propionate (JAN), norpropandrolate, 19-nor-4-androstenediol dipropionate, or estr-4-ene-3β,17β-diol 3,17-dipropionate) is a synthetic anabolic-androgenic steroid (AAS) and derivative of 19-nortestosterone (nandrolone). It is an androgen ester – specifically, the 3,17-dipropionate ester of bolandiol (19-nor-4-androstenediol).
See also
Androstenediol dipropionate
Testosterone acetate butyrate
Testosterone acetate propionate
Testosterone diacetate
Testosterone dipropionate
Methandriol bisenanthoyl acetate
Methandriol diacetate
Methandriol dipropionate
References
Androgen esters
Anabolic–androgenic steroids
Propionate esters
Estranes
Estrogens
Prodrugs
Progestogens | Bolandiol dipropionate | Chemistry | 251 |
1,371,297 | https://en.wikipedia.org/wiki/Amazon%20China | Amazon China (Chinese: 亚马逊中国), formerly known as Joyo.com (Chinese: 卓越网), is an online shopping website. Joyo.com was founded in early 2000 by the Chinese entrepreneur Lei Jun in Beijing, China. The company primarily sold books and other media goods, shipping to customers nationwide. Joyo.com was renamed to “Amazon China” when sold to Amazon Inc in 2004 for US$75 Million. Amazon China closed its domestic business in China in June 2019, offering only products from sellers located overseas.
History
Original
Joyo.com was founded by Lei Jun, Chinese entrepreneur and owner of Kingsoft, in May 2000, after Kingsoft decided in 1999 to turn it into an online bookstore. Joyo.com was originally a site offering programs for downloading onto desktop computers. It later became an online bookstore and was the second online retailer of books in China after DangDang. It continued to expand its inventory and by 2003 was considered to be one of the largest online retailers of media goods worldwide. Lenovo Group Ltd and Kingsoft Corp were shareholders of Joyo.com before Tiger Technology Management LLC invested $52 million for about 20% of Joyo, making it the third-largest stakeholder. Joyo.com primarily merchandised goods such as books, music and videos to consumers nationwide. In 2004, Joyo.com was acquired by the American multinational technology company Amazon.com Inc. The company was rebranded and renamed Amazon China in October 2011. Amazon made many changes to Joyo, such as redesigning the website by adding categories and self-service features, and implementing new payment methods. Amazon also introduced the shorter URL http://z.cn, though the site continued to be hosted at www.amazon.cn. Amazon China sold foreign fashion brands, home interior goods, toys, personal care products and technology to its Chinese customers. Amazon was also planning to establish operations in Shanghai's free trade zone.
Acquisition
Amazon.com Inc. acquired Joyo.com in August 2004 for approximately US$75 million, of which about $72 million was paid in cash and $3 million in stock options. The acquisition deal also included control over other Chinese subsidiaries and partners owned by Joyo.com. Amazon.com Inc. was already present and operating in the US, Canada, France, Germany, Japan and the UK before it entered the Chinese market. At the time, Joyo's headquarters were located in the British Virgin Islands. Amazon.com Inc.'s CEO, Jeff Bezos, expressed excitement over the acquisition, but recognized the challenges of entering the Chinese market. Though fully owned by Amazon, Joyo.com kept its name and branding until 2007, when the name Joyo Amazon replaced Joyo and the website Amazon.cn was launched. Joyo continued to operate partly independently from other sites owned by Amazon for a couple of years. It expanded the variety of products being sold, including technology, cosmetics and baby products.
Sales revenues
Joyo.com was one of the largest online retailers before Amazon acquired the company, and it grew rapidly after it was launched. Sales revenues reached 56 million yuan in 2001 and 150 million yuan in 2003. According to Lin Shuixin, then president of Joyo.com, Joyo generated 70% revenue growth in the second quarter of 2003.
After the acquisition, Amazon China struggled to continue the growth and to compete with local competitors. Due to changes in operations, salaries, customer service and technology instructed by Amazon, Joyo lost US$13 after the acquisition, according to ChinaByte. The New York Times wrote that Amazon China sold less in China than in Japan, which was Amazon's smallest market at the time; it was responsible for only about 6% of Amazon's business in total and therefore not large enough to be included in Amazon's annual reports. In 2013, Amazon China reported fiscal sales of US$74.4 billion, while their competitor Alibaba had sales of approximately US$420 billion in 2014.
E-commerce in China
E-commerce is very popular in China, which is home to the world's largest e-commerce market, responsible for approximately 50% of online purchases. China is also the world's largest digital community, with approximately 830 million internet users as of 2018. China is among the world's largest exporters of commercial goods, and sold about five times as much as Japan in 2016. The US is especially important for Chinese manufacturers, as the number of China-based sellers on Amazon is growing. Today, the Alibaba-owned companies Taobao and Tmall together make up approximately 55% of the Chinese e-commerce market, and JD.com about 25%.
Seller feedback is highly valued on many Chinese e-commerce sites, and many offer rebates or gifts in exchange for comments or ratings. The concept is named "Rebate For Feedback" (RFF), a term attributed to economist Li Lingfang. Taobao launched a system in which buyers could receive "refund points" that could be collected and used as a coupon on a later purchase.
Counterfeit products
Counterfeit products are common in China, as the country is responsible for about 63.2% of the world's production of fake goods; Hong Kong is the world's second largest producer. Amazon admitted in 2019 that it had a problem with vendors selling counterfeit products and that the majority of these vendors were located in mainland China. Amazon is not legally responsible for counterfeit products sold through its website, as the products are sold by third parties, but it has been criticized for a lack of action against the issue. In 2018, Amazon implemented two programs, the Transparency program and Project Zero, aimed at eliminating counterfeit products on its site.
Singles Day Shopping Festival
In 2009, Alibaba and JD.com launched a Global Shopping Festival held on 11 November, also known as Singles Day in China. Singles Day was supposedly started in the 1990s by a group of university students in China who bought themselves gifts in opposition to Valentine's Day. November 11 was later popularized and promoted by e-commerce retailers like Alibaba as a day for single people to buy gifts for themselves. On this day, retailers offer products like phones, clothing and health care packages at heavily discounted prices. Singles Day has become an important day for Chinese e-commerce and is considered to be about four times bigger than Black Friday in the US. Alibaba sold around 200,000 different brands and generated about US$38 billion in sales on Singles Day in 2019, despite China being in the middle of a trade war with the US. Singles Day is celebrated by many online retailers but is primarily associated with Alibaba. Despite generating huge sales, Singles Day has been heavily criticized by environmental activists for contributing to large greenhouse gas emissions.
Competition
In the early 2000s, DangDang was considered the largest competitor to Joyo.com. The two were especially similar, as both focused on selling books as their main business and offered large discounts on the same products. The number of Chinese e-commerce businesses increased in the early 2000s, and these companies grew rapidly along with the rising popularity of online shopping, especially among younger consumers. Competition for consumers intensified, and after a few years in the market Alibaba became the largest and most popular player in China.
Amazon.cn offered express delivery in major Chinese cities such as Beijing, Shanghai and Guangzhou, but its competitors usually offered faster delivery at a lower price, or for free, regardless of city. Amazon.cn also imposed a minimum spend on Chinese customers for all purchases on its website, varying from 59 yuan to 200 yuan depending on the item. Amazon Inc.'s minimum spend policies also apply in other countries, but there they come with free national express delivery.
After the acquisition in 2004, Amazon's biggest competitors in China were Dangdang, Taobao, Pinduoduo, Alibaba, Tmall and JD.com, which are similar in terms of business model and the variety of products sold. In comparison to Alibaba and DangDang, Amazon did not have the same intense marketing approach as its competitors, which often ran big sales promotions and campaigns during Chinese holidays.
Alibaba
The Chinese e-commerce site Alibaba is today considered the largest e-commerce retailer in China based on yearly sales revenue. Alibaba has approximately 654 million users and has grown at a steeper rate than Amazon. Alibaba was also established in 1999, by founder Jack Ma and a team of 17 friends in Hangzhou, China. Due to their similarities, Alibaba is often referred to as the "Chinese Amazon", despite the two being separate companies with little connection to each other. Alibaba and Amazon are similar in the products sold and in popularity. The largest difference lies in their business models: Amazon sells its own products, while Alibaba operates a platform that connects sellers and customers and does not own any inventory itself. Alibaba has several subsidiaries and operates in numerous markets. It primarily focuses on e-commerce but also operates in the technology market. Alibaba has its own payment platform, Alipay, which is used both online and in stores worldwide. It has also developed a messaging app, Dingtalk, which is recognised as the world's largest professional communication app.
Termination
Amazon.com Inc. announced in 2019 that it would close down its marketplace business in China by 18 July 2019 to focus on cross-border selling to Chinese consumers. Amazon China faced tough competition as rivals like Alibaba gained popularity; it struggled for many years to gain traction and eventually stopped growing. According to Ker Zheng, a marketing specialist at Azoya, Amazon had little competitive advantage in China compared with the other countries it operated in. According to iResearch China, Amazon's market share was less than 1% when it decided to shut down Amazon China. Amazon continues to offer limited services in China, like Amazon Prime, but without the on-demand video benefits. Customers can still visit amazon.cn but can only access products imported from Amazon sites located overseas, including the US, UK, Germany and Japan. Vendors located in China, however, are still able to sell their products to consumers overseas; an estimated 200,000 Chinese sellers are active on Amazon, selling to overseas buyers, especially in the US.
On 2 June 2022, Amazon announced that it would stop operating the Kindle e-bookstore in China on 30 June 2023.
References
External links
Official website
Amazon (company)
Online retailers of China
Privately held companies of China
2004 mergers and acquisitions
E-commerce
E-commerce in China
Alibaba Group
JD.com | Amazon China | Technology | 2,204 |
67,944,539 | https://en.wikipedia.org/wiki/Groundwater%20contamination%20by%20pharmaceuticals | Groundwater contamination by pharmaceuticals, which belong to the category of contaminants of emerging concern (CEC) or emerging organic pollutants (EOP), has been receiving increasing attention in the fields of environmental engineering, hydrology and hydrogeochemistry since the last decades of the twentieth century.
Pharmaceuticals are suspected to provoke long-term effects in aquatic ecosystems even at low concentration ranges (trace concentrations) because of their bioactive and chemically stable nature, which leads to recalcitrant behaviour in the aqueous compartments; this feature is typically associated with the difficulty of degrading these compounds to innocuous molecules, similar to the behaviour exhibited by persistent organic pollutants. Furthermore, the continuous release of medical products into the water cycle raises concerns about bioaccumulation and biomagnification phenomena. As the vulnerability of groundwater systems is increasingly recognized, even by the regulating authority (the European Medicines Agency, EMA), environmental risk assessment (ERA) procedures are required when pharmaceuticals are submitted for marketing authorization, and preventive actions are urged to preserve these environments.
In the last decades of the twentieth century, scientific research efforts were fostered towards a deeper understanding of the interactions of groundwater transport and attenuation mechanisms with the chemical nature of polluting agents. Amongst the multiple mechanisms governing solute mobility in groundwater, biotransformation and biodegradation play a crucial role in determining the evolution of the system (as identified by developing concentration fields) in the presence of organic compounds such as pharmaceuticals. Other processes that might impact the fate of pharmaceuticals in groundwater include classical advective-dispersive mass transfer, as well as geochemical reactions such as adsorption onto soils and dissolution / precipitation.
One major goal in the field of environmental protection and risk mitigation is the development of mathematical formulations yielding reliable predictions of the fate of pharmaceuticals in aquifer systems, eventually followed by an appropriate quantification of predictive uncertainty and estimation of the risks associated with this kind of contamination.
General problem
Pharmaceuticals represent a serious threat to aquifer systems because of their bioactive nature, which makes them capable of interacting directly with the living microorganisms residing therein and of producing bioaccumulation and biomagnification phenomena. The occurrence of xenobiotics in groundwater has been proven to harm the delicate equilibria of aquatic ecosystems in several ways, such as promoting the growth of antibiotic-resistant bacteria or causing hormone-related sexual disruption in organisms living in surface waters. Considering the role of groundwater systems as the main worldwide drinking water resource, the capability of pharmaceuticals to interact with human tissues also poses serious concerns in terms of human health. Indeed, the majority of pharmaceuticals do not degrade in groundwater, where they accumulate due to their continuous release into the environment. These compounds reach subsurface systems through different sources, such as hospital effluents, wastewaters and landfill leachates, which clearly risks contaminating drinking water.
Most detected pharmaceutical classes
The main pharmaceutical classes detected in worldwide groundwater systems are listed below. The following categorisation is based on a medical perspective and it is often referred to as therapeutic classification.
Antibiotics
Estrogens and hormones
Anti-inflammatories and analgesics
Antiepileptics
Lipid regulators
Antihypertensives
Contrast media
Antidepressants
Antiulcer drugs and Antihistamines
Chemical aspects relevant to aquifer systems dynamics
The chemical structure of pharmaceuticals affects the type of hydro-geochemical processes that mainly impact their fate in groundwater, and it is strictly associated with their chemical properties. Therefore, a classification of pharmaceuticals based on chemical classes is a valid alternative for understanding the role of molecular structures in determining the kind of physical and geochemical processes affecting their mobility in porous media.
With regard to the occurrence of medical drugs in subsurface aquatic systems, the following chemical properties are of major interest:
Solubility in the aqueous phase
The solubility of pharmaceuticals in water affects the mobility of these compounds within aquifers. This feature depends on the polarity of the pharmaceutical, as polar substances are typically hydrophilic, thereby showing a marked tendency to dissolve in the aqueous phase, where they become solutes. This aspect impacts the dissolution / precipitation equilibrium, a phenomenon that is mathematically described in terms of the substance's solubility product (addressed in many books with the notation $K_{sp}$).
Lipophilicity, often measured through the so-called octanol-water partition coefficient (typically addressed as $K_{ow}$)
Large $K_{ow}$ values outline the non-polar character of the chemical species, which instead shows particular affinity for dissolving into organic solvents. Therefore, lipophilic pharmaceuticals are markedly subject to the risk of bioaccumulating and biomagnifying in the environment, consistent with their preferential partitioning into the organic tissues of living organisms. Pharmaceuticals with sufficiently large $K_{ow}$ are in fact subjected to specific tiers in the environmental risk assessment (ERA) procedure (to be supplied with the marketing authorisation application) and are highlighted as potential sources of bioaccumulation and biomagnification according to the EMA guidelines. Lipophilic compounds are then insoluble in water, where they persist as a phase separated from the aqueous one. This renders their mobility in groundwater essentially decoupled from dissolution / precipitation mechanisms and attributable to mean-flow transport (advection and dispersion) and soil-mediated mechanisms of reaction (adsorption).
Affinity of sorption onto the soils
This feature is expressed in terms of the so-called organic carbon-water partition coefficient, usually referred to as $K_{oc}$, which is an intrinsic property of the molecule.
Acidic character
The behaviour of molecules in relation to aqueous dissociation reactions is typically related to their acid dissociation constants, usually expressed in terms of their $pK_a$ coefficients.
Affinity to redox reactions, even in the context of bacterially-mediated metabolic pathways
The molecular structure of xenobiotics typically admits several possible reaction pathways, which are embedded in complex reaction networks and are typically referred to as transformation processes. For organic compounds such as pharmaceuticals, innumerable kinds of chemical reactions exist, most of them involving common chemical mechanisms such as functional group elimination, addition and substitution. These processes often involve further redox reactions on the substrates, here represented by pharmaceutical solutes and, eventually, their transformation products and metabolites. These processes can then be classified as either biotic or abiotic, depending on the presence or absence of bacterial communities acting as reaction mediators. In the former case, the transformation pathways are typically addressed as biodegradation or biotransformation in the hydrogeochemical literature, depending on the extent of cleavage of the parent molecule into highly oxidized, innocuous species.
Transport and attenuation processes
The fate of pharmaceuticals in groundwater is governed by different processes. The reference theoretical framework is that of reactive solute transport in porous media at the continuum scale, typically interpreted through the advective-dispersive-reactive equation (ADRE). With reference to the saturated region of the aquifer, the ADRE can be written as:

$$\frac{\partial \left( \phi \, C \right)}{\partial t} = \nabla \cdot \left( \phi \, \mathbf{D} \, \nabla C \right) - \nabla \cdot \left( \phi \, \mathbf{v} \, C \right) + R$$

where $\phi$ represents the effective porosity of the medium, and $\mathbf{x}$ and $t$ represent, respectively, the spatial coordinates vector and the time coordinate. $\nabla \cdot$ represents the divergence operator, except when it applies to $C$, where the nabla symbol stands for the gradient of $C$. The term $C(\mathbf{x}, t)$ denotes the pharmaceutical solute concentration field in the water phase (for unsaturated regions of the aquifer, the ADRE has a similar shape, but it includes additional terms accounting for volumetric contents and contaminant concentrations in phases other than water), while $\mathbf{v}$ represents the velocity field. $\mathbf{D}$ is the hydrodynamic dispersion tensor and is typically a function of the sole variable $\mathbf{v}$. Lastly, the storage term $R$ includes the accumulation or removal contributions due to all possible reactive processes in the system, i.e., adsorption, dissolution / precipitation, acid dissociation and other transformation reactions, such as biodegradation.
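To illustrate the structure of the ADRE in practice, the following minimal sketch (not from the source; the one-dimensional setting, all parameter values and the first-order decay closure for the reactive term $R$ are illustrative assumptions) solves the equation with an explicit upwind finite-difference scheme in Python:

import numpy as np

# Minimal 1D explicit finite-difference sketch of the ADRE (hypothetical
# parameter values; the reactive term R is closed as first-order decay).
nx = 200                    # number of grid cells
dx = 0.5                    # cell size [m]
v = 0.5                     # seepage velocity [m/d]
D = 0.1                     # hydrodynamic dispersion coefficient [m^2/d]
lam = 0.01                  # first-order decay rate [1/d]
T = 200.0                   # total simulated time [d]
dt = 0.4 * min(dx / v, dx**2 / (2.0 * D))   # stability-limited time step

c = np.zeros(nx)
c[0] = 1.0                  # constant-concentration inlet boundary
t = 0.0
while t < T:
    adv = -v * (c[1:-1] - c[:-2]) / dx                  # upwind advection (v > 0)
    dsp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2  # Fickian dispersion
    rxn = -lam * c[1:-1]                                # reactive sink
    c[1:-1] += dt * (adv + dsp + rxn)
    c[-1] = c[-2]           # zero-gradient outlet boundary
    t += dt
print(f"concentration at mid-domain after {T:.0f} d: {c[nx // 2]:.3f}")

Operational codes implement far more general versions of this scheme; the sketch only shows how the advective, dispersive and reactive contributions enter the balance.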
The main hydrological transport processes driving pharmaceuticals and organic contaminants migration in aquifer systems are:
Advection
Hydrodynamic dispersion
The most influential geochemical processes, also referred to as reactive processes and whose effect is embedded in the term of the ADRE, include:
Adsorption onto soil
Dissolution and precipitation
Acid dissociation and aqueous complexation
Biodegradation, biotransformation and other transformation pathways
Advection
Advective transport accounts for the contribution of solute mass transfer across the system that originates from bulk flow motion. At the continuum scale of analysis, the system is interpreted as a continuous medium rather than a collection of solid particles (grains) and empty spaces (pores) through which the fluid can flow. In this context, an average flow velocity can typically be estimated, which arises from upscaling the pore-scale velocities. Here, the fluid flow conditions ensure the validity of Darcy's law, which governs the system evolution in terms of the average fluid velocity, typically referred to as the seepage or advective velocity. Dissolved pharmaceuticals in groundwater are transferred within the domain along with the mean fluid flow and in agreement with the physical principles governing any other solute migration across the system.
Hydrodynamic dispersion
Hydrodynamic dispersion identifies a process that arises as the summation of two separate effects. First, it is associated with molecular diffusion, a phenomenon appreciated at the macroscale as a consequence of microscale Brownian motions. Secondly, it includes a contribution (called mechanical dispersion) arising as an effect of upscaling the fluid-dynamic transport problem from the pore to the continuum scale of investigation, due to the upscaling of locally inhomogeneous velocities. The latter contribution is therefore not related to the occurrence of any physical process at the pore scale, but is only a fictitious consequence of the choice of modelling scale. Hydrodynamic dispersion is then embedded in the advective-dispersive-reactive equation (ADRE) assuming a Fickian closure model. Dispersion is felt at the macroscale as being responsible for a spreading of the contaminant plume around its center of mass.
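In one dimension, the two effects are commonly combined linearly: the longitudinal dispersion coefficient is the mechanical term (dispersivity times seepage velocity) plus an effective molecular diffusion term. A minimal sketch with hypothetical values:

# Common linear closure for longitudinal hydrodynamic dispersion:
# D_L = alpha_L * v + D_m (all values below are hypothetical).
alpha_L = 1.0     # longitudinal dispersivity [m]
v = 0.5           # seepage velocity [m/d]
D_m = 8.6e-5      # effective molecular diffusion coefficient [m^2/d]

D_L = alpha_L * v + D_m
print(f"D_L = {D_L:.4f} m^2/d")  # mechanical dispersion dominates at this velocity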
Adsorption onto soil
Sorption identifies a heterogeneous reaction that is often driven by instantaneous thermochemical equilibrium. It describes the process by which a certain mass of solute dissolved in the aqueous phase adheres to a solid phase (such as the organic fraction of the soil in the case of organic compounds), thereby being removed from the liquid phase. In hydrogeochemistry, this phenomenon has been proven to cause a delay in solute mobility with respect to the case in which solely advection and dispersion occur in the aquifer. For pharmaceuticals, it can typically be interpreted using a linear adsorption model at equilibrium, which is fully applicable at low concentration ranges. The latter model relies upon the assessment of a linear partition coefficient, usually denoted as $K_d$, that depends, for organic compounds, on both the organic carbon-water partition coefficient and the organic carbon fraction in the soil. While the former term is an intrinsic chemical property of the molecule, the latter depends on the soil of the analyzed aquifer.
Sorption of trace elements like pharmaceuticals in groundwater is interpreted through the following linear isotherm model:

$$C_s = K_d \, C, \qquad K_d = K_{oc} \, f_{oc}$$

where $C_s$ identifies the adsorbed concentration on the solid phase and $f_{oc}$ denotes the organic carbon fraction of the soil.
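A common way to quantify the delaying effect of linear sorption is through the retardation factor. The following sketch combines the isotherm above with typical aquifer properties; all parameter values are hypothetical, not from the source:

# Linear-sorption partition coefficient and the resulting retardation
# factor R = 1 + (rho_b / phi) * K_d (hypothetical parameter values).
K_oc = 245.0      # organic carbon-water partition coefficient [L/kg]
f_oc = 0.01       # organic carbon fraction of the aquifer solids [-]
rho_b = 1.6       # bulk density of the porous medium [kg/L]
phi = 0.3         # effective porosity [-]

K_d = K_oc * f_oc             # linear isotherm slope: C_s = K_d * C
R = 1.0 + rho_b * K_d / phi   # retardation factor
print(f"K_d = {K_d:.2f} L/kg, R = {R:.1f}")
# R is about 14 here: the sorbing solute travels roughly 14 times
# slower than the water carrying it.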
The neutral form of organic molecules dissolved in water is typically solely responsible for sorptive mechanisms, which become more important the richer the soils are in organic carbon. Anionic forms are instead insensitive to sorptive mechanisms, while cations can undergo adsorption only under very particular conditions.
Dissolution and precipitation
Dissolution represents the heterogeneous reaction during which a solid compound, such as an organic salt in the case of pharmaceuticals, is dissolved into the aqueous phase. Here, the original salt appears in the form of both aqueous cations and anions, depending on the stoichiometry of the dissolution reaction. Precipitation represents the reverse reaction. This process is typically accomplished at thermochemical equilibrium, but in some applications of hydrogeochemical modelling it might be necessary to consider its kinetics. As an example for the case of pharmaceuticals, the non-steroidal anti-inflammatory drug diclofenac, which is commercialised as sodium diclofenac, undergoes this process in groundwater environments.
Acid dissociation and aqueous complexation
Acid dissociation is a homogeneous reaction that yields dissociation of a dissolved acid (in the water phase) into cationic and anionic forms, while aqueous complexation denotes its reverse process. The aqueous speciation of a solution is determined on the basis of the $pK_a$ coefficient, which typically ranges between approximately 3 and 50 for organic compounds such as pharmaceuticals. Since the latter are weak acids, and considering that this process is always accomplished upon instantaneous achievement of thermochemical equilibrium conditions, it is reasonable to assume that the undissociated form of the original contaminant is predominant in the water speciation for most practical cases in the field of hydrogeochemistry.
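For a weak monoprotic acid, the equilibrium split between the neutral and anionic forms follows directly from the $pK_a$ and the ambient pH (the Henderson-Hasselbalch relation). A minimal sketch, with a hypothetical $pK_a$ value:

# Equilibrium speciation of a weak monoprotic acid: fraction remaining
# in the undissociated (neutral) form as a function of pH (hypothetical pKa).
def neutral_fraction(pH: float, pKa: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (5.0, 7.0, 9.0):
    print(f"pH {pH:.1f}: neutral fraction = {neutral_fraction(pH, 8.0):.3f}")

With $pK_a = 8$, the neutral fraction goes from essentially 1 at pH 5 to about 0.09 at pH 9, showing how ambient pH controls which form, sorbing or non-sorbing, dominates.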
Biodegradation, biotransformation and other transformation pathways
Pharmaceuticals can undergo biotransformation or transformation processes in groundwater systems.
Aquifers are indeed rich reserves of minerals and other dissolved chemical species, such as organic matter, dissolved oxygen, nitrates, iron and manganese compounds, and sulfates, as well as dissolved cations such as calcium, magnesium and sodium. All of these compounds interact through complex reaction networks embedding reactive processes of different natures, such as carbonate precipitation / dissolution, acid–base reactions, sorption and redox reactions. With reference to the latter kind of process, several pathways are typically possible in aquifers because the environment is often rich in both reducing agents (like organic matter) and oxidizing agents (like dissolved oxygen, nitrates, iron and manganese oxides, and sulfates). Pharmaceuticals can act as substrates in this scenario as well, i.e., they can represent either the reducing or the oxidizing agent in the context of redox processes. In fact, most chemical reactions involving organic molecules are typically accomplished upon gain or loss of electrons, so that the oxidation state of the molecule changes along the reactive pathway. In this context, the aquifer acts as a "chemical reactor".
There are innumerable kinds of chemical reactions that pharmaceuticals can undergo in this environment, which depend on the availability of other reactants, pH and other environmental conditions, but all of these processes typically share common mechanisms. The main ones involve addition, elimination or substitution of functional groups. The mechanism of reaction is important in the field of hydrogeochemical modeling of aquifer systems because all of these reactions are typically governed by kinetic laws. Therefore, recognizing the correct molecular mechanisms through which a chemical reaction progresses is fundamental to the purpose of modelling the reaction rates correctly (for example, it is often possible to identify a rate limiting step within multistep reactions and relate the rate of reaction progress to that particular step). Modelling these reactions typically follows the classic kinetic laws, except for the case in which reactions involving the contaminant are accomplished in the context of bacterial metabolism. While in the former case the ensemble of reactions is addressed as transformation pathway, in the latter one the terms biodegradation or biotransformation are used, depending on the extent to which the chemical reactions effectively degrade the original organic molecule to innocuous compounds in their maximum oxidation state (i.e., carbon dioxide, methane and water). In case of biologically mediated pathways of reaction, which are relevant in the study of groundwater contamination by pharmaceuticals, there are appropriate kinetic laws that can be employed to model these processes in hydrogeochemical contexts. For example, the Monod and Michaelis-Menten equations are suitable options in case of biotic transformation processes involving organic compounds (such as pharmaceuticals) as substrates.
Although most of the hydrogeochemical literature addresses these processes through linear biodegradation models, several studies have been carried out since the second decade of the twenty-first century showing that such models are typically too simplified to ensure reliable predictions of the fate of pharmaceuticals in groundwater and might bias risk estimates in the context of environmental risk mitigation applications.
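To make the difference concrete, the sketch below (all parameter values are hypothetical) compares a Monod-type biodegradation rate with the first-order (linear) model it reduces to at low concentrations; the two diverge as the substrate concentration approaches and exceeds the half-saturation value:

# Monod-type biodegradation rate versus its first-order (linear)
# low-concentration limit (all parameter values are hypothetical).
mu_max = 0.8   # maximum specific degradation rate [1/d]
X = 0.05       # active biomass concentration [mg/L]
K_s = 2.0      # half-saturation concentration [mg/L]

def monod_rate(C: float) -> float:
    return mu_max * X * C / (K_s + C)

for C in (0.1, 2.0, 20.0):
    linear = (mu_max * X / K_s) * C      # first-order approximation
    print(f"C = {C:5.1f} mg/L: Monod = {monod_rate(C):.4f}, "
          f"linear = {linear:.4f} [mg/L/d]")

At the highest concentration the linear model overestimates the degradation rate by an order of magnitude, which is one reason why linear closures can bias risk estimates.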
Hydrologic and geochemical modelling approaches
Groundwater contamination by pharmaceuticals is a topic of great interest in the fields of environmental and hydraulic engineering, where research efforts have been fostered towards studies on this kind of contaminant since the beginning of the twenty-first century. The general goal of those disciplines is to develop interpretive models capable of predicting the behaviour of aquifer systems in relation to the occurrence of various types of contaminants, including medical drugs. This goal is motivated by the necessity of providing mathematical tools to predict, for example, how contaminant concentration fields develop across the aquifer over time. This may provide useful information to support decision-making processes in the context of environmental risk assessment procedures. To this purpose, several interdisciplinary strategies and tools are typically employed, the most fundamental of which are listed below:
Numerical modelling strategies are employed to simulate hydrogeochemical transport models. Examples of commonly used software packages are MODFLOW and PHREEQC, but many others are available.
Statistical inference tools are used to calibrate available hydrogeochemical models against raw data. A widely employed software package is, for example, PEST.
Knowledge of organic chemistry is a fundamental prerequisite for developing geochemical models to be fitted against data.
Laboratory or field scale experiments are designed to obtain raw data, which are necessary to study the behaviour of aquifer systems under exposure to compounds of concern.
All of these interdisciplinary tools and strategies are employed together to analyse the fate of pharmaceuticals in groundwater.
See also
Groundwater pollution
Environmental impact of pharmaceuticals and personal care products
Reactive transport modeling in porous media
Computer simulation
References
Natural resources
Aquifers
Environmental science
Water chemistry
Water pollution
Environmental issues with water
Drug manufacturing | Groundwater contamination by pharmaceuticals | Chemistry,Environmental_science | 3,780 |
478,933 | https://en.wikipedia.org/wiki/Energy%20conservation | Energy conservation is the effort to reduce wasteful energy consumption by using fewer energy services. This can be done by using energy more effectively (obtaining the same service from less, and cleaner, energy) or by changing one's behavior to use fewer energy services (for example, by driving vehicles that run on renewable energy or that use energy more efficiently). Energy conservation can be achieved through efficient energy use, which has some advantages, including a reduction in greenhouse gas emissions and a smaller carbon footprint, as well as cost, water, and energy savings.
Green engineering practices improve the life cycle of the components of machines which convert energy from one form into another.
Energy can be conserved by reducing waste and losses, improving efficiency through technological upgrades, improving operations and maintenance, changing users' behaviors through user profiling or user activities, monitoring appliances, shifting load to off-peak hours, and providing energy-saving recommendations. Observing appliance usage, establishing an energy usage profile, and revealing energy consumption patterns in circumstances where energy is used poorly can pinpoint user habits and behaviors in energy consumption. Appliance energy profiling helps identify inefficient appliances with high energy consumption and energy load. Seasonal variations also greatly influence energy load, as more air-conditioning is used in warmer seasons and more heating in colder seasons. Achieving a balance between energy load and user comfort is complex yet essential for energy conservation. On a large scale, a few factors affect energy consumption trends, including political issues, technological developments, economic growth, and environmental concerns.
User-oriented energy conservation
User behavior has a significant effect on energy conservation. It involves user activity detection, profiling, and appliance interaction behaviors. User profiling consists of the identification of energy usage patterns of the user and replacing required system settings with automated settings that can be initiated on request. Within user profiling, personal characteristics are instrumental in affecting energy conservation behavior. These characteristics include household income, education, gender, age, and social norms.
User behavior also reflects the impact of personality traits, social norms, and attitudes on energy conservation. Beliefs and attitudes toward a convenient lifestyle, environmentally friendly transport, energy security, and residential location choices affect energy conservation behavior. As a result, energy conservation can be made possible by adopting pro-environmental behavior and energy-efficient systems. Education on approaches to energy conservation can result in wise energy use. The choices made by users yield energy usage patterns; rigorous analysis of these patterns identifies wasteful ones, and improving them may reduce the energy load significantly. Therefore, human behavior is critical to determining the implications of energy conservation measures and solving environmental problems. Substantial energy conservation may be achieved if users' habit loops are modified.
User habits
User habits significantly impact energy demand; thus, providing recommendations for improving user habits contributes to energy conservation. Micro-moments are essential in realizing energy consumption patterns and are identified using a variety of sensing units positioned in prominent areas across the home. The micro-moment is an event that changes the state of the appliance from inactive to active and helps in building users' energy consumption profiles according to their activities. Energy conservation can be achieved through user habits by following energy-saving recommendations at micro-moments. Unnecessary energy usage can be decreased by selecting a suitable schedule for appliance operation. Creating an effective scheduling system requires an understanding of user habits regarding appliances.
Off-peak scheduling
Many techniques for energy conservation involve off-peak scheduling, which means operating an appliance during low-price energy hours. Such a schedule can be devised once user habits regarding appliance use are understood. Most energy providers divide the energy tariff into high-price and low-price hours; therefore, scheduling an appliance to run during an off-peak hour can significantly reduce electricity bills, as the sketch below illustrates.
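The following minimal sketch (the tariff values and the appliance cycle length are hypothetical, not from the source) picks the cheapest contiguous window of hours for a fixed-length appliance run under a two-level hourly tariff:

# Pick the cheapest contiguous start hour for a fixed-length appliance
# run under an hourly tariff (tariff values and run length hypothetical).
tariff = [0.10] * 7 + [0.25] * 16 + [0.10]   # price per kWh, hours 0-23
run_hours = 3                                 # e.g. a washing-machine cycle

def cheapest_start(prices, hours):
    windows = [sum(prices[h:h + hours]) for h in range(len(prices) - hours + 1)]
    return min(range(len(windows)), key=windows.__getitem__)

start = cheapest_start(tariff, run_hours)
cost = sum(tariff[start:start + run_hours])
print(f"cheapest start: {start:02d}:00, summed hourly rates = {cost:.2f}")

Real schedulers must also respect user constraints (for example, finishing a wash before a given hour), but the cost-minimizing core is the same window search.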
User activity detection
User activity detection leads to the precise detection of appliances required for an activity. If an appliance is active but not required for a user's current activity, it wastes energy and can be turned off to conserve energy. The precise identification of user activities is necessary to achieve this method of energy conservation.
Energy conservation opportunities by sector
Buildings
Existing buildings
Energy conservation measures have primarily focused on technological innovations that improve efficiency and on financial incentives, with theoretical explanations drawn from established analytical traditions. Existing buildings can improve energy efficiency by changing structural maintenance materials, adjusting the composition of air conditioning systems, selecting energy-saving equipment, and formulating subsidy policies. These measures can improve users' thermal comfort and reduce buildings' environmental impact. Selecting combinatorial optimization schemes that contain measures to guide and restrict users' behavior, in addition to carrying out demand-side management, can dynamically adjust energy consumption. At the same time, economic means should enable users to change their behavior and achieve a low-carbon life. Combined optimization and pricing incentives reduce building energy consumption, carbon emissions, and users' costs.
Energy monitoring through energy audits can achieve energy efficiency in existing buildings. An energy audit is an inspection and analysis of energy use and flows for energy conservation in a structure, process, or system, intending to reduce energy input without negatively affecting output. Energy audits can determine specific opportunities for energy conservation and efficiency measures as well as cost-effective strategies. Trained professionals typically accomplish this, and audits can be part of some national programs discussed above. The recent development of smartphone apps enables homeowners to complete relatively sophisticated energy audits themselves. For instance, smart thermostats can connect to standard HVAC systems to maintain energy-efficient indoor temperatures. In addition, data loggers can be installed to monitor interior temperature and humidity levels to provide a more precise understanding of the conditions. If the data gathered is compared with the users' perceptions of comfort, further fine-tuning of the interiors can be implemented (e.g., increasing the temperature where A.C. is used to prevent over-cooling). Building technologies and smart meters can allow commercial and residential energy users to visualize the impact their energy use can have in their workplaces or homes. Advanced real-time energy metering can help people save energy through their actions.
Another approach towards energy conservation is the implementation of ECMs in commercial buildings, which often employ Energy Service Companies (ESCOs) experienced in energy performance contracting. This industry has been around since the 1970s and is more prevalent than ever today. The US-based organization EVO (Efficiency Valuation Organization) has created a set of guidelines for ESCOs to adhere to in evaluating the savings achieved by ECMs. These guidelines are called the International Performance Measurement and Verification Protocol (IPMVP).
Energy efficiency can also be achieved by upgrading certain aspects of existing buildings. Making thermal improvements by adding insulation to crawl spaces and ensuring there are no leaks achieves an efficient building envelope, reducing the need for mechanical systems to heat and cool the space. High-performance insulation is complemented by double- or triple-glazed windows that minimize thermal heat transmission. Minor upgrades in existing buildings include changing mixers to low-flow models, which greatly aids water conservation; changing light bulbs to LED lights, which consume 70-90% less energy than a standard incandescent or C.F.L. bulb; replacing inefficient appliances with Energy Star-rated appliances, which consume less energy; and adding vegetation in the landscape surrounding the building to function as a shading element. A window windcatcher can reduce the total energy use of a building by 23.3%.
Energy conservation through users' behaviors requires understanding household occupants' lifestyle, social, and behavioral factors in analyzing energy consumption. This involves one-time investments in energy efficiency, such as purchasing new energy-efficient appliances or upgrading the building insulation without curtailing economic utility or the level of energy services, and energy curtailment behaviors which are theorized to be driven more by social-psychological factors and environmental concerns in comparison to the energy efficiency behaviors. Replacing existing appliances with newer and more efficient ones leads to energy efficiency as less energy is wasted throughout. Overall, energy efficiency behaviors are identified more with one-time, cost-incurring investments in efficient appliances and retrofits, while energy curtailment behaviors include repetitive, low-cost energy-saving efforts.
To identify and optimize residential energy use, conventional and behavioral economics, technology adoption theory and attitude-based decision-making, social and environmental psychology, and sociology must be analyzed. The techno-economic and psychological literature analysis focuses on the individual attitude, behavior, and choice/context/external conditions. In contrast, the sociological literature relies more on the energy consumption practices shaped by the social, cultural, and economic factors in a dynamic setting.
New buildings
Many steps can be taken toward energy conservation and efficiency when designing new buildings. Firstly, the building can be designed to optimize building performance by having an efficient building envelope with high-performing insulation and window glazing systems, window facades strategically oriented to optimize daylighting, shading elements to mitigate unwanted glare, and passive energy systems for appliances. In passive solar building designs, windows, walls, and floors are made to collect, store, and distribute solar energy in the form of heat in the winter and reject solar heat in the summer.
The key to designing a passive solar building is to take best advantage of the local climate. Elements to be considered include window placement and glazing type, thermal insulation, thermal mass, and shading. Optimizing daylighting decreases the energy wasted on incandescent bulbs; windows and balconies allow natural ventilation and reduce the need for heating and cooling; low-flow mixers aid in water conservation; and Energy Star-rated appliances consume less energy. Designing a building according to LEED guidelines while incorporating smart home technology can save a lot of energy and money in the long run. Passive solar design techniques can be applied most easily to new buildings, but existing buildings can be retrofitted.
Mainly, energy conservation is achieved by modifying user habits or providing an energy-saving recommendation of curtailing an appliance or scheduling it to low-price energy tariff hours. Besides changing user habits and appliance control, identifying irrelevant appliances concerning user activities in smart homes saves energy. Smart home technology can advise users on energy-saving strategies according to their behavior, encouraging behavioral change that leads to energy conservation. This guidance includes reminders to turn off lights, leakage sensors to prevent plumbing issues, running appliances on off-peak hours, and smart sensors that save energy. Such technology learns user-appliance activity patterns, gives a complete overview of various energy-consuming appliances, and can provide guidance to improve these patterns to contribute to energy conservation. As a result, they can strategically schedule appliances by monitoring the energy consumption profiles of the appliances, schedule devices to the energy-efficient mode, or plan to work during off-peak hours.
Appliance-oriented approaches emphasize appliance profiling, curtailment, and scheduling to off-peak hours, as supervision of appliances is key to energy conservation. This usually leads to appliance curtailment, in which an appliance is either scheduled to run at another time or turned off. Appliance curtailment involves appliance recognition, an activity-appliances model, unattended appliance detection, and an energy conservation service. The appliance recognition module detects active appliances to identify the activities of smart home users. After users' activities are identified, the association between the functioning appliances and user activities is established. The unattended appliance detection module then looks for appliances that are active but unrelated to any user activity; such appliances waste energy and can be turned off by providing recommendations to the user, as in the sketch below.
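A minimal sketch of the unattended-appliance idea follows; the activity-appliance mapping and the device states are hypothetical, not from the source:

# Flag active appliances that are unrelated to the user's current activity
# (hypothetical activity-appliance mapping; results are curtailment candidates).
activity_appliances = {
    "cooking": {"stove", "range_hood"},
    "watching_tv": {"tv", "speakers"},
    "away": set(),
}

def unattended(active: set, activity: str) -> set:
    return active - activity_appliances.get(activity, set())

print(unattended({"tv", "stove"}, "cooking"))   # -> {'tv'}: recommend switch-off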
Based on the smart home's recommendations, users can give weight to certain appliances that increase user comfort and satisfaction while conserving energy. Models of the energy consumption of appliances and the level of comfort they create can balance priorities between smart home comfort levels and energy consumption. According to Kashimoto, Ogura, Yamamoto, Yasumoto, and Ito, the energy supply is reduced based on the historical state of the appliance and increased according to the comfort level required by the user, leading to a targeted energy-saving ratio. Scenario-based energy consumption can be employed as a strategy for energy conservation, with each scenario encompassing a specific set of rules for energy consumption.
Transportation
Transporting people, goods, and services represented 29% of U.S. energy consumption in 2007. The transportation sector also accounted for about 33% of U.S. carbon dioxide emissions in 2006, with highway vehicles accounting for about 84% of that, making transportation an essential target for addressing global climate change (E.I.A., 2008). Suburban infrastructure evolved during an age of relatively easy access to fossil fuels, leading to transportation-dependent living systems.[citation needed] The amount of energy used to transport people to and from a facility, whether they are commuters, customers, vendors, or homeowners, is known as the transportation energy intensity of the building. Land is being developed at a faster rate than the population is growing, leading to urban sprawl and, therefore, high transportation energy intensity, as more people need to commute longer distances to jobs. As a result, the location of a building is essential in decreasing embodied emissions.
In transportation, state and local efforts in energy conservation and efficiency measures tend to be more targeted and smaller in scale. However, with more robust fuel economy standards, new targets for the use of alternative transportation fuels, and new efforts in electric and hybrid electric vehicles, EPAct05 and EISA provide a new set of national policy signals and financial incentives to the private sector and state and local governments for the transportation sector. Zoning reforms that allow greater urban density and designs for walking and bicycling can greatly reduce energy consumed for transportation. Many Americans work in jobs that allow for remote work instead of commuting daily, which is a significant opportunity to conserve energy.[citation needed] Intelligent transportation systems (ITS) provide a solution to traffic congestion and the carbon emissions caused by growing vehicle numbers. ITS combines improvements in information technology and systems, communications, sensors, controllers, and advanced mathematical methods with the traditional world of transportation infrastructure. It improves traffic safety and mobility, reduces environmental impact, promotes sustainable transportation, and increases productivity. ITS strengthens the connection and cooperation between people, vehicles, roads, and the environment while improving road capacity, reducing traffic accidents, and improving transportation efficiency and safety by alleviating traffic congestion and reducing pollution. It makes full use of traffic information as an application service, which can enhance the operational efficiency of existing traffic facilities.
The most significant energy-saving potential lies in urban transportation, where countries face the most problems, including in management systems, policies and regulations, planning, technology, operation, and management mechanisms. Improvements in one or several of these aspects have a positive impact on road transportation efficiency, which in turn improves the urban traffic environment.
In addition to ITS, transit-oriented development (T.O.D.) significantly improves transportation in urban areas by emphasizing density, proximity to transit, diversity of uses, and streetscape design. Density is important for optimizing location and is a way to cut down on driving. Planners can regulate development rights by exchanging them from ecologically sensitive areas to growth-friendly zones according to density transfer procedures. Distance is defined as the accessibility of rail and bus transits, which serve as deterrents for driving. For transit-oriented development to be feasible, transportation stops must be close to where people live. Diversity refers to mixed-use areas that offer essential services close to homes and offices and include residential spaces for different socioeconomic categories, commercial and retail. This creates a pedestrian shed where one area can meet people's everyday needs on foot. Lastly, the streetscapes design involves minimal parking and walkable areas that calm traffic. Generous parking incentivizes people to use cars, whereas minimal and expensive parking deters commuters. At the same time, streetscapes can be designed to incorporate bicycling lanes and designated bicycle paths and trails. People may commute by bicycle to work without being concerned about their bicycles becoming wet because of covered bicycle storage. This encourages commuters to use bicycles rather than other modes of transportation and contributes to energy saving. People will be happy to walk a few blocks from a train stop if there are attractive, pedestrian-friendly outdoor spaces nearby with good lighting, park benches, outdoor tables at cafés, shade tree plantings, pedestrian courts that are blocked off to cars, and public internet connection. Additionally, this strategy calms traffic, improving the intended pedestrian environment.
New urban planning schemes can be designed to improve connectivity in cities through networks of interconnected streets that spread out traffic flow, slow down vehicles, and make walking more pleasant. The connectivity index is calculated by dividing the number of road links by the number of road nodes; for example, a network with 30 links and 20 nodes has a connectivity index of 1.5. The higher the connectivity index, the greater the route choices and the better the pedestrian access. Realizing the transportation impacts associated with buildings allows commuters to take steps toward energy conservation. Connectivity encourages energy-conserving behaviors, as commuters use fewer cars, walk and bike more, and use public transportation. Commuters who do not have the option of public transportation can use smaller vehicles that are hybrid or have better mileage.
Consumer products
Homeowners implementing ECMs in their residential buildings often start with an energy audit. This is a way for homeowners to see which areas of their homes are using, and possibly losing, energy. Residential energy auditors are accredited by the Building Performance Institute (BPI) or the Residential Energy Services Network (RESNET). Homeowners can hire a professional, perform the audit themselves, or use a smartphone app to help with it.
Energy conservation measures are often combined into larger guaranteed Energy Savings Performance Contracts to maximize energy savings while minimizing disruption to building occupants by coordinating renovations. Some ECMs cost less to implement yet return higher energy savings. Traditionally, lighting projects were a good example of "low hanging fruit" that could be used to drive implementation of more substantial upgrades to HVAC systems in large facilities. Smaller buildings might combine window replacement with modern insulation using advanced building foams to improve energy performance. Energy dashboard projects are a new kind of ECM that relies on the behavioral change of building occupants to save energy. When implemented as part of a program, case studies, such as that for the DC Schools, report energy savings of up to 30%. Under the right circumstances, open energy dashboards can even be implemented for free to improve upon these savings even more.
Consumers are often poorly informed of the savings of energy-efficient products. A prominent example of this is the energy savings that can be made by replacing an incandescent light bulb with a more modern alternative. When purchasing light bulbs, many consumers opt for cheap incandescent bulbs, failing to take into account their higher energy costs and lower lifespans when compared to modern compact fluorescent and LED bulbs. Although these energy-efficient alternatives have a higher upfront cost, their long lifespan and low energy use can save consumers a considerable amount of money. The price of LED bulbs has also been steadily decreasing in the past five years due to improvements in semiconductor technology. Many LED bulbs on the market qualify for utility rebates that further reduce the price of the purchase to the consumer. Estimates by the U.S. Department of Energy state that widespread adoption of LED lighting over the next 20 years could result in about $265 billion worth of savings in United States energy costs.
The research one must put into conserving energy is often too time-consuming and costly for the average consumer when there are cheaper products and technology available using today's fossil fuels. Some governments and NGOs are attempting to reduce this complexity with Eco-labels that make differences in energy efficiency easy to research while shopping.
To provide the kind of information and support people need to invest money, time and effort in energy conservation, it is important to understand and link to people's topical concerns. For instance, some retailers argue that bright lighting stimulates purchasing. However, health studies have demonstrated that headache, stress, blood pressure, fatigue and worker error all generally increase with the common over-illumination present in many workplace and retail settings. It has been shown that natural daylighting increases productivity levels of workers, while reducing energy consumption.
In warm climates where air conditioning is used, any household device that gives off heat will result in a larger load on the cooling system. Items such as stoves, dishwashers, clothes dryers, hot water, and incandescent lighting all add heat to the home. Low-power or insulated versions of these devices give off less heat for the air conditioning to remove. The air conditioning system can also improve efficiency by using a heat sink that is cooler than the standard air heat exchanger, such as geothermal or water.
In cold climates, heating air and water is a major demand on household energy use. Significant energy reductions are possible by using different technologies. Heat pumps are a more efficient alternative to electrical resistance heaters for warming air or water. A variety of efficient clothes dryers are available, and clotheslines require no energy, only time. Natural-gas (or biogas) condensing boilers and hot-air furnaces increase efficiency over standard hot-flue models. Standard electric boilers can be made to run only at hours of the day when they are needed by means of a time switch, which vastly decreases energy use. In showers, a semi-closed-loop system could be used. New construction implementing heat exchangers can capture heat from wastewater or exhaust air in bathrooms, laundry, and kitchens.
In both warm and cold climate extremes, airtight thermal insulated construction is the largest factor determining the efficiency of a home. Insulation is added to minimize the flow of heat to or from the home, but can be labor-intensive to retrofit to an existing home.
Energy conservation by countries
Asia
Although energy efficiency is expected to play a vital role in cost-effectively cutting energy demand, only a small part of its economic potential is exploited in Asia. Governments have implemented a range of subsidies such as cash grants, cheap credit, tax exemptions, and co-financing with public-sector funds to encourage energy-efficiency initiatives across several sectors. Governments in the Asia-Pacific region have also implemented a range of information provision and labeling programs for buildings, appliances, and the transportation and industrial sectors. Information programs can simply provide data, such as fuel-economy labels, or actively seek to encourage behavioral changes, such as Japan's Cool Biz campaign, which encourages setting air conditioners at 28 degrees Celsius and allows employees to dress casually in the summer.
China's government has launched a series of policies since 2005 to promote the goal of reducing energy use and emissions; however, road transportation, the fastest-growing energy-consuming sector in the transportation industry, lacks specific, operational, and systematic energy-saving plans. Road transportation is the highest priority for achieving energy conservation and reducing emissions, particularly since social and economic development has entered the "new norm" period. Generally speaking, the government should make comprehensive plans for conservation and emissions reduction in the road transportation industry within the three dimensions of demand, structure, and technology, for example by encouraging trips using public transportation and new transportation modes such as car-sharing, and by increasing investment in new energy vehicles as part of structural reform.
European Union
At the end of 2006, the European Union (EU) pledged to cut its annual consumption of primary energy by 20% by 2020. The EU Energy Efficiency Directive 2012 mandates energy efficiency improvements within the EU.
As part of the EU's SAVE program, aimed at promoting energy efficiency and encouraging energy-saving behavior, the Boiler Efficiency Directive specifies minimum levels of efficiency for boilers using liquid or gaseous fuels.
There is steady progress in the implementation of energy regulation in Europe, North America, and Asia, where the highest numbers of building energy standards have been adopted and implemented. Moreover, Europe's performance with regard to energy standard activities is highly encouraging: it recorded the highest percentage of mandatory energy standards compared with the other five regions.
In 2050, energy savings in Europe could reach 67% of the 2019 baseline scenario, amounting to a demand of 361 Mtoe, under an "energy efficiency first" societal trend scenario. A condition is that there be no rebound effect; otherwise the savings are only 32%, and energy use may even increase by 42% if techno-economic potentials are not realized.
Germany has reduced its primary energy consumption by 11% from 1990 to 2015 and set itself goals of reducing it by 30% by the year 2030 and by 50% by the year 2050 in comparison to the level of 2008.
India
The Petroleum Conservation Research Association (PCRA) is an Indian governmental body created in 1978 that promotes energy efficiency and conservation in every walk of life. In the recent past, PCRA has organised mass media campaigns on television, radio, and in print. An impact-assessment survey by a third party revealed that, thanks to these campaigns, the public's overall awareness level has gone up, leading to savings of fossil fuels worth crores of rupees as well as reduced pollution.
The Bureau of Energy Efficiency is an Indian government organization created in 2001 that is responsible for promoting energy efficiency and conservation.
Protection and conservation of natural resources is carried out through Community Natural Resources Management (CNRM).
Iran
Iran's supreme leader, Ali Khamenei, has regularly criticized the country's energy administration and high fuel consumption.
Japan
Since the 1973 oil crisis, energy conservation has been an issue in Japan. All oil-based fuel is imported, so domestic sustainable energy is being developed.
The Energy Conservation Center promotes energy efficiency in every aspect of Japanese life. Public entities are implementing the efficient use of energy for industries and research, through projects such as the Top Runner Program, in which new appliances are regularly tested for efficiency and the most efficient ones are made the standard.
Middle East
The Middle East holds 40% of the world's crude oil reserves and 23% of its natural gas reserves. Conservation of domestic fossil fuels is, therefore, a legitimate priority for the Gulf countries, given domestic needs as well as the global market for these products. Energy subsidies are the chief barrier to conservation in the Gulf. Residential electricity prices can be a tenth of U.S. rates. As a result, increased tariff revenues from gas, electricity, and water sales would encourage investment in natural gas exploration and production and generation capacity, helping to alleviate future shortages.
Households in the MENA region are responsible for 53% of energy use in Saudi Arabia and 57% of the UAE's ecological footprint. This is partially due to poorly designed and constructed buildings, mainly under a cheap energy model that has left them without contemporary control technology or even proper insulation and efficient appliances. Building energy consumption can be cut by 20% under a combination of insulation, efficient windows and appliances, shading, reflective roofing, and a host of automated controls that adjust energy use.
Governments could also set minimum energy efficiency and water use standards for imported appliances sold inside their countries, effectively banning the sale of inefficient air conditioners, dishwashers, and washing machines. Administration of such laws would essentially be a function of national customs services. Governments could go further, offering incentives, or mandates, that air conditioners over a certain age be replaced.
Lebanon
In Lebanon, the Lebanese Center for Energy Conservation (LCEC) has been promoting the development of efficient and rational uses of energy and the use of renewable energy at the consumer level since 2002. It was created as a project financed by the Global Environment Facility (GEF) and the Ministry of Energy and Water (MEW) under the management of the United Nations Development Programme (UNDP), and gradually established itself as an independent national technical center, although it continues to be supported by the UNDP, as indicated in the Memorandum of Understanding (MoU) signed between MEW and UNDP on 18 June 2007.
Nepal
Until recently, Nepal focused on exploiting its huge water resources to produce hydropower; demand-side management and energy conservation were not in the focus of government action. In 2009, bilateral development cooperation between Nepal and the Federal Republic of Germany agreed upon the joint implementation of the "Nepal Energy Efficiency Programme". The lead executing agency for the implementation is the Water and Energy Commission Secretariat (WECS). The aim of the programme is the promotion of energy efficiency in policymaking, in rural and urban households, and in industry.
Due to the lack of a government organization that promotes energy efficiency in the country, the Federation of Nepalese Chambers of Commerce and Industry (FNCCI) has established the Energy Efficiency Centre under its roof to promote energy conservation in the private sector. The Energy Efficiency Centre is a non-profit initiative offering energy auditing services to industry. The centre is also supported by the Nepal Energy Efficiency Programme of the Deutsche Gesellschaft für Internationale Zusammenarbeit.
A study conducted in 2012 found that Nepalese industries could save 160,000 megawatt-hours of electricity and 8,000 terajoules of thermal energy (from fuels such as diesel, furnace oil, and coal) every year. These savings are equivalent to an annual energy cost reduction of up to 6.4 billion Nepalese rupees.
As a result of the Nepal Economic Forum 2014, an economic reform agenda for the priority sectors was declared, focusing among other things on energy conservation. In the energy reform agenda, the government of Nepal committed to introducing incentive packages in the budget of the fiscal year 2015/16 for industries that practice energy efficiency or use efficient technologies (including cogeneration).
New Zealand
In New Zealand the Energy Efficiency and Conservation Authority is the Government Agency responsible for promoting energy efficiency and conservation. The Energy Management Association of New Zealand is a membership-based organization representing the New Zealand energy services sector, providing training and accreditation services with the aim of ensuring energy management services are credible and dependable.
Nigeria
In Nigeria, the Lagos State Government is encouraging Lagosians to adopt an energy conservation culture. In 2013, the Lagos State Electricity Board (LSEB) ran an initiative tagged "Conserve Energy, Save Money" under the Ministry of Energy and Mineral Resources. The initiative is designed to sensitize Lagosians to energy conservation by influencing their behavior through do-it-yourself tips. In September 2013, Governor Babatunde Raji Fashola of Lagos State and the campaign ambassador, rapper Jude "MI" Abaga, participated in the Governor's conference video call on the topic of energy conservation.
In addition to this, during the month of October (the official energy conservation month in the state), LSEB hosted experience centers in malls around Lagos State where members of the public were encouraged to calculate their household energy consumption and discover ways to save money using a consumer-focused energy app. To get Lagosians started on energy conservation, solar lamps and energy-saving bulbs were also handed out.
In Kaduna State, the Kaduna Power Supply Company (KAPSCO) ran a program to replace all light bulbs in Public Offices; fitting energy-saving bulbs in place of incandescent bulbs. KAPSCO is also embarking on an initiative to retrofit all conventional streetlights in the Kaduna Metropolis to LEDs which consume much less energy.
Sri Lanka
Sri Lanka currently uses fossil fuels, hydro power, wind power, solar power, and dendro power for its day-to-day power generation. The Sri Lanka Sustainable Energy Authority plays a major role in energy management and energy conservation. Today, most industries are requested to reduce their energy consumption by using renewable energy sources and optimizing their energy usage.
Turkey
Turkey aims to decrease its energy intensity (the amount of energy consumed per unit of GDP) by at least 20% by 2023.
United Kingdom
The Department for Business, Energy and Industrial Strategy is responsible for promoting energy efficiency in the United Kingdom.
United States
The United States is currently the second-largest single consumer of energy, following China. The U.S. Department of Energy categorizes national energy use in four broad sectors: transportation, residential, commercial, and industrial.
About half of U.S. energy consumption, in the transportation and residential sectors, is primarily controlled by individual consumers. In the typical American home, space heating is the most significant energy use, followed by electrical technology (appliances, lighting, and electronics) and water heating. Commercial and industrial energy expenditures are determined by business entities and other facility managers. National energy policy has a significant effect on energy usage across all four sectors.
Since the oil embargoes and price spikes of the 1970s, energy efficiency and conservation have been fundamental tenets of U.S. energy policy. U.S. energy policies and programs, including federal and state legislation and regulatory actions, have broadened the scope of energy conservation and efficiency measures over time to include all economic sectors and all geographical areas of the nation. Measurable energy conservation and efficiency gains in the 1980s led to the 1987 Energy Security Report to the President (DOE, 1987), which stated that "the United States uses about 29 quads less energy in a year today than it would have if our economic growth since 1972 had been accompanied by the less-efficient trends in energy use we were following at that time". The DOE strategy and the legislation included new strategies for strengthening conservation and efficiency in buildings, industry, and electric power, such as integrated resource planning for electric and natural gas utilities and efficiency and labeling standards for 13 categories of residential appliances and equipment. A lack of national consensus on how to proceed interfered with developing a consistent and comprehensive approach. Nevertheless, the Energy Policy Act of 2005 (EPAct05; 109th U.S. Congress, 2005) contained many new energy conservation and efficiency provisions in the transportation, buildings, and electric power sectors.
The most recent federal law to increase and broaden U.S. energy conservation and efficiency laws, programs, and practices is the Energy Independence and Security Act of 2007 (EISA). Over the coming decades, EISA is anticipated to reduce energy use significantly, because it contains more standards and targets than previous legislation. Both acts reinforce the importance of lighting and appliance efficiency programs, targeting an additional 70% lighting efficiency by 2020, introducing 45 new standards for appliances, and setting new standards for vehicle fuel economy. The Federal Government is also promoting a new model code for the construction industry targeting 30% more efficient building practices. Additionally, according to the American Council for an Energy-Efficient Economy (ACEEE), the EISA's energy efficiency and conservation initiatives will cut carbon dioxide emissions by 9% in 2030. These requirements cover appliance and lighting efficiency; energy savings in homes, businesses, and public buildings; the effectiveness of industrial manufacturing facilities; and the efficiency of electricity supply and end use. Expectations are high for increased energy savings from these initiatives, which have already started contributing to new federal, state, and local laws, programs, and practices across the U.S.
The development and use of alternative transportation fuels (whose supply is expected to expand by 15% by 2022), renewable energy sources, and other clean energy technologies have also received more attention and financial incentives. Recent policies also emphasize growing the use of coal with carbon capture and sequestration, solar, wind, nuclear, and other clean energy sources.
In February 2023, the United States Department of Energy proposed a set of new energy efficiency standards that, if implemented, would save users of various electric appliances in the United States around $3.5 billion per year and, by 2050, would reduce carbon emissions by an amount equal to the annual emissions of 29 million homes.
Mechanisms to promote conservation
Governmental mechanisms
Governments at the national, regional, and local levels may implement policies to promote energy efficiency. Building energy rules can cover the energy consumption of an entire structure or of specific building components, such as heating and cooling systems. They represent some of the most frequently used instruments for energy efficiency improvements in buildings and can play an essential role in improving energy conservation in buildings. There are multiple reasons for the growth of these policies and programs since the 2000s, including cost savings as energy prices increased, growing concern about the environmental impacts of energy use, and public health concerns. Policies and programs related to energy conservation are critical to establishing safety and performance levels, assisting in consumer decision-making, and explicitly identifying energy-conserving and energy-efficient products. Recent policies include new programs and regulatory incentives that call for electric and natural gas utilities to increase their involvement in delivering energy-efficiency products and services to their customers. For example, the National Action Plan for Energy Efficiency (NAPEE) is a public-private partnership created in response to EPAct05 that brings together senior executives from electric and natural gas utilities, state public utility commissions, other state agencies, and environmental and consumer groups representing every region of the country. The success of building energy regulation in effectively controlling energy consumption in the building sector will be, to a great extent, associated with the adopted energy performance indicator and the promoted energy assessment tools. Such regulation can help overcome significant market barriers and ensure cost-effective energy efficiency opportunities are incorporated into new buildings. This is crucial in emerging nations where new construction is developing rapidly and where market and energy prices sometimes discourage efficient technologies. A survey of building energy standards development and adoption showed that 42% of the emerging and developing countries surveyed have no energy standard in place, 20% have mandatory standards, 22% have a mix, and 16% have proposed standards.
The major impediments to implementing building energy regulations for energy conservation and efficiency in the building sector are institutional barriers and market failures rather than technical problems, as pointed out by Nature Publishing Group (2008). Among these, Santamouris (2005) includes a lack of owners' awareness of the benefits of energy conservation and of building energy regulations, insufficient awareness and training of property managers, builders, and engineers, and a lack of specialized professionals to ensure compliance. Overall, the development and adoption of building energy regulations, such as energy standards, in developing countries still lag far behind their adoption and implementation in developed countries.
Building energy standards are starting to appear in Africa, Latin America, and the Middle East, even though this is a recent development according to the results obtained in this study. The level of progress on energy regulation activity in Africa, Latin America, and the Middle East is increasing, given the higher number of energy standard proposals recorded in these regions. According to the Royal Institution of Chartered Surveyors, several codes are being developed in developing countries with UNDP and GEF support. These typically include elemental and integrated routes to compliance, such as a fundamental method defining the performance requirements of specific building elements. However, these regions still lag far behind developed nations in building energy regulation development, implementation, and compliance. Also, decision-making on energy regulations still rests with government alone, with little or no input from non-governmental entities. As a result, lower energy regulation development is recorded in these regions compared to regions with integrated and consensus-based approaches.
Additionally, there is growing government involvement in the development and implementation of energy standards; 62% of Middle Eastern respondents, 45% of African respondents, and 43% of Latin American respondents indicated that existing government agencies, such as building agencies and energy agencies, are involved in implementing building energy standards in their respective nations, as opposed to 20% of European respondents, 38% of Asian respondents, and 0% of North American respondents. Several North African nations, such as Tunisia and Egypt, have programs relating to building energy standards, while Algeria and Morocco are now seeking to establish them, according to the Royal Institution of Chartered Surveyors. Egypt's residential energy standard became law in 2005, and its commercial standard was anticipated to follow. The standards provide minimum performance requirements for air conditioners and other appliances, as well as elemental and integrated compliance pathways. However, it was claimed that enforcement legislation was still required in 2005. Additionally, Morocco launched a program in 2005 to create thermal energy requirements for construction, concentrating on the hospitality, healthcare, and communal housing sectors.
Mandatory energy standards
Energy standards are the primary way governments foster energy efficiency as a public good. A standard is prepared by a recognized standard-setting organization, and standards developed by recognized organizations are often used as the basis for the development and updating of building codes. They allow innovative approaches and techniques to achieve effective energy use and optimum building performance, and they encourage cost-effective energy use by building components, including the building envelope, lighting, HVAC, electrical installations, lifts and escalators, and other equipment. Energy-efficiency standards have been expanded and strengthened for appliances, building equipment, and lighting. For example, appliance and equipment standards are being developed for a new range of devices, including reduction goals for "standby" power that keeps consumer electronic products in a ready-to-use mode. Mandatory standards require certain levels of energy performance from a car, building, appliance, or other technical equipment; if the vehicle, building, appliance, or equipment does not meet these standards, there may be restrictions on its sale or rental. In the U.K., these are called "minimum energy efficiency standards" (MEES) and were applied to privately rented accommodation in 2019.
Energy codes and standards are vital in setting minimum requirements for energy-efficient design and construction. Buildings should be developed following energy standards to save energy efficiently. The standards specify uniform requirements for new buildings, additions, and modifications, and are published by national organizations such as the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). State and municipal governments frequently use energy standards as the technical foundation for creating their energy regulations. Some energy standards are written in mandatory and enforceable language, making it simple for governments to add the standards' provisions directly to their laws or regulations.
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is a well-known example of a standard-making organization. The organization dates to the nineteenth century and is international in its membership (About ASHRAE 2018). Examples of ASHRAE standards that relate to energy conservation in the built environment are:
Standard 62.1-2016 Ventilation for Acceptable Indoor Air Quality
Standard 90.2-2007 Energy Efficient Design of Low-Rise Residential Buildings
Standard 100-2018 Energy Efficiency in Existing Buildings
Standard 189.1-2014 Standard for the Design of High-Performance Green Buildings
The Residential Energy Services Network (RESNET) is a crucial benchmark for energy reduction. RESNET's Home Energy Rating System (HERS), which is based on the International Code Council's (ICC) energy code, is used to rate home energy consumption on a standard numerical scale that examines factors in home energy use (About HERS 2018). The American National Standards Institute (ANSI) has acknowledged the HERS assessment system as a national benchmark for evaluating energy efficiency. The ICC's International Energy Conservation Code (IECC) requires an energy rating index, and the main index used in the residential building sector is HERS. The mortgage financing sector makes substantial use of the HERS index: a home's expected energy usage may affect the available mortgage funds, with more energy-efficient, lower-energy homes potentially qualifying for a better mortgage rate or amount.
Mandatory energy labels
Many governments require that a car, building, or piece of equipment be labeled with its energy performance. This allows consumers and customers to see the energy implications of their choices, but does not restrict their choices or regulate which products are available to choose from.
Labeling by itself, however, neither makes it easy to compare options (such as filtering by energy efficiency in online stores) nor ensures that the best energy-conserving options are accessible (such as being stocked in the frequented local store). (An analogy would be nutritional labeling on food.)
An online trial that displayed the estimated financial energy cost of refrigerators alongside their EU energy-efficiency class (EEEC) labels found that such labels involve a trade-off: the financial information helps, but selecting a product from the many available options, which are often unlabelled and carry no EEEC requirement for being bought, used, or sold within the EU, costs additional effort or time. Moreover, in this one trial the labeling was ineffective in shifting purchases towards more sustainable options.
Energy taxes
Some countries employ energy or carbon taxes to motivate energy users to reduce their consumption. Carbon taxes can motivate consumption to shift to energy sources with fewer emissions of carbon dioxide, such as solar power, wind power, hydroelectricity or nuclear power, while discouraging cars with combustion engines, jet fuel, oil, fossil gas and coal. On the other hand, taxes on all energy consumption can reduce energy use across the board while reducing a broader array of environmental consequences arising from energy production. The state of California employs a tiered energy tax whereby every consumer receives a baseline energy allowance that carries a low tax. As usage increases above that baseline, the tax increases drastically. Such programs aim to protect poorer households while creating a larger tax burden for high energy consumers.
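A minimal sketch of how such a tiered tariff can be computed; the baseline allowance and both rates below are hypothetical illustrations, not actual California values:

```python
def tiered_energy_charge(usage_kwh, baseline_kwh=300.0,
                         base_rate=0.10, surcharge_rate=0.30):
    # Usage up to the baseline allowance is billed at a low rate;
    # usage above the baseline is billed at a much higher rate.
    # All values here are illustrative, not actual California rates.
    within_baseline = min(usage_kwh, baseline_kwh)
    above_baseline = max(usage_kwh - baseline_kwh, 0.0)
    return within_baseline * base_rate + above_baseline * surcharge_rate

# A light user pays the low rate throughout; a heavy user pays steeply more.
print(tiered_energy_charge(250.0))  # 25.0
print(tiered_energy_charge(600.0))  # 30.0 + 90.0 = 120.0
```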
Developing countries specifically are less likely to impose policy measures that slow carbon emissions as this would slow their economic development. These growing countries may be more likely to support their own economic growth and support their citizens rather than decreasing their carbon emissions.
The following pros and cons illustrate some of the potential effects of a carbon tax policy.
Pros of Carbon Tax include:
Making polluters pay the external cost of carbon emissions.
Enables greater social efficiency as all citizens pay the full social cost.
Raises revenue which can, in turn, be spent on mitigating the effects of pollution.
Encourages firms and consumers to search for non-carbon producing alternatives (ex. solar power, wind power, hydroelectricity, or nuclear power).
Reduces environmental costs associated with excess carbon pollution.
Cons of Carbon Tax include:
Businesses claim that higher taxes can discourage investment and economic growth.
A carbon tax may encourage tax evasion as firms may pollute in secret to avoid a carbon tax.
It may be difficult to measure external costs and how much the carbon tax should truly be.
There are administration costs in measuring pollution and collecting the associated tax.
Firms may move production to countries in which there is no carbon tax.
Non-governmental mechanisms
Voluntary energy standards
Another aspect of promoting energy efficiency is the use of the Leadership in Energy and Environmental Design (LEED) voluntary building design standards, a program supported by the US Green Building Council. The "Energy and Atmosphere" prerequisite addresses energy issues, focusing on energy performance, renewable energy, and related measures. See green building.
See also
Annual fuel use efficiency
Domestic energy consumption
Climate change mitigation
Efficient energy use
Energy conservation law
Energy crisis
Energy monitoring and targeting
Energy recovery
Energy recycling
Energy storage
Green computing
High-temperature insulation wool
Induced demand
Jevons paradox
Khazzoom–Brookes postulate
List of energy storage projects
List of low-energy building techniques
Low Carbon Communities
Marine fuel management
Minimum energy performance standard
One Watt Initiative
Overconsumption
Passive house
Renewable heat
Smart grid
Superinsulation
Thermal efficiency
Water heat recycling
Window film
Zero-energy building
References
Further reading
GA Mansoori, N Enayati, LB Agyarko (2016), Energy: Sources, Utilization, Legislation, Sustainability, Illinois as Model State, World Sci. Pub. Co.,
Alexeew, Johannes; Carolin Anders and Hina Zia (2015): Energy-efficient buildings – a business case for India? An analysis of incremental costs for four building projects of the Energy-Efficient Homes Programme. Berlin/New Delhi: Adelphi/TERI
Gary Steffy, Architectural Lighting Design, John Wiley and Sons (2001)
Lumina Technologies, Analysis of energy consumption in a San Francisco Bay Area research office complex, for the (confidential) owner, Santa Rosa, Ca. 17 May 1996
External links
bigEE – Your guide to energy efficiency in buildings
Energy saving advice and grants for UK consumers
Energy efficiency and renewable energy at the U.S. Department of Energy
EnergyStar – for commercial buildings and plants
Ulrich Hottelet: Want to Save the Earth? Pick a Clothesline, Atlantic Times, November 2007
Energy Efficiency in Asia and the Pacific Asian Development Bank
Energy Saving Tips Save up to $100 on power bills per year by switching off any unused appliances.
Conservation
Sustainable energy | Energy conservation | Environmental_science | 9,865 |
34,575,941 | https://en.wikipedia.org/wiki/Volume%20solid | Volume solid is the volume of paint after it has dried.
Paint
This is different from the weight solids. Paint may contain solvent, resin, pigments, and additives; many paints do not contain any solvent. After the paint is applied, the solid portion is left on the substrate. Volume solids is the term that indicates the solid proportion of the paint on a volume basis. For example, if the paint is applied as a wet film at a 100 μm thickness and the volume solids of the paint is 50%, then the dry film thickness (DFT) will be 50 μm, as 50% of the wet paint has evaporated. If the volume solids is 100% and the wet film thickness is 100 μm, then after complete drying the DFT will be 100 μm, because no solvent will have evaporated.
This is an important concept when using paint industrially to calculate the cost of painting; it can be regarded as the real volume of paint.
The volume solids of a paint can be calculated as:

volume solids (%) = (total volume of the solid ingredients in the paint / total volume of all ingredients in the paint) × 100%
A simple method that anyone can do to determine volume solids empirically is to apply paint to a steel surface with an application knife and measure the wet film thickness. Then cure the paint and measure the dry film thickness. The percentage of dry to wet represents the percentage of volume solids.
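Both relationships (predicting the dry film thickness from the volume solids, and determining the volume solids empirically from the measured film thicknesses) are simple ratios. A minimal sketch in Python, using the illustrative figures from this article:

```python
def dry_film_thickness(wet_film_um, volume_solids_pct):
    # Dry film thickness (DFT) left after the volatile portion evaporates.
    return wet_film_um * volume_solids_pct / 100.0

def volume_solids_from_films(dry_film_um, wet_film_um):
    # Empirical volume solids: percentage of dry to wet film thickness.
    return 100.0 * dry_film_um / wet_film_um

print(dry_film_thickness(100.0, 50.0))        # 50.0 um, as in the example above
print(volume_solids_from_films(50.0, 100.0))  # 50.0 %
```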
In earlier days, volume solids were measured by a disc method, but a sophisticated instrument is now also available that needs only a drop of paint to check the volume solids.
Understanding volume solids reveals the true cost of different coatings and how much paint is needed to perform its function. Generally, more expensive paints have higher volume solids and provide better coverage.
References
Painting
Materials science | Volume solid | Physics,Materials_science,Engineering | 380 |
5,297,278 | https://en.wikipedia.org/wiki/Dowling%20geometry | In combinatorial mathematics, a Dowling geometry, named after Thomas A. Dowling, is a matroid associated with a group. There is a Dowling geometry of each rank for each group. If the rank is at least 3, the Dowling geometry uniquely determines the group. Dowling geometries have a role in matroid theory as universal objects (Kahn and Kung, 1982); in that respect they are analogous to projective geometries, but based on groups instead of fields.
A Dowling lattice is the geometric lattice of flats associated with a Dowling geometry. The lattice and the geometry are mathematically equivalent: knowing either one determines the other. Dowling lattices, and by implication Dowling geometries, were introduced by Dowling (1973a,b).
A Dowling lattice or geometry of rank n of a group G is often denoted Qn(G).
The original definitions
In his first paper (1973a) Dowling defined the rank-n Dowling lattice of the multiplicative group of a finite field F. It is the set of all those subspaces of the vector space Fn that are generated by subsets of the set E that consists of vectors with at most two nonzero coordinates. The corresponding Dowling geometry is the set of 1-dimensional vector subspaces generated by the elements of E.
In his second paper (1973b) Dowling gave an intrinsic definition of the rank-n Dowling lattice of any finite group G. Let S be the set {1,...,n}. A G-labelled set (T, α) is a set T together with a function α: T → G. Two G-labelled sets, (T, α) and (T, β), are equivalent if there is a group element, g, such that β = gα.
An equivalence class is denoted [T, α].
A partial G-partition of S is a set γ = {[B1,α1], ..., [Bk,αk]} of equivalence classes of G-labelled sets such that B1, ..., Bk are nonempty subsets of S that are pairwise disjoint. (k may equal 0.)
A partial G-partition γ is said to be ≤ another one, γ*, if
every block of the second is a union of blocks of the first, and
for each Bi contained in B*j, αi is equivalent to the restriction of α*j to domain Bi .
This gives a partial ordering of the set of all partial G-partitions of S. The resulting partially ordered set is the Dowling lattice Qn(G).
The definitions are valid even if F or G is infinite, though Dowling mentioned only finite fields and groups.
Graphical definitions
A graphical definition was then given by Doubilet, Rota, and Stanley (1972). We give the slightly simpler (but essentially equivalent) graphical definition of Zaslavsky (1991), expressed in terms of gain graphs.
Take n vertices, and between each pair of vertices, v and w, take a set of |G| parallel edges labelled by each of the elements of the group G. The labels are oriented, in that, if the label in the direction from v to w is the group element g, then the label of the same edge in the opposite direction, from w to v, is g−1. The label of an edge therefore depends on the direction of the edge; such labels are called gains. Also add to each vertex a loop whose gain is any value other than 1. (1 is the group identity element.) This gives a graph which is called GKn° (note the raised circle). (A slightly different definition is needed for the trivial group; the added edges must be half edges.)
A cycle in the graph then has a gain. The cycle is a sequence of edges, e1e2···ek. Suppose the gains of these edges, in a fixed direction around the cycle, are g1, g2, ..., gk. Then the gain of the cycle is the product, g1g2···gk. The value of this gain is not completely well defined, since it depends on the direction chosen for the cycle and on which is called the "first" edge of the cycle. What is independent of these choices is the answer to the following question: is the gain equal to 1 or not? If it equals 1 under one set of choices, then it is also equal to 1 under all sets of choices.
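As a minimal sketch of this gain computation, the following Python fragment uses the additive cyclic group of integers mod 3 as the gain group (gains compose by addition, the identity is 0, and the inverse of g is -g); the vertices and gain values are illustrative, not part of Dowling's construction:

```python
M = 3      # gain group: integers mod 3 under addition (identity element is 0)
gain = {}  # oriented gains: gain[(v, w)] is the gain in the direction v -> w

def add_edge(v, w, g):
    # Store the gain g from v to w; the reverse direction carries the inverse.
    gain[(v, w)] = g % M
    gain[(w, v)] = (-g) % M

add_edge(1, 2, 1)
add_edge(2, 3, 1)
add_edge(3, 1, 1)

def cycle_gain(vertices):
    # Gain of the cycle v0 -> v1 -> ... -> v0, composed edge by edge.
    total = 0  # start from the identity element
    for v, w in zip(vertices, vertices[1:] + vertices[:1]):
        total = (total + gain[(v, w)]) % M
    return total

# Whether the gain equals the identity is independent of the direction
# and starting edge chosen for the cycle.
print(cycle_gain([1, 2, 3]))  # 1+1+1 = 3, i.e. 0 mod 3: the identity
print(cycle_gain([3, 2, 1]))  # reversed direction: also the identity
```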
To define the Dowling geometry, we specify the circuits (minimal dependent sets). The circuits of the matroid are
the cycles whose gain is 1,
the pairs of cycles with both gains not equal to 1, and which intersect in a single vertex and nothing else, and
the theta graphs in which none of the three cycles has gain equal to 1.
Thus, the Dowling geometry Qn(G) is the frame matroid (or bias matroid) of the gain graph GKn° (the raised circle denotes the presence of loops).
Other, equivalent definitions are described in the article on gain graphs.
Characteristic polynomial
One reason for interest in Dowling lattices is that the characteristic polynomial is very simple. If L is the Dowling lattice of rank n of a finite group G having m elements, then

$$p_L(t) = \prod_{k=0}^{n-1} (t - 1 - km),$$

an exceptionally simple formula for any geometric lattice.
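As a quick, illustrative check (not taken from Dowling's papers), expanding the formula for rank n = 2 gives

$$p_{Q_2(G)}(t) = (t - 1)(t - 1 - m) = t^2 - (m + 2)\,t + (m + 1),$$

which matches the general form $t^2 - a\,t + (a - 1)$ of the characteristic polynomial of a rank-2 geometric lattice with $a$ atoms, since Q2(G) has m + 2 points: the m labelled links between the two vertices plus the two loops.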
Generalizations
There is also a Dowling geometry, of rank 3 only, associated with each quasigroup; see Dowling (1973b). This does not generalize in a straightforward way to higher ranks. There is a generalization due to Zaslavsky (2012) that involves n-ary quasigroups.
References
Peter Doubilet, Gian-Carlo Rota, and Richard P. Stanley (1972), On the foundations of combinatorial theory (VI): The idea of generating function. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Berkeley, Calif., 1970/71), Vol. II: Probability Theory, pp.\ 267–318. University of California Press, Berkeley, Calif., 1972.
T.A. Dowling (1973a), A q-analog of the partition lattice. Chapter 11 in: J.N. Srivastava et al., eds., A Survey of Combinatorial Theory (Proceedings of an International Symposium, Ft. Collins, Colo., 1971), pp. 101–115. North-Holland, Amsterdam, 1973.
T.A. Dowling (1973b), A class of geometric lattices based on finite groups. Journal of Combinatorial Theory, Series B, Vol. 14 (1973), pp. 61–86.
Kahn, Jeff, and Kung, Joseph P.S. (1982), Varieties of combinatorial geometries. Transactions of the American Mathematical Society, Vol. 271, pp. 485–499.
Thomas Zaslavsky (1991), Biased graphs. II. The three matroids. Journal of Combinatorial Theory, Series B, Vol. 51, pp. 46–72.
Thomas Zaslavsky (2012), Associativity in multary quasigroups: The way of biased expansions. "Aequationes Mathematicae", Vol. 83, no. 1, pp. 1–66.
Matroid theory
Finite groups
Finite fields | Dowling geometry | Mathematics | 1,549 |
62,520,275 | https://en.wikipedia.org/wiki/IBM%20drum%20storage | In addition to the drums used as main memory by IBM, e.g., IBM 305, IBM 650, IBM offered drum devices as secondary storage for the 700/7000 series and System/360 series of computers.
IBM 731
The IBM 731 is a discontinued storage unit used on the IBM 701. It has a storage capacity of 2,048 36-bit words (9,216 8-bit bytes).
IBM 732
The IBM 732 is a discontinued storage unit used on the IBM 702. It has a storage capacity of 60,000 6-bit characters (45,000 8-bit bytes).
IBM 733
The IBM 733 is a discontinued storage unit used on the IBM 704 and IBM 709. It has a storage capacity of 8192 36-bit words (36,864 8-bit bytes).
IBM 734
The IBM 734 is a discontinued storage unit used on the IBM 705. It has a storage capacity of 60,000 6-bit characters (45,000 8-bit bytes).
IBM 7320
The IBM 7320 is a discontinued storage unit manufactured by IBM. Announced on December 10, 1962 for the IBM 7090 and 7094 computer systems, it was retained for the earliest System/360 systems as a count key data device, and was discontinued in 1965. The 7320 is a vertically mounted head-per-track device with 449 tracks: 400 data tracks, 40 alternate tracks, and 9 clock/format tracks. The rotational speed is 3,490 rpm, so the average rotational delay is 8.6 milliseconds.
Attachment to a 709x system is through an IBM 7909 Data Channel and an IBM 7631 File Control unit, which can attach up to five random-access storage units, a mix of 7320 and 1301 DASD. One or two 7631 controllers can attach to a computer system, but the system can still attach only a total of five DASD. When used with a 709x, a track holds 2,796 6-bit characters, and a 7320 unit holds 1,118,400 characters. Data transfer rate is 202,800 characters per second.
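Both the quoted average rotational delay (on average, half a revolution passes before the desired data reaches a head) and the per-unit capacity follow from simple arithmetic; a quick check in Python using the figures above:

```python
rpm = 3490
avg_delay_ms = (60.0 / rpm) / 2 * 1000  # half a revolution, in milliseconds
print(f"average rotational delay: {avg_delay_ms:.1f} ms")  # 8.6 ms

# Capacity on a 709x: 400 data tracks of 2,796 six-bit characters each.
print(f"capacity: {400 * 2796:,} characters")  # 1,118,400
```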
The 7320 attaches to a System/360 through a channel and an IBM 2841 Storage Control unit. Each 2841 can attach up to eight 7320 devices. When used with a System/360, a track holds 2,081 8-bit bytes, and a 7320 unit holds 878,000 bytes. Data transfer rate is 135,000 bytes per second.
The 7320 was superseded by the IBM 2301 in mid-1966.
IBM 2301
The IBM 2301 is a magnetic drum storage device introduced in the late 1960s to "provide large capacity, direct access storage for IBM System/360 Models 65, 67, 75, or 85." The vertically mounted drum rotates at around 3,500 revolutions per minute, and has a head-per-track access mechanism and a capacity of 4 MB. The 2301 has 800 physical tracks; four physical tracks make up one logical track which is read or written as a unit. The 200 logical tracks have 20,483 bytes each. The average access time is 8.6 ms, and the data transfer rate is 1,200,000 bytes per second. The 2301 attaches to a System/360 via a selector channel and an IBM 2820 Storage Control Unit, which can control up to four 2301 units.
IBM 2303
The IBM 2303 is a magnetic drum storage device with the same physical specifications as the IBM 2301. The difference is that the 2303 reads and writes one physical track at a time, rather than the four in the 2301, reducing the data transfer rate to 312,500 bytes per second. The 2303 attaches to System/360 through a channel and an IBM 2841 Storage Control Unit, which can attach up to two 2303 units.
See also
Drum memory: Drums used as main memory
References
History of computing hardware
7320
Computer storage devices | IBM drum storage | Technology | 824 |
1,368,015 | https://en.wikipedia.org/wiki/Collisional%20excitation | Collisional excitation is a process in which the kinetic energy of a collision partner is converted into the internal energy of a reactant species.
Astronomy
In astronomy, collisional excitation gives rise to spectral lines in the spectra of astronomical objects such as planetary nebulae and H II regions.
In these objects, most atoms are ionised by photons from hot stars embedded within the nebular gas, stripping away electrons. The emitted electrons (called photoelectrons) may collide with atoms or ions within the gas and excite them. When these excited atoms or ions revert to their ground state, they emit a photon. The spectral lines formed by these photons are called collisionally excited lines (often abbreviated to CELs).
CELs are only seen in gases at very low densities (typically less than a few thousand particles per cm³) for forbidden transitions. For allowed transitions, the gas density can be substantially higher. At higher densities, the reverse process of collisional de-excitation suppresses the lines. Even the hardest vacuum produced on Earth is still too dense for these CELs to be observed. For this reason, when CELs were first observed by William Huggins in the spectrum of the Cat's Eye Nebula, he did not know what they were, and attributed them to a hypothetical new element called nebulium. However, the lines he observed were later found to be emitted by extremely rarefied oxygen.
CELs are very important in the study of gaseous nebulae, because they can be used to determine the density and temperature of the gas.
Mass spectrometry
Collisional excitation in mass spectrometry is the process where an ion collides with an atom or molecule and leads to an increase in the internal energy of the ion. Molecular ions are accelerated to high kinetic energy and then collide with neutral gas molecules (e.g. helium, nitrogen or argon). In the collision some of the kinetic energy is converted into internal energy which results in fragmentation in a process known as collision-induced dissociation.
See also
Collision-induced absorption and emission
References
Astronomical spectroscopy
Mass spectrometry | Collisional excitation | Physics,Chemistry | 446 |
41,055,311 | https://en.wikipedia.org/wiki/Lejeunea%20hodgsoniana | Lejeunea hodgsoniana is a species of liverwort in the family Lejeuneaceae. Endemic to New Zealand, it was first recognized in 1980 but not formally described until 2013. The plant forms bright green mats up to in diameter on tree bark and occasionally on rocks. The species is found from the Kermadec Islands in the north to the Chatham Islands in the south, primarily in coastal and lowland areas below elevation. It is distinguished from related species by its relatively large size, multi-celled tooth on the leaf lobule, and deeply divided underleaves with pointed tips. While showing a particular affinity for mahoe trees (Melicytus species), it grows on various native and introduced trees and is considered "Not Threatened" under the New Zealand Threat Classification System due to its abundance within its range and ability to grow in both pristine and disturbed habitats.
Taxonomy
Lejeunea hodgsoniana was first recognised as a distinct species in 1980 by the German bryologist Riclef Grolle, who annotated several specimens at the Museum of New Zealand Te Papa Tongarewa (WELT) with this name. However, Grolle never formally published a description of the species. Over the following decades, botanists informally referred to these plants using the provisional name "Lejeunea 'hodgsoniana' Grolle ined." The species was finally formally described in 2013 by Rodney J. Lewington, Peter Beveridge, and Matt A.M. Renner, who maintained Grolle's chosen species epithet. The species name hodgsoniana honours the New Zealand botanist Amy Hodgson (1888–1983), who made contributions to the study of New Zealand liverworts through her numerous publications between 1941 and 1972. While Hodgson never worked directly with the Lejeuneaceae, she published important studies on several other liverwort genera including Schistochila, Heteroscyphus, and Radula.
Lejeunea hodgsoniana is one of fourteen Lejeunea species known from New Zealand, seven of which are considered endemic to the region. This relatively high rate of endemism in New Zealand's Lejeunea species reflects a pattern also seen in related genera, suggesting the existence of a distinct southern-temperate Australasian element within the predominantly tropical Lejeuneaceae. The species is morphologically distinctive among Australasian Lejeunea species, though it shares some characteristics with two Asian species, L. bidentula and L. kodamae, particularly in having a multi-celled tooth on the leaf lobule. However, Lejeunea hodgsoniana can be distinguished from both these species by its larger size, differently shaped underleaves, and other structural details.
Description
Lejeunea hodgsoniana is a small liverwort that forms bright green, circular or extensive mats on tree bark and occasionally on rocks. These mats can reach up to in diameter, though they may become larger when different patches grow together. The plant becomes grey-green when dried and preserved in herbarium collections. The main stems are 1.0–1.5 millimeters (mm) wide and about 12 mm long, with frequent branching. The branches grow in a flattened pattern, creating thin, spread-out layers of growth. The stem is composed of an outer layer of 7 protective cells surrounding approximately 12 rows of smaller inner cells.
The leaves are arranged in an overlapping pattern along the stem. Each leaf has two parts: a larger upper and a smaller lower lobe (called a lobule). The upper lobes are broadly egg-shaped and lie flat, giving the plant its characteristic flattened appearance. These lobes measure 0.75–0.95 mm long and 0.55–0.65 mm wide on main stems, with smaller sizes on branches. A distinctive feature of this species is the small, triangular tooth-like projection on the lower lobe, which is made up of multiple cells – unusual among related species.
The underleaves (modified leaves on the lower surface of the stem) are spaced apart from each other and oval-shaped, with two long, narrow lobes that typically end in a single pointed cell. These underleaves are attached to the stem by three cells and often produce clusters of root-like structures called rhizoids that help anchor the plant.
When reproducing, L. hodgsoniana produces both male and female reproductive structures on the same plant (making it autoicous). The male structures occur in small clusters, while the female structures develop into a protective flask-shaped covering (perianth) that houses the developing spore capsule. The perianth has five ridges or keels, with the upper ridge being less prominent than the others. When mature, the spore capsule splits into four parts to release light brown, elliptical spores that measure 32.5–37.5 by 17.5–20 micrometres.
The species can be distinguished from similar liverworts by its relatively large size, the distinctive multi-celled tooth on its lower leaf lobes, and its deeply divided underleaves with pointed tips.
Distribution and habitat
Lejeunea hodgsoniana is found throughout New Zealand, with a range extending from the Kermadec Islands in the north (29°S) to Pitt Island in the Chatham Islands in the south (44°S). It primarily occurs in coastal and lowland areas, typically at elevations below , though it has been found as high as above sea level in the Kermadec Islands. The species is particularly well-represented in the northern half of New Zealand's North Island, where it occurs on numerous offshore islands including the Poor Knights Islands, islands of the Hauraki Gulf (such as Little Barrier Island), and the Mercury Islands. On the mainland, it has been recorded from North Cape southward to Port Waikato and Hamilton. In the southern North Island, it is mainly found in coastal areas around Wellington, including the eco-sanctuary Zealandia. The species has only one known location in the South Island, at the base of Farewell Spit.
Lejeunea hodgsoniana grows primarily on tree bark in coastal forests and scrubland. It shows a particular affinity for mahoe trees (Melicytus species), being found on Melicytus ramiflorus throughout most of its range, M. chathamicus in the Chatham Islands, and M. aff. ramiflorus in the Kermadec Islands. However, it has been recorded growing on a wide variety of other native and introduced trees, including pūriri (Vitex lucens), karaka (Corynocarpus laevigatus), and even apple trees (Malus domestica).
While primarily found on tree bark, the species occasionally grows on shaded rocks, particularly in stream beds. It has been found on various rock types including serpentinite, basalt, and basaltic andesite. The species often grows alongside other small liverworts and mosses, forming mixed communities of bryophytes.
Conservation
Despite its relatively restricted geographic range, L. hodgsoniana is considered "Not Threatened" under the New Zealand Threat Classification System. This classification reflects its abundance within its range and its ability to grow in both pristine and disturbed habitats, including forest edges, riparian vegetation, successional forest, and floodplain scrub. The species can be found in both highly modified remnant forests and undisturbed stands of native vegetation.
References
Lejeuneaceae
Endemic flora of New Zealand
Plants described in 2013
Terrestrial biota of New Zealand
Epiphytes
Lithophytes | Lejeunea hodgsoniana | Biology | 1,555 |
53,505,342 | https://en.wikipedia.org/wiki/Victor%20Mbarika | Victor Mbarika is an American professor from Cameroon. He is currently the Stallings International Distinguished Scholar and MIS professor at East Carolina University within the University of North Carolina System, in Greenville, North Carolina, United States. He is the President, Board of Trustees of the ICT University.
Education
Mbarika earned his Master's degree in Management Information Systems (MIS) from The University of Illinois at Chicago in 1997, and a Ph.D. degree in MIS from Auburn University in 2000.
Career and research
Mbarika's research is focused on ICT implementation in Africa, and has provided a theoretically informed framework for understanding ICTs in less developed countries. His work provides a base from which to begin to understand the contextual differences that dictate information systems research in less advantaged environments. He is founding editor-in-chief of The African Journal of Information Systems and senior board member for several academic journals internationally.
He is also the founder of the International Center for Information Technology and Development (ICITD), East Carolina University, Greenville, which focuses on advancing IT training and development in Sub Saharan Africa especially on e-health, e-education and e-democracy. In 2016, he was among the first recipients of the Fulbright-MCMC research grants.
In 2020, Premium Times reported that a spokesperson for Abdullahi Umar Ganduje, governor of Kano State of Nigeria, claimed that Ganduje had received a letter from Mbarika indicating Ganduje's appointment as a visiting professor at East Carolina University. East Carolina University later released a statement confirming that Mbarika was part of its faculty but denying that the appointment had been made or that Mbarika was authorised to make such an appointment at the university.
Other initiatives
Other initiatives facilitated by him include the ICT for Africa conference series (ICT4Africa), the African Journal of Information Systems (AJIS), the ICT University Foundation, and Cameroon Youths for Jesus (CYJ). Through the ICT University Foundation, he has donated e-learning facilities to several universities in sub-Saharan Africa, aimed at advancing learning activities for economic and societal development.
Publications
Professor Mbarika has authored over 200 academic publications (books, book chapters, and journal articles).
Selected books
Ayo, C. K. and Mbarika, V. (Eds.). (2017). Sustainable ICT Adoption and Integration for Socio-Economic Development. IGI Global, Hershey, Pennsylvania, USA.
Mbarika, V. and Adebayo, A. P. (2015). Information and Communication Technology for Secondary Schools. AGWECAMS Publishers.
Kituyi G., Moya, M. and Mbarika, V. (2013). Computerized Accounting and Finance: Applications in Business. Makerere University Business School.
Hinson, R., Boateng, R. and Mbarika, V. (Eds.). (2009). Electronic Commerce and Customer Management in Ghana. Accra, Ghana: Pro Write Publishing.
Kizza, J.M., Muhirwe, J., Aisbett, J., Getao, K., Mbarika, V., Patel, D. and Rodrigues, A. J. (Eds.). (2007). Strengthening the Role of ICT in Development. Fountain Publishers: Kampala, Uganda.
Sankar, C.S., Mbarika, V., & Raju, P.K. (2006). Use of Information Technologies in Businesses and Society: Learning Through Real-World Case Studies. Anderson, SC: Tavenner Publishers.
Raju, P.K., Sankar, C. & Mbarika, V. (2005). POWERTEL Case Study: Coverage of a Larger Area versus Better Frequency Re-Use in Wireless Communications. Anderson, SC: Tavenner Publishers.
Mbarika, V. (2001). Africa’s Least Developed Countries’ Teledensity Problems and Strategies. Yaounde, Cameroon: ME & Agwecam Publishers.
Honors and awards
Mbarika is a recipient of three lifetime achievement awards, for his "Outstanding contribution to computer science and telecommunications" and his "Contribution to ICT Research and Education".
He received the African Achievers Award on July 14, 2023, in London, United Kingdom.
References
External links
ICITD
American scientists
20th-century American scientists
Auburn University alumni
Cameroonian scientists
Information systems researchers
Living people
Year of birth missing (living people) | Victor Mbarika | Technology | 897 |
17,000,438 | https://en.wikipedia.org/wiki/Font%20embedding | Font embedding is the inclusion of font files inside an electronic document for display across different platforms. Font embedding is controversial because it allows licensed fonts to be freely distributed.
History
Font embedding has been possible with Portable Document Format (PDF), Microsoft Word for Windows and some other applications for many years. LibreOffice supports font embedding since version 4.1 in its Writer, Calc and Impress applications.
In word processors
Microsoft Word for Windows has permitted font embedding in some document formats since Word 97 (such as .doc or .rtf). But this feature does not work correctly in some Word versions.
LibreOffice supports font embedding since version 4.1. This feature is available for LibreOffice Writer, the spreadsheet application LibreOffice Calc, and the presentation application LibreOffice Impress.
Both OpenOffice.org and LibreOffice support font embedding in the PDF export feature.
Font embedding in word processors is neither widely supported nor interoperable. For example, if a .rtf file made in Microsoft Word is opened in LibreOffice Writer, LibreOffice Writer will usually remove the embedded fonts.
On the Web
The browsers Internet Explorer, Firefox, Safari, Opera, and Google Chrome support automatic downloading of fonts used on a website, using CSS2 or CSS3.
Controversy
Font embedding is a controversial practice because it allows copyrighted fonts to be freely distributed. The controversy can be mitigated by only embedding the characters required to view the document (subsetting). This reduces file size but prohibits adding previously unused characters to the document.
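Subsetting of this kind can be done with standard font tooling; for example, a minimal sketch using the Python fontTools library (the file names and sample text here are hypothetical):

```python
from fontTools.ttLib import TTFont
from fontTools.subset import Subsetter, Options

# Load the full font and keep only the glyphs needed for the document text.
font = TTFont("FullFont.ttf")                    # hypothetical input file
subsetter = Subsetter(options=Options())
subsetter.populate(text="The quick brown fox")   # characters to retain
subsetter.subset(font)
font.save("SubsetFont.ttf")                      # smaller file for embedding
```

As noted above, a font subsetted this way cannot render characters that were left out, so text added to the document later may not display.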
Because of the potential for copyright infringement, Microsoft Internet Explorer only permits embedded fonts that include digital rights management (DRM) protections. The Acid3 test requires font embedding with minimal DRM protections.
See also
ODTTF as used in Microsoft's XML Paper Specification
References
Digital typography | Font embedding | Technology | 402 |
32,321 | https://en.wikipedia.org/wiki/Universal%20Networking%20Language | Universal Networking Language (UNL) is a declarative formal language specifically designed to represent semantic data extracted from natural language texts. It can be used as a pivot language in interlingual machine translation systems or as a knowledge representation language in information retrieval applications.
Structure
In UNL, the information conveyed by natural language is represented sentence by sentence as a hypergraph composed of a set of directed binary labeled links between nodes or hypernodes. As an example, the English sentence "The sky was blue?!" can be represented in UNL as follows:

aoj(blue(icl>color).@entry.@past.@exclamation.@interrogative, sky(icl>natural world).@def)

In the example above, sky(icl>natural world) and blue(icl>color), which represent individual concepts, are Universal Words (UWs); "aoj" (attribute of an object) is a directed binary semantic relation linking the two UWs; and "@def", "@interrogative", "@past", "@exclamation" and "@entry" are attributes modifying the UWs.
UWs are expressed in natural language to be humanly readable. They consist of a "headword" (the UW root) and a "constraint list" (the UW suffix between parentheses), where the constraints are used to disambiguate the general concept conveyed by the headword. The set of UWs is organized in the UNL Ontology.
Relations are intended to represent semantic links between words in every existing language. They can be ontological (such as "icl" and "iof"), logical (such as "and" and "or"), or thematic (such as "agt" = agent, "ins" = instrument, "tim" = time, "plc" = place, etc.). There are currently 46 relations in the UNL Specs that jointly define the UNL syntax.
Within the UNL program, the process of representing natural language sentences in UNL graphs is called UNLization, and the process of generating natural language sentences out of UNL graphs is called NLization. UNLization is intended to be carried out semi-automatically (i.e., by humans with computer aids), and NLization is intended to be carried out automatically.
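Because a UNL expression is a set of labeled binary links between attributed UWs, it maps naturally onto ordinary data structures. The following Python fragment is an illustrative encoding of the example sentence above, not official UNL tooling:

```python
# Each link: (relation, source UW with attributes, target UW with attributes).
# "aoj" relates an attribute (blue) to the object it describes (sky).
unl_graph = [
    ("aoj",
     {"uw": "blue(icl>color)",
      "attributes": ["@entry", "@past", "@exclamation", "@interrogative"]},
     {"uw": "sky(icl>natural world)",
      "attributes": ["@def"]}),
]

for relation, source, target in unl_graph:
    print(f"{relation}({source['uw']}, {target['uw']})")
# -> aoj(blue(icl>color), sky(icl>natural world))
```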
History
The UNL program started in 1996 as an initiative of the Institute of Advanced Studies (IAS) of the United Nations University (UNU) in Tokyo, Japan. In January 2001, the United Nations University set up an autonomous and non-profit organization, the UNDL Foundation, to be responsible for the development and management of the UNL program. It inherited from the UNU/IAS the mandate of implementing the UNL program.
The overall architecture of the UNL System has been developed with a set of basic software and tools.
The "industrial applicability" of the UNL was recognized under the Patent Cooperation Treaty (PCT) in May 2002 through the World Intellectual Property Organization (WIPO); the UNL acquired the US patents 6,704,700 and 7,107,206.
See also
Semantic network
Abstract semantic graph
Semantic translation
Semantic unification
Abstract Meaning Representation
External links
UNLweb, the UNLweb portal
UNDL Foundation where UNL development is coordinated.
Online book on UNL
UNL system description
UNL Society
UNL in Bangladesh
UNL in Brazil
UNL in Egypt
UNL in France
UNL in Germany
UNL in India
UNL in Italy
UNL in Japan
UNL in Jordan
UNL in Latvia
UNL in Russia, Russian⇔UNL⇔English converter
UNL in Spain
UNL in Thailand
Knowledge representation languages
Computational linguistics
Machine translation
Translation | Universal Networking Language | Technology | 735 |
27,310,834 | https://en.wikipedia.org/wiki/Shear%20wave%20splitting | Shear wave splitting, also called seismic birefringence, is the phenomenon that occurs when a polarized shear wave enters an anisotropic medium (Fig. 1). The incident shear wave splits into two polarized shear waves (Fig. 2). Shear wave splitting is typically used as a tool for testing the anisotropy of an area of interest. These measurements reflect the degree of anisotropy and lead to a better understanding of the area's crack density and orientation or crystal alignment.
We can think of the anisotropy of a particular area as a black box and the shear wave splitting measurements as a way of looking at what is in the box.
Introduction
An incident shear wave may enter an anisotropic medium from an isotropic medium by encountering a change in the preferred orientation or character of the medium. When a polarized shear wave enters the new, anisotropic medium, it splits into two shear waves (Fig. 2).
One of these shear waves will be faster than the other and oriented parallel to the cracks or crystals in the medium. The second wave will be slower than the first and sometimes orthogonal to both the first shear wave and the cracks or crystals in the medium. The time delays observed between the slow and fast shear waves give information about the density of cracks in the medium. The orientation of the fast shear wave records the direction of the cracks in the medium.
When plotted using polarization diagrams, the arrival of split shear waves can be identified by the abrupt changes in direction of the particle motion (Fig.3).
In a homogeneous material that is weakly anisotropic, the incident shear wave will split into two quasi-shear waves with approximately orthogonal polarizations that reach the receiver at approximately the same time. In the deeper crust and upper mantle, the high frequency shear waves split completely into two separate shear waves with different polarizations and a time delay between them that may be up to a few seconds.
History
Hess (1964) made the first measurements of P wave azimuthal velocity variations in oceanic basins. This area was chosen for this study because oceanic basins are made of large, relatively uniform homogeneous rocks. Hess observed, from previous seismic velocity experiments with olivine crystals, that if the crystals had even a slight statistical orientation this would be extremely evident in the seismic velocities recorded using seismic refraction. This concept was tested using seismic refraction profiles from the Mendocino fracture zone. Hess found that the slow compressional waves propagated perpendicular to the plane of slip and the higher velocity component was parallel to it. He inferred that the structure of oceanic basins could be recorded quickly and understood better if these techniques were used.
Ando (1980) focused on identifying shear-wave anisotropy in the upper mantle, studying shear wave splitting recorded near the Chubu volcanic area in Japan. Using newly implemented telemetric seismographic stations, the researchers were able to record both P wave and S wave arrivals from earthquakes up to 260 km beneath the volcanic area. The depths of these earthquakes make this area ideal for studying the structure of the upper mantle. They noted the arrivals of two distinct shear waves with different polarizations (N-S, fast; E-W, slow) approximately 0.7 seconds apart, and concluded that the splitting was not caused by the earthquake source but by the travel path of the waves on the way to the seismometers. Data from other nearby stations were used to constrain the source of the seismic anisotropy. Ando found the anisotropy to be consistent with the region directly below the volcanic area and hypothesized that it arose from oriented crystals in a deep-rooted magma chamber. If the magma chamber contained elliptical inclusions oriented approximately N-S, then the maximum velocity direction would also be N-S, accounting for the presence of seismic birefringence.
Crampin (1980) proposed the theory of earthquake prediction using shear wave splitting measurements. This theory is based on the fact that microcracks between the grains or crystals in rocks will open wider than normal at high stress levels. After the stress subsides, the microcracks will return to their original positions. This phenomenon of cracks opening and closing in response to changing stress conditions is called dilatancy. Because shear wave splitting signatures are dependent on both the orientation of the microcracks (perpendicular to the dominant stress direction) and the abundance of cracks, the signature will change over time to reflect the stress changes in the area. Once the signatures for an area are recognized, they may then be applied to predict nearby earthquakes with the same signatures.
Crampin (1981) first acknowledged the phenomenon of azimuthally-aligned shear wave splitting in the crust. He reviewed the then-current theory, updated equations to better understand shear-wave splitting, and presented a few new concepts. Crampin established that solutions to most anisotropic problems can be developed: if a corresponding solution for an isotropic case can be formulated, then the anisotropic case can be reached with further calculation. The correct identification of body- and surface-wave polarizations is the key to determining the degree of anisotropy. The modeling of many two-phase materials can be simplified by the use of anisotropic elastic constants, which can be estimated from recorded data. Such azimuthally-aligned splitting has been observed in several areas worldwide.
Physical mechanism
The difference in the travel velocities of the two shear waves can be explained by comparing their polarizations with the dominant direction of anisotropy in the area. The interactions between the tiny particles that make up solids and liquids can be used as an analogue for the way a wave travels through a medium. Solids have very tightly bound particles that transmit energy very quickly and efficiently. In a liquid, the particles are much less tightly bound and it generally takes a longer time for the energy to be transmitted. This is because the particles have further to travel to transfer the energy from one to another. If a shear wave is polarized parallel to the cracks in this anisotropic medium, then it may look like the dark blue wave in Figure 4. This wave is acting on the particles like energy being transferred through a solid. It will have a high velocity because of the proximity of the grains to each other. If there is a shear wave that is polarized perpendicular to the liquid-filled cracks or elongated olivine crystals present in the medium, then it would act upon these particles like those that make up a liquid or gas. The energy would be transferred more slowly through the medium and the velocity would be slower than the first shear wave.
The time delay between the shear wave arrivals depends on several factors, including the degree of anisotropy and the distance the waves travel to the recording station. A medium with wider, larger cracks will produce a longer time delay than a medium with small or even closed cracks. Shear wave splitting will continue to occur until the shear-wave velocity anisotropy reaches about 5.5%.
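As a rough illustration of how these factors combine (a minimal sketch in Python with assumed, not measured, values), the delay accumulated along a path can be estimated from the path length and the fast and slow shear velocities:

# Minimal sketch: estimate the splitting delay time from an assumed
# path length and an assumed degree of shear-wave velocity anisotropy.
v_mean = 3.5          # mean shear-wave speed, km/s (assumed)
anisotropy = 0.045    # 4.5% velocity anisotropy (assumed)
path = 10.0           # path length through the anisotropic layer, km (assumed)

v_fast = v_mean * (1 + anisotropy / 2)   # fast shear-wave speed
v_slow = v_mean * (1 - anisotropy / 2)   # slow shear-wave speed
delay = path / v_slow - path / v_fast    # delay time in seconds
print(f"delay time ~ {delay:.2f} s")     # ~0.13 s for these values

Longer paths or stronger anisotropy increase the delay proportionally, which is why the observed time delays carry information about crack density along the ray path.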
Mathematical explanation
Ray theory
The equation of motion in rectangular Cartesian coordinates can be written as

    \rho \frac{\partial^2 u_i}{\partial t^2} = \frac{\partial}{\partial x_j} \left( c_{ijkl} \frac{\partial u_k}{\partial x_l} \right)    (1)

where t is the time, \rho is the density, u_i is the i-th component of the displacement vector U, and c_{ijkl} represents the elastic tensor.
A wavefront can be described by the equation t = \tau(x_1, x_2, x_3). The solution to (1) can be expressed as a ray series

    u_i(x_j, t) = \sum_{n=0}^{\infty} A_i^{(n)}(x_j) \, F_n(t - \tau(x_j))    (2)

where the functions F_n satisfy the relation F_n'(\zeta) = F_{n-1}(\zeta).
Substituting (2) into (1) gives

    \sum_{n=0}^{\infty} \left[ F_{n-2}(t - \tau) \, N_i(A^{(n)}) - F_{n-1}(t - \tau) \, M_i(A^{(n)}) + F_n(t - \tau) \, L_i(A^{(n)}) \right] = 0    (3)

where the vector operators N, M and L are given by the formulas

    N_i(A) = c_{ijkl} \, \tau_{,j} \tau_{,l} \, A_k - \rho A_i
    M_i(A) = c_{ijkl} \, \tau_{,j} \, \partial A_k / \partial x_l + \partial ( c_{ijkl} \, \tau_{,l} \, A_k ) / \partial x_j
    L_i(A) = \partial ( c_{ijkl} \, \partial A_k / \partial x_l ) / \partial x_j

where \tau_{,j} denotes \partial \tau / \partial x_j.
For the lowest-order term n = 0, the amplitudes A^{(-1)} and A^{(-2)} are zero by convention, so only the first component of equation (3) is left. Thus,

    N_i(A^{(0)}) = ( c_{ijkl} \, \tau_{,j} \tau_{,l} - \rho \, \delta_{ik} ) \, A_k^{(0)} = 0.

To obtain the solution of this equation, the eigenvalues and eigenvectors of the Christoffel matrix \Gamma_{ik} = a_{ijkl} \, \tau_{,j} \tau_{,l}, with a_{ijkl} = c_{ijkl} / \rho, are needed. The characteristic equation det( \Gamma_{ik} - G \, \delta_{ik} ) = 0 can be rewritten as

    G^3 - P G^2 + Q G - R = 0

where the values P, Q and R are the invariants of the symmetric matrix \Gamma.
The matrix \Gamma has three eigenvectors g^{(1)}, g^{(2)}, g^{(3)}, which correspond to three eigenvalues G_1, G_2 and G_3.
For isotropic media, G_1 corresponds to the compressional wave and G_2 = G_3 corresponds to the two shear waves traveling together.
For anisotropic media, G_2 \neq G_3, which indicates that the two shear waves have split.
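The degeneracy of the two shear eigenvalues in the isotropic case can be checked numerically. The following minimal sketch (in Python, with assumed Lamé parameters rather than values for any real rock) builds the isotropic elastic tensor, forms the Christoffel matrix for a chosen propagation direction, and prints the three phase velocities:

import numpy as np

lam, mu, rho = 30e9, 25e9, 2700.0   # Lame parameters (Pa) and density (kg/m^3), assumed
n = np.array([0.0, 0.0, 1.0])       # unit propagation direction

d = np.eye(3)
# isotropic elastic tensor: c_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)
c = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# Christoffel matrix Gamma_ik = c_ijkl n_j n_l / rho
gamma = np.einsum('ijkl,j,l->ik', c, n, n) / rho

# eigenvalues are the squared phase velocities: beta^2, beta^2, alpha^2
print(np.sqrt(np.linalg.eigvalsh(gamma)))   # two equal shear speeds, one P speed

For an anisotropic elastic tensor the two smaller eigenvalues separate, which is the algebraic statement that the shear waves split.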
Measurement of shear wave splitting parameters
Modeling
In an isotropic homogeneous medium, the shear wave function can be written as

    u(t) = A \, w(t) \, \hat{g}

where A is the complex amplitude, w(t) is the wavelet function (the result of the Fourier transformed source time function), and \hat{g} is a real unit vector pointing in the displacement direction and contained in the plane orthogonal to the propagation direction.
The process of shear wave splitting can be represented as the application of the splitting operator \Gamma to the shear wave function,

    \Gamma = \hat{f}\hat{f} \, e^{-i \omega \, \delta t / 2} + \hat{s}\hat{s} \, e^{+i \omega \, \delta t / 2}

where \hat{f} and \hat{s} are eigenvectors of the polarization matrix with eigenvalues corresponding to the two shear wave velocities.
The resulting split waveform is

    u_{split}(t) = A \cos\varphi \, w(t - \delta t / 2) \, \hat{f} + A \sin\varphi \, w(t + \delta t / 2) \, \hat{s}

where \delta t is the time delay between the slow and fast shear waves and \varphi is the angle between the polarization of the incident shear wave and the polarization of the fast shear wave \hat{f}. These two parameters can be individually estimated from multiple component seismic recordings (Fig. 5).
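A minimal synthetic sketch of this model in Python (the wavelet, fast-axis angle and delay below are assumed for illustration, and this is not code from any published analysis package) projects the incident wavelet onto the fast and slow directions, delays the slow component, and rotates back to the recording frame:

import numpy as np

dt = 0.01                            # sample interval, s (assumed)
t = np.arange(0.0, 4.0, dt)
w = np.exp(-((t - 1.0) / 0.1) ** 2)  # assumed Gaussian wavelet

phi = np.deg2rad(30.0)   # angle between incident polarization and fast axis (assumed)
delay = 1.0              # delay time between fast and slow waves, s (assumed)
shift = int(round(delay / dt))

fast = np.cos(phi) * w                  # component polarized along the fast axis
slow = np.sin(phi) * np.roll(w, shift)  # slow component, arriving 'delay' seconds later
# (np.roll is acceptable here because the wavelet vanishes near the ends of the trace)

# rotate back to the original recording frame (e.g., north and east components)
north = np.cos(phi) * fast - np.sin(phi) * slow
east = np.sin(phi) * fast + np.cos(phi) * slow

Searching over trial values of the two parameters until the rotated, time-shifted components best linearize the particle motion is, in outline, how \varphi and \delta t are estimated from recordings.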
Schematic model
Figure 6 is a schematic animation showing the process of shear wave splitting and the seismic signature generated by the arrivals of two polarized shear waves at the surface recording station. There is one incident shear wave (blue) traveling vertically along the center grey axis through an isotropic medium (green). This single incident shear wave splits into two shear waves (orange and purple) upon entering the anisotropic media (red). The faster shear wave is oriented parallel to the cracks or crystals in the medium. The arrivals of the shear waves are shown on the right, as they appear at the recording station. The north–south polarized shear wave arrives first (purple) and the east–west polarized shear wave (orange) arrives about a second later.
Applications, justification, usefulness
Shear wave splitting measurements have been used to explore earthquake prediction, and to map fracture networks created by high pressure fracturing of reservoirs.
According to Crampin, shear wave splitting measurements can be used to monitor stress levels in the Earth. It is well known that rocks near an earthquake-prone zone will exhibit dilatancy. Shear wave splitting is produced by seismic waves traveling through a medium with oriented cracks or crystals. The changes in shear wave splitting measurements over the time leading up to an impending earthquake can be studied to give insight into the timing and location of the earthquake. These phenomena may be observed many hundreds of kilometers from the epicenter.
The petroleum industry uses shear-wave splitting measurements to map the fractures throughout a hydrocarbon reservoir. To date, this is the best method to gain in situ information about the fracture network present in a hydrocarbon reservoir. The best production in a field is associated with an area where there are multiple small fractures that are open, allowing for constant flow of the hydrocarbons. Shear-wave splitting measurements are recorded and analyzed to obtain the degree of anisotropy throughout the reservoir. The area with the largest degree of anisotropy will generally be the best place to drill because it will contain the largest number of open fractures.
Case examples
A successfully stress-forecast earthquake in Iceland
On October 27, 1998, during a four-year study of shear wave splitting in Iceland, Crampin and his coworkers recognized that time delays between split shear-waves were increasing at two seismic recording stations, BJA and SAU, in southwest Iceland. The following factors led the group to recognize this as a possible precursor to an earthquake:
The increase persisted for nearly 4 months.
It had approximately the same duration and slope as the increase observed before a previously recorded magnitude 5.1 earthquake in Iceland.
The time delay increase at station BJA escalated to approximately the level that had been inferred as the point of fracture before the previous earthquake.
These features suggested that the crust was approaching fracture criticality and that an earthquake was likely to occur in the near future.
Based on this information, an alert was sent to the Iceland Meteorological Office (IMO) on October 27 and 29, warning of an approaching earthquake. On November 10, the group sent another email specifying that an earthquake was likely to occur within the next 5 months. Three days later, on November 13, the IMO reported a magnitude 5 earthquake near the BJA station. Crampin et al. suggest that this was the first scientifically predicted earthquake, as opposed to a precursory or statistical prediction, and that it demonstrates that variations in shear-wave splitting can be used to forecast earthquakes.
This technique was not successful again until 2008 due to the lack of appropriate source-geophone-earthquake geometry needed to evaluate changes in shear wave splitting signatures and time delays.
Temporal changes before volcanic eruptions
Volti and Crampin observed temporal increases in Band-1 time-delays for 5 months at stations up to approximately 240 kilometers away, in directions N, SW and WSW, before the 1996 Gjálp eruption in the Vatnajökull ice field. This was the largest eruption in Iceland in several decades.
The pattern of increasing shear wave splitting time-delays is typical of the increase now seen before many earthquakes in Iceland and elsewhere. Before earthquakes, the time delays characteristically decrease immediately at the event, because the majority of the accumulated stress is released at that one time. In contrast, the increase in normalized time-delays before the eruption did not drop at the time of the eruption, but declined gradually over the following months. This decrease was approximately linear, and there appeared to be no other significant magmatic disturbances during the period following the eruption.
More observations are needed to confirm whether the increase and decrease time delay pattern is universal for all volcanic eruptions or if each area is different. It is possible that different types of eruptions show different shear wave splitting behaviors.
Fluid-injection in Petroleum Engineering
Bokelmann and Harjes reported the effects on shear waves of fluid injection at about 9 kilometers depth at the German Continental Deep Drilling Program (KTB) deep drilling site in southeast Germany. They observed shear-wave splitting from injection-induced events at a pilot well offset 190 meters from the KTB well. A borehole recorder at a depth of 4,000 meters was used to record the splitting measurements.
They found:
Temporal variations in shear-wave splitting occurred as a direct result of injection-induced events.
The initial ~1% shear wave splitting decreased by 2.5% over the 12 hours following the injection.
The largest decrease occurred within two hours after the injection.
The splitting time was very stable after the injection ceased.
No direct interpretation of the decrease is proposed but it is suggested that the decrease is associated with stress release by the induced events.
Limitations
Shear-wave splitting measurements can provide the most accurate and in depth information about a particular region. However, there are limits that need to be accounted for when recording or analyzing shear wave splitting measurements. These include the sensitive nature of shear waves, that shear wave splitting varies with incidence and azimuth, and that shear waves may split multiple times throughout an anisotropic medium, possibly every time the orientation changes.
Shear wave splitting is very sensitive to fine changes in the pore pressure in the Earth's crust. In order to successfully detect the degree of anisotropy in a region, there must be several arrivals that are well distributed in time. Too few events cannot detect the change, even if they come from similar waveforms.
Shear wave splitting varies with both incidence angle and propagation azimuth. Unless this data is viewed in polar projection, the 3-D nature is not reflected and may be misleading.
Shear wave splitting may be caused by more than just one layer that is anisotropic and located anywhere between the source and the receiver station. The shear wave splitting measurements have extensive lateral resolution but very poor vertical resolution. The polarizations of shear waves vary throughout the rock mass. Therefore, the observed polarizations may be those of the near surface structure and are not necessarily representative of the structure of interest.
Common misunderstandings
Due to the nature of split shear waves, when they are recorded in typical three-component seismograms, they write very complicated signatures. Polarizations and time delays are heavily scattered and vary greatly both in time and space. Because of the variation in signature, it is easy to misinterpret the arrivals and polarization of incoming shear waves. Below is an explanation of a few of the common misunderstandings associated with shear waves, further information can be found in Crampin and Peacock (2008).
Polarizations of split shear waves are orthogonal.
Shear waves that propagate along the ray path at a group velocity have polarizations that are only orthogonal in a few specific directions. Polarizations of body waves are orthogonal in all phase velocity directions, however this type of propagation is generally very difficult to observe or record.
Polarizations of split shear-waves are fixed, parallel to cracks, or normal to spreading centers.
Even when propagating through parallel cracks, or perpendicular to spreading centers, the polarizations of shear waves will always vary in three dimensions with incidence and azimuth within the shear wave window.
Crack anisotropy always decreases with depth as fluid filled cracks are closed by lithostatic pressure.
This statement only holds true if the fluid in the cracks is somehow removed. This may be accomplished via chemical absorption, drainage, or flow to the surface. However, these occur in relatively rare instances and there is evidence that supports the presence of fluids at depth. This includes data from the Kola deep well and the presence of high conductivity in the lower crust.
Signal-to-noise ratios of shear-wave splitting above small earthquakes can be improved by stacking.
Stacking seismic data from a reflection survey is useful because the data were collected with a predictable, controlled source. When the source is uncontrolled and unpredictable, stacking the data only degrades the signal. Because recorded shear wave time delays and polarizations vary with the incidence angle and azimuth of ray propagation, stacking these arrivals degrades the signal and decreases the signal-to-noise ratio, resulting in a plot that is noisy and hard to interpret at best.
Future trends
Our understanding of shear wave splitting and how to best use the measurements is constantly improving. As our knowledge improves in this area, there will invariably be better ways of recording and interpreting these measurements and more opportunities to use the data. Currently, it is being developed for use in the petroleum industry and for predicting earthquakes and volcanic eruptions.
Shear wave splitting measurements have been used successfully to predict several earthquakes. With better equipment and more densely spaced recording stations, we have been able to study the signature variations of shear wave splitting over earthquakes in different regions. These signatures change over time to reflect the amount of stress present in an area. After several earthquakes have been recorded and studied, the signatures of shear wave splitting just before an earthquake occurs become well known and this can be used to predict future events. This same phenomenon can be seen before a volcanic eruption and it is inferred that they may be predicted in the same manner.
The petroleum industry has been using shear wave splitting measurements recorded above hydrocarbon reservoirs to gain invaluable information about the reservoir for years. Equipment is constantly being updated to reveal new images and more information.
References
Further reading
External links
Alfred Wegener Institute for polar and Marine Research(AWI)(Germany)
Shear-wave splitting in Matlab(France)
A lot of interesting seismic images(ASU)
Information on Solids, Liquids, and Gasses
Shear-Wave-Splitting-in-Anisotropic-Media Collection (Goethe University Frankfurt, Germany)
MATLAB Code for demonstration
A MATLAB code for creating a demonstration movie can be downloaded from the MathWorks website.
Figure 7 is a screen shot of the Matlab Demo output.
Wave mechanics
Seismology
Polarization (waves) | Shear wave splitting | Physics | 4,034 |
16,020,896 | https://en.wikipedia.org/wiki/HD%2029697 | HD 29697 (Gliese 174, V834 Tauri) is a variable star of BY Draconis type in the constellation Taurus. It has an apparent magnitude around 8 and is approximately 43 ly away.
Description
HD 29697 is the Henry Draper Catalogue number of this star. It is also known by its designation in the Gliese Catalogue of Nearby Stars, Gliese 174, and its variable star designation V834 Tauri.
V834 Tauri is a BY Draconis variable with maximum and minimum apparent magnitudes of 7.94 and 8.33 respectively, so it is never visible to the naked eye.
The star has been examined for indications of a circumstellar disk using the Spitzer Space Telescope, but no statistically-significant infrared excess was detected.
References
BY Draconis variables
Taurus (constellation)
Tauri, V834
029697
021818
Gliese and GJ objects
Durchmusterung objects
K-type main-sequence stars | HD 29697 | Astronomy | 210 |
28,120,225 | https://en.wikipedia.org/wiki/University%20of%20Minnesota%20Supercomputing%20Institute | The Minnesota Supercomputing Institute (MSI) in Minneapolis, Minnesota is a core research facility of the University of Minnesota that provides hardware and software resources, as well as technical user support, to faculty and researchers at the university and at other institutions of higher education in Minnesota. MSI is located in Walter Library, on the university's Twin Cities campus.
History
In 1981, the University of Minnesota became the first U.S. university to acquire a supercomputer, a Cray-1. The Minnesota Supercomputing Institute was created in 1984 to provide high-performance computing resources to the University of Minnesota's research community. MSI currently has one HPC cluster, Agate, available for use.
MSI is part of Research Computing in the Research and Innovation Office. Research Computing is an umbrella organization that comprises the Minnesota Supercomputing Institute, U-Spatial, the Data Science Initiative, and the International Institute for Biosensing.
Memberships
MSI is a member of the Coalition for Academic Scientific Computation, the Minnesota High Tech Association, the Great Lakes Consortium, and the Extreme Science and Engineering Discovery Environment (XSEDE).
Supercomputing capabilities
HPC resources
Agate - HPE cluster with HPE and AMD CPU nodes and NVidia GPU nodes
References
Moore, Rick. "Blade Runner : UMNews." University of Minnesota. Web. 29 July 2010. http://www1.umn.edu/news/features/2009/UR_CONTENT_148391.html
Vance, Ashlee. "Minnesota’s Enormous Apples Computer - Bits Blog - NYTimes.com." Technology - Bits Blog - NYTimes.com. Web. 29 July 2010. http://bits.blogs.nytimes.com/2009/12/10/minnesotas-enormous-apples-computer/?smid=pl-share
University of Minnesota
Supercomputers
University and college laboratories in the United States
Computer science institutes in the United States
Research institutes in Minnesota | University of Minnesota Supercomputing Institute | Technology | 420 |
32,086,877 | https://en.wikipedia.org/wiki/Novel%20ecosystem | Novel ecosystems are human-built, modified, or engineered niches of the Anthropocene. They exist in places that have been altered in structure and function by human agency. Novel ecosystems are part of the human environment and niche (including urban, suburban, and rural), they lack natural analogs, and they have extended an influence that has converted more than three-quarters of wild Earth. These anthropogenic biomes include technoecosystems that are fuelled by powerful energy sources (fossil and nuclear), including ecosystems populated with technodiversity, such as roads and unique combinations of soils called technosols. Vegetation associations on old buildings or along field-boundary stone walls in old agricultural landscapes are examples of sites where research into novel ecosystem ecology is developing.
Overview
Human society has transformed the planet to such an extent that we may have ushered in a new epoch known as the anthropocene. The ecological niche of the anthropocene contains entirely novel ecosystems that include technosols, technodiversity, anthromes, and the technosphere. These terms describe the human ecological phenomena marking this unique turn in the evolution of Earth's history. The total human ecosystem (or anthrome) describes the relationship of the industrial technosphere to the ecosphere.
Technoecosystems interface with natural life-supporting ecosystems in competitive and parasitic ways. Odum (2001) attributes this term to a 1982 publication by Zev Naveh: "Current urban-industrial society not only impacts natural life-support ecosystems, but also has created entirely new arrangements that we can call techno-ecosystems, a term believed to be first suggested by Zev Neveh (1982). These new systems involve new, powerful energy sources (fossil and atomic fuels), technology, money, and cities that have little or no parallels in nature." The term technoecosystem, however, appears earliest in print in a 1976 technical report, and also in a book chapter written by Kenneth E. Boulding (in Lamberton and Thomas (1982)).
Novel Ecosystems
Novel ecosystems "differ in composition and/or function from present and past systems". Novel ecosystems are the hallmark of the recently proposed anthropocene epoch. They have no natural analogs due to human alterations on global climate systems, invasive species, a global mass extinction, and disruption of the global nitrogen cycle. Novel ecosystems are creating many different kinds of dilemmas for terrestrial and marine conservation biologists. On a more local scale, abandoned lots, agricultural land, old buildings, field boundary stone walls or residential gardens provide study sites on the history and dynamics of ecology in novel ecosystems.
Anthropogenic biomes
Ellis (2008) identifies twenty-one different kinds of anthropogenic biomes that sort into the following groups: 1) dense settlements, 2) villages, 3) croplands, 4) rangeland, 5) forested, and 6) wildlands. These anthropogenic biomes (or anthromes for short) create the technosphere that surrounds us and are populated with diverse technologies (or technodiversity for short). Within these anthromes the human species (one species out of billions) appropriates 23.8% of the global net primary production. "This is a remarkable impact on the biosphere caused by just one species."
Noosphere
Noosphere (sometimes noösphere) is the "sphere of human thought". The word is derived from the Greek νοῦς (nous, "mind") + σφαῖρα (sphaira, "sphere"), in lexical analogy to "atmosphere" and "biosphere". It was introduced by Pierre Teilhard de Chardin in 1922 in his Cosmogenesis. Another possibility is that the term was first used by Édouard Le Roy, who together with Teilhard de Chardin was listening to lectures of Vladimir Vernadsky at the Sorbonne. In 1936, Vernadsky presented the idea of the noosphere in a letter to Boris Leonidovich Lichkov (though he stated that the concept derived from Le Roy).
Technosphere
The technosphere is the part of the environment on Earth where technodiversity extends its influence into the biosphere. "For the development of suitable restoration strategies, a clear distinction has to be made between different functional classes of natural and cultural solar-powered biosphere and fossil-powered technosphere landscapes, according to their inputs and throughputs of energy and materials, their organisms, their control by natural or human information, their internal self-organization and their regenerative capacities." The weight of Earth's technosphere has been suggested to be 30 trillion tons, a mass greater than 50 kilos for every square metre of the planet's surface.
Technoecosystems
The concept of technoecosystems has been pioneered by ecologists Howard T. Odum and Zev Naveh. Technoecosystems interfere with and compete against natural systems. They have advanced technology (or technodiversity) money-based market economies and have a large ecological footprints. Technoecosystems have far greater energy requirements than natural ecosystems, excessive water consumption, and release toxic and eutrophicating chemicals. Other ecologists have defined the extensive global network of road systems as a type of technoecosystem.
Technoecotypes
"Bio-agro- and techno-ecotopes are spatially integrated in larger, regional landscape units, but they are not structurally and functionally integrated in the ecosphere. Because of the adverse impacts of the latter and the great human pressures on bio-ecotopes, they are even antagonistically related and therefore cannot function together as a coherent, sustainable ecological system."
Technosols
Technosols are a new form of ground group in the World Reference Base for Soil Resources (WRB). Technosols are "mainly characterised by anthropogenic parent material of organic and mineral nature and which origin can be either natural or technogenic."
Technodiversity
Technodiversity refers to the varied diversity of technological artifacts that exist in technoecosystems.
References
Ecology
Systems ecology
Ecosystems | Novel ecosystem | Biology,Environmental_science | 1,263 |
7,272,384 | https://en.wikipedia.org/wiki/Neocallimastigomycota | Neocallimastigomycota is a phylum containing anaerobic fungi, which are symbionts found in the digestive tracts of larger herbivores. Anaerobic fungi were originally placed within phylum Chytridiomycota, within Order Neocallimastigales but later raised to phylum level, a decision upheld by later phylogenetic reconstructions. It encompasses only one family.
Discovery
The fungi in Neocallimastigomycota were first recognised as fungi by Orpin in 1975, based on motile cells present in the rumen of sheep. Their zoospores had been observed much earlier and were believed to be flagellate protists, until Orpin demonstrated that they possessed a chitin cell wall. It has since been shown that they are fungi related to the core chytrids. Prior to this, the microbial population of the rumen was believed to consist only of bacteria and protozoa. Since their discovery they have been isolated from the digestive tracts of over 50 herbivores, including ruminant and non-ruminant (hindgut-fermenting) mammals and herbivorous reptiles.
Neocallimastigomycota have also been found in humans.
Circumscription
Reproduction and growth
These fungi reproduce in the rumen of ruminants through the formation of zoospores which are released from sporangia. These zoospores bear a kinetosome but lack the nonflagellated centriole known in most chytrids, and have been known to utilize horizontal gene transfer in their development of xylanase (from bacteria) and other glucanases.
The nuclear envelopes of their cells are notable for remaining intact throughout mitosis. Sexual reproduction has not been observed in anaerobic fungi. However, they are known to be able to survive for many months in aerobic environments, a factor which is important in the colonisation of new hosts. In Anaeromyces, the presence of putative resting spores has been observed but the way in which these are formed and germinate remains unknown.
Metabolism
Neocallimastigomycota lack mitochondria but instead contain hydrogenosomes, in which the oxidation of NADH to NAD+ takes place, leading to the formation of H2.
Polysaccharide-degrading activity
Neocallimastigomycota play an essential role in fibre-digestion in their host species. They are present in large numbers in the digestive tracts of animals fed on high-fibre diets. The polysaccharide-degrading enzymes produced by anaerobic fungi can hydrolyse the most recalcitrant plant polymers and can degrade unlignified plant cell walls entirely. Orpinomyces sp. has exhibited production of xylanase, CMCase, lichenase, amylase, β-xylosidase, β-glucosidase and α-L-arabinofuranosidase, along with minor amounts of β-cellobiosidase, when utilizing Avicel as the sole energy source. The polysaccharide-degrading enzymes are organised into a multiprotein complex, similar to the bacterial cellulosome.
Spelling of name
The Greek termination, "-mastix", referring to "whips", i.e. the many flagella on these fungi, is changed to "-mastig-" when combined with additional terminations in Latinized names. The family name Neocallimastigaceae was originally incorrectly published as "Neocallimasticaceae" by the publishing authors which led to the coinage of the misspelled, hence incorrect "Neocallimasticales", an easily forgiven error considering that other "-ix" endings such as Salix goes to Salicaceae. Correction of these names is mandated by the International Code of Botanical Nomenclature, Art. 60. The corrected spelling is used by Index Fungorum. Both spellings occur in the literature and on the WWW as a result of the spelling in the original publication.
References
External links
The Anaerobic Fungi Network
Fungus phyla
Fungi by classification | Neocallimastigomycota | Biology | 860 |
60,806,961 | https://en.wikipedia.org/wiki/ThinkBook | ThinkBook is a line of business-oriented laptop computers and tablets designed, developed and marketed by Lenovo aimed at small businesses.
The ThinkBook line is marketed towards small business users and occupies roughly the same market position as Lenovo's ThinkPad E series. The ThinkBook lacks a TrackPoint and physical touchpad buttons, and has a simplified keyboard layout. However, the ThinkBook has an aluminum case, whereas the ThinkPad E series uses a plastic one.
13s and 14s
The first product lineup launched in 2019 with the ThinkBook 13s and 14s. Both laptops include TPM 2.0 security chips, fingerprint readers, webcam shutters similar to those on ThinkPads, and dedicated buttons for Skype. They support 8th Generation Intel Core processors, AMD Radeon 540X graphics, M.2 SSD storage, USB-C Docks, and run Windows 10 Pro. The ThinkBook 13s has a 13-inch screen and the 14s has a 14-inch screen.
See also
Lenovo IdeaPad
IBM/Lenovo ThinkCentre
IBM/Lenovo ThinkPad
HP ProBook
Dell Vostro
References
External links
Official Lenovo ThinkBook website
Think
Consumer electronics brands
Computer-related introductions in 2019
Business laptops | ThinkBook | Technology | 258 |
66,662,738 | https://en.wikipedia.org/wiki/Gloeophyllum%20protractum | Gloeophyllum protractum is a species of fungus belonging to the family Gloeophyllaceae.
It is native to Eurasia and Northern America.
References
Gloeophyllales
Fungus species | Gloeophyllum protractum | Biology | 44 |
545,904 | https://en.wikipedia.org/wiki/Alnilam | Alnilam is the central star of Orion's Belt in the equatorial constellation of Orion. It has the Bayer designation ε Orionis, which is Latinised to Epsilon Orionis and abbreviated Epsilon Ori or ε Ori. This is a massive, blue supergiant star some 1,200 light-years distant. It is estimated to be 419,600 times as luminous as the Sun, and 40 times as massive.
Observation
It is the 29th-brightest star in the sky (the fourth brightest in Orion) and is a blue supergiant. Together with Mintaka and Alnitak, the three stars make up Orion's Belt, known by many names across many ancient cultures. Alnilam is the middle star.
Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified, for the spectral class B0Ia. Although the spectrum shows variations, particularly in the H-alpha absorption lines, this is considered typical for this type of luminous hot supergiant. It is also one of the 58 stars used in celestial navigation. It is at its highest point in the sky around midnight on December 15.
It is slightly variable from magnitude 1.64 to 1.74, with no clear period, and it is classified as an α Cygni variable. Its spectrum also varies, possibly due to unpredictable changes in mass loss from the surface.
Physical characteristics
Estimates of Alnilam's properties vary. Searle and colleagues, using the CMFGEN code to analyse the spectrum in 2008, calculated an effective temperature of 27,500 ± 100 K, together with estimates of the luminosity and radius. Analysis of the spectra and ages of the members of the Orion OB1 association yields a mass 34.6 times that of the Sun on the main sequence and an age of 5.7 million years. A more recent detailed analysis of Alnilam across multiple wavelength bands produced very high luminosity, radius, and mass estimates, assuming the distance of 606 parsecs suggested by the Hipparcos new reduction; the luminosity and mass derived at that distance are the highest ever obtained for this star. Adopting the larger parallax from the original Hipparcos reduction gives a distance of 412 parsecs and physical parameters more consistent with earlier publications. Using precalculated models, a 2020 study found smaller values for the luminosity, radius, and mass. A spectroscopic distance modulus of 7.79 implies a distance of 361 parsecs.
Alnilam's relatively simple spectrum has made it useful for studying the interstellar medium. Within the next million years, this star may turn into a Wolf-Rayet star and explode as a supernova. Alnilam's high mass means that due to high mass loss, it will not become a red supergiant star, and will likely leave behind a black hole instead of a neutron star. It is surrounded by a molecular cloud, NGC 1990, which it illuminates to make a reflection nebula. Its stellar winds may reach up to 2,000 km/s, causing it to lose mass about 20 million times more rapidly than the Sun.
Nomenclature and history
ε Orionis is the star's Bayer designation and 46 Orionis its Flamsteed designation.
The traditional name Alnilam derives from the Arabic النظام al-niẓām 'arrangement/string (of pearls)'. Related spellings are Alnihan and Alnitam: all three variants are evidently mistakes in transliteration or copy errors, the first perhaps due to confusion with النيلم al-nilam 'the sapphire'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Alnilam for this star. It is now so entered in the IAU Catalog of Star Names.
Orion's Belt
The three belt stars were collectively known by many names in many cultures. Arabic terms include Al Nijād ('the Belt'), Al Nasak ('the Line'), Al Alkāt ('the Golden Grains or Nuts') and, in modern Arabic, Al Mīzān al H•akk ('the Accurate Scale Beam'). In Chinese mythology, they were also known as the Weighing Beam.
In Chinese, a name meaning Three Stars refers to an asterism consisting of Alnilam, Alnitak and Mintaka (Orion's Belt), with Betelgeuse, Bellatrix, Saiph and Rigel later added; the Chinese name for Alnilam derives from this asterism. It is one of the western mansions of the White Tiger.
See also
List of most massive stars
Notes
References
External links
B-type supergiants
Alpha Cygni variables
Orion (constellation)
Orionis, Epsilon
1903
BD-01 0969
Orionis, 46
037128
026311
Alnilam | Alnilam | Astronomy | 1,072 |
17,280,954 | https://en.wikipedia.org/wiki/Lyons%20Ferry%20State%20Park | Lyons Ferry State Park is a public recreation area located near the confluence of the Snake and Palouse rivers, northwest of Starbuck, Washington. The state park is on Route 261, abreast of Lake Herbert G. West, a reservoir on the Snake River created in the 1960s with the construction of the Lower Monumental Dam downstream. The park offers facilities for boating, fishing, and swimming. The area is managed cooperatively by the Washington State Parks and Recreation Commission and the U.S. Army Corps of Engineers, which operates the Lyons Ferry Marina.
History
The park bears the name of the Snake River ferry service, which ceased operations in 1968 after more than 100 years of service when it was replaced with the Snake River Bridge.
The U.S. Army Corps of Engineers began park construction in 1969, then leased the site to the state in 1971. It operated as a state park from 1971 until 2002, when the lease was relinquished by the state due to budget constraints. The Army Corps of Engineers operated the property as Lyons Ferry Park and Lyons Ferry Marina until 2015, when it returned to Washington State Park status with the signing of a new lease.
References
External links
Lyons Ferry State Park Washington State Parks and Recreation Commission
State parks of Washington (state)
Parks in Franklin County, Washington
United States Army Corps of Engineers
Protected areas established in 1971 | Lyons Ferry State Park | Engineering | 269 |
54,247,342 | https://en.wikipedia.org/wiki/Zeldovich%20mechanism | Zel'dovich mechanism is a chemical mechanism that describes the oxidation of nitrogen and NOx formation, first proposed by the Russian physicist Yakov Borisovich Zel'dovich in 1946. The reaction mechanisms read as
N2 + O ⇌ NO + N    (rate constant k_1)
N + O2 ⇌ NO + O    (rate constant k_2)
where and are the reaction rate constants in Arrhenius law. The overall global reaction is given by
N2 + O2 ⇌ 2 NO    (rate constant k)
The overall reaction rate is mostly governed by the first reaction (i.e., rate-determining reaction), since the second reaction is much faster than the first reaction and occurs immediately following the first reaction. At fuel-rich conditions, due to lack of oxygen, reaction 2 becomes weak, hence, a third reaction is included in the mechanism, also known as extended Zel'dovich mechanism (with all three reactions),
N + OH ⇌ NO + H    (rate constant k_3)
Assuming the initial concentration of NO is low, so that the reverse reactions can be ignored, the forward rate constants of the reactions take the Arrhenius form

    k_i = A_i \, T^{b_i} \exp(-E_{a,i} / (R T))

where the pre-exponential factor A_i is measured in units of cm, mol, s and K, the temperature T is in kelvins, and the activation energy E_{a,i} is in cal/mol; R is the universal gas constant.
NO formation
The rate of increase of the NO concentration, neglecting reverse reactions, is given by

    d[NO]/dt = k_1 [O][N2] + k_2 [N][O2] + k_3 [N][OH]
N formation
Similarly, the rate of increase of the atomic nitrogen concentration is

    d[N]/dt = k_1 [O][N2] − k_2 [N][O2] − k_3 [N][OH]
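The following sketch (in Python) evaluates the NO formation rate under the common steady-state approximation for atomic nitrogen, d[N]/dt ≈ 0; the Arrhenius fits used for the rate constants are assumed, illustrative values, not authoritative data:

import math

# Assumed illustrative forward rate constants, cm^3/(mol*s); T in kelvins.
def k1(T): return 1.8e14 * math.exp(-38370.0 / T)
def k2(T): return 1.8e10 * T * math.exp(-4680.0 / T)
def k3(T): return 7.1e13 * math.exp(-450.0 / T)

def dNO_dt(T, N2, O2, O, OH):
    """Initial NO formation rate, mol/(cm^3*s), reverse reactions neglected."""
    # steady state for atomic N: k1[O][N2] = k2[N][O2] + k3[N][OH]
    N = k1(T) * O * N2 / (k2(T) * O2 + k3(T) * OH)
    # substituting back, the total rate reduces to 2*k1[O][N2]
    return k1(T) * O * N2 + k2(T) * N * O2 + k3(T) * N * OH

The strong temperature sensitivity of k_1 (an activation temperature near 38,000 K) is why thermal NO forms appreciably only in the hottest regions of a flame.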
See also
Zeldovich–Liñán model
References
Combustion
Reaction mechanisms
Chemical reactions
Chemical kinetics
Pollutants | Zeldovich mechanism | Chemistry | 355 |
49,031,124 | https://en.wikipedia.org/wiki/Nitrokey | Nitrokey is an open-source USB key used to enable the secure encryption and signing of data. The secret keys are always stored inside the Nitrokey which protects against malware (such as computer viruses) and attackers. A user-chosen PIN and a tamper-proof smart card protect the Nitrokey in case of loss and theft. The hardware and software of Nitrokey are open-source. The free software and open hardware enables independent parties to verify the security of the device. Nitrokey is supported on Microsoft Windows, macOS, Linux, and BSD.
History
In 2008 Jan Suhr, Rudolf Böddeker, and another friend were travelling and found themselves wanting to use encrypted email in internet cafés, which meant the secret keys had to remain secure against computer viruses. Some proprietary USB dongles existed at the time but were lacking in certain respects. Consequently, in August 2008 they established an open-source project, Crypto Stick, which grew to become Nitrokey. It was a spare-time project of the founders to develop a hardware solution to enable the secure usage of email encryption. The first version of the Crypto Stick was released on 27 December 2009. In late 2014, the founders decided to professionalize the project, which was renamed Nitrokey. Nitrokey's firmware was audited by German cybersecurity firm Cure53 in May 2015, and its hardware was audited by the same company in August 2015. The first four Nitrokey models became available on 18 September 2015.
Technical features
Several Nitrokey models exist which each support different standards. For reference S/MIME is an email encryption standard popular with businesses while OpenPGP can be used to encrypt emails and also certificates used to login to servers with OpenVPN or OpenSSH. One-time passwords are similar to TANs and used as a secondary security measure in addition to ordinary passwords. Nitrokey supports the HMAC-based One-time Password Algorithm (HOTP, RFC 4226) and Time-based One-time Password Algorithm (TOTP, RFC 6238), which are compatible with Google Authenticator.
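As an illustration of the one-time password algorithms involved, the following minimal Python sketch implements HOTP (RFC 4226) and TOTP (RFC 6238) directly from the RFCs; it is not code from Nitrokey's firmware or tooling:

import hmac, hashlib, struct, time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA-1 over the 8-byte big-endian counter (RFC 4226, section 5)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, period: int = 30, digits: int = 6) -> str:
    # TOTP (RFC 6238) is HOTP applied to the current Unix time step
    return hotp(key, int(time.time()) // period, digits)

A device like the Nitrokey performs this computation internally, so the shared secret key never has to reside on the host computer.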
The Nitrokey Storage product has the same features as the Nitrokey Pro 2 and additionally contains an encrypted mass storage.
Characteristics
Nitrokey's devices store secret keys internally. As with earlier technologies including the trusted platform module they are not readable on demand. This reduces the likelihood of a private key being accidentally leaked which is a risk with software-based public key cryptography. The keys stored in this way are also not known to the manufacturer. Supported algorithms include AES-256 and RSA with key lengths of up to 2048 bits or 4096 bits depending on the model.
For accounts that accept Nitrokey credentials, a user-chosen PIN can be used to protect these against unauthorized access in case of loss or theft. However, loss of or damage to a Nitrokey (which is designed to last for 5-10 years) can also prevent the key's owner from being able to access his or her accounts. To guard against this, it is possible to generate keys in software so that they may be securely backed up to the best of the user's ability before they undergo a one-way transfer to the secure storage of a Nitrokey.
Nitrokey is published as open source software and free software which ensures a wide range of cross platform support including Microsoft Windows, macOS, Linux, and BSD. It is designed to be usable with popular software such as Microsoft Outlook, Mozilla Thunderbird, and OpenSSH. It is also open hardware to enable independent reviews of the source code and hardware layout and to ensure the absence of back doors and other security flaws.
Philosophy
Nitrokey's developers believe that proprietary systems cannot provide strong security and that security systems need to be open source. For instance, there have been cases in which the NSA intercepted security devices being shipped and implanted backdoors into them. In 2011, RSA was hacked and the secret keys of SecurID tokens were stolen, which allowed hackers to circumvent their authentication. As revealed in 2010, many FIPS 140-2 Level 2 certified USB storage devices from various manufacturers could easily be cracked by using a default password. Nitrokey, because it is open source and transparent, aims to provide a highly secure system and to avoid the security issues its proprietary rivals face. Nitrokey's mission is to provide the best open source security key to protect the digital lives of its users.
References
External links
Authentication methods
Computer access control
Open hardware organizations and companies
Open hardware electronic devices
Open-source hardware
Cryptographic hardware | Nitrokey | Engineering | 968 |
19,465,059 | https://en.wikipedia.org/wiki/Potassium%20dideuterium%20phosphate | Deuterated potassium dihydrogen phosphate (KD2PO4) or DKDP single crystals are widely used in non-linear optics as the second, third and fourth harmonic generators for Nd:YAG and Nd:YLF lasers. They are also found in electro-optical applications as Q-switches for Nd:YAG, Nd:YLF, alexandrite and Ti-sapphire lasers, as well as for Pockels cells.
DKDP is monopotassium phosphate (KDP, KH2PO4) in which the hydrogen has been replaced by deuterium. The replacement lowers the frequency of the O–H (becoming O–D) vibrations and their overtones (high-order harmonics). Absorption of light by those overtones is detrimental for the infrared lasers that DKDP and KDP crystals are used with. Consequently, despite its higher cost, DKDP is more popular than KDP.
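The size of the shift follows from the harmonic-oscillator relation ω ∝ (k/μ)^(1/2): deuteration nearly doubles the reduced mass of the O–H oscillator while leaving the force constant essentially unchanged. A quick sketch of the arithmetic (in Python, with idealized atomic masses):

# Frequency ratio of O-D to O-H stretching vibrations from reduced masses,
# assuming an unchanged force constant (idealized atomic masses in u).
m_O, m_H, m_D = 16.0, 1.0, 2.0
mu_OH = m_O * m_H / (m_O + m_H)   # ~0.94 u
mu_OD = m_O * m_D / (m_O + m_D)   # ~1.78 u
ratio = (mu_OH / mu_OD) ** 0.5
print(f"omega_OD / omega_OH ~ {ratio:.2f}")   # ~0.73

Shifting the fundamental down by roughly a quarter moves the absorbing overtones away from the near-infrared wavelengths of Nd-based lasers, which is why deuteration reduces absorption.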
DKDP crystals are grown by a water-solution method at usual level of deuteration >98%.
See also
Beta barium borate (BBO) – another popular non-linear crystal
Lithium triborate (LBO) – another popular non-linear crystal
Monopotassium phosphate (KDP) – another popular non-linear crystal
Non-linear optics
Potassium titanyl phosphate (KTP) – another popular non-linear crystal
Second-harmonic generation (SHG)
Third-harmonic generation (THG)
Two-photon absorption (TPA)
Organic nonlinear optical materials
References
Nonlinear optical materials
Phosphates
Potassium compounds
Deuterated compounds | Potassium dideuterium phosphate | Chemistry | 330 |
37,025,907 | https://en.wikipedia.org/wiki/Zeta%20Eridani | Zeta Eridani (ζ Eridani, abbreviated Zeta Eri, ζ Eri) is a binary star in the constellation of Eridanus. With an apparent visual magnitude of 4.80, it is visible to the naked eye on a clear dark night. Based on parallax measurements taken during the Hipparcos mission, it is approximately 110 light-years from the Sun.
Zeta Eridani is the primary or 'A' component of a multiple star system designated WDS J03158-0849 (the secondary or 'B' component is 14 Eridani). Zeta Eridani's two components are therefore designated WDS J03158-0849 Aa and Ab. Aa is formally named Zibal , the traditional name for the system.
Nomenclature
ζ Eridani (Latinised to Zeta Eridani) is the binary star's Bayer designation. WDS J03158-0849 A is its designation in the Washington Double Star Catalog. The designations of the two components as WDS J03158-0849 Aa and Ab derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Zeta Eridani bore the traditional name of Zibal. This is an old misreading of the Arabic رئل riʼal "ostrich chicks" (with the carrier letter for the glottal stop taken for a 'b', and ر 'r' taken for ز 'z'), originally applied to a number of stars near Beid and Keid.
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Zibal for the component WDS J03158-0849 Aa on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
Properties
Zeta Eridani is a single-lined spectroscopic binary system with an orbital period of 17.9 days and an eccentricity of 0.14. The primary is a mild Am star with a stellar classification of kA4hA9mA9V. This notation indicates this is a main-sequence star with the Ca-II K absorption line strength (k) of an A4 star, and the hydrogen lines (h) and metallic lines (m) of an A9 star. It has about 185% of the Sun's mass and 10.3 times the Sun's radius. This is a relatively young star with an estimated age of 800 million years, and it appears to have a moderately high rotation rate with a projected rotational velocity of 82 km/s.
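For a sense of scale, Kepler's third law gives the size of such a 17.9-day orbit; the total system mass assumed in the Python sketch below (about 2 solar masses, the primary plus a low-mass companion) is an illustrative guess, not a published value:

# Semi-major axis from Kepler's third law: a^3 = M * P^2,
# with a in AU, M in solar masses and P in years.
P = 17.9 / 365.25   # orbital period in years
M = 2.0             # assumed total system mass in solar masses
a = (M * P ** 2) ** (1.0 / 3.0)
print(f"a ~ {a:.2f} AU")   # ~0.17 AU

An orbit of this size is far too tight to resolve visually, consistent with the companion being detectable only spectroscopically.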
The system displays a statistically significant infrared excess at a wavelength of 70 μm. This suggests the presence of an orbiting debris disk. The temperature of the dust is 70 K, indicating an orbital distance of 31 AU. It has an estimated mass of about 0.26% of the Earth.
References
External links
A-type main-sequence stars
Am stars
Eridanus (constellation)
Zibal
Eridani, Zeta
Eridani, 13
020320
015197
0984
Durchmusterung objects
Spectroscopic binaries | Zeta Eridani | Astronomy | 694 |
2,887,653 | https://en.wikipedia.org/wiki/Treadle | A treadle (from , "to tread") is a foot-powered lever mechanism; it is operated by treading on it repeatedly. A treadle, unlike some other types of pedals, is not directly mounted on the crank (see treadle bicycle for a clear example).
Most treadle machines convert reciprocating motion into rotating motion, using a mechanical linkage to indirectly connect one or two treadles to a crank. The treadle then turns the crank, which powers the machine. Other machines use treadles directly, to generate reciprocating motion. For instance, in a treadle loom, the reciprocating motion is used directly to lift and lower the harnesses or heddles; a common treadle pump uses the reciprocating motion to raise and lower pistons.
Before the widespread availability of electric power, treadles were the most common way to power a range of machines. They are still widely used as a matter of preference and necessity. A human-powered machine gives the human operator close, instinctive control over the rate at which energy is fed into the machine; this lets them easily vary the rate at which they work. Treadle-operated machines are also used in environments where electric power is not available to power electric machinery.
Other, similar mechanisms for allowing human and animal muscle to power machines are cranks, treadmills, treadwheels, and kick wheels like a potter's kick wheel.
Operation and uses
A treadle is operated by pressing down on it repeatedly with one or both feet, causing a rocking motion. This movement can then be stored as rotational motion via a crankshaft driving a flywheel. Alternatively, energy can be stored in a spring, as in the pole lathe.
Treadles were once used extensively to power most machines including lathes, rotating or reciprocating saws, spinning wheels, looms, and sewing machines.
Today the use of treadle-powered machines is common in areas of the developing world where other forms of power are unavailable. It is also common among artisans, hobbyists and historical re-enactors.
Some treadle looms in Africa and South Asia use toggles on a string as treadles. The toggles are held between the weaver's toes.
See also
Bicycle pedal
Treadle bicycle
Treadle pump
Sewing machine
References
Mechanical engineering
Human power
Foot
Mechanical hand tools | Treadle | Physics,Engineering | 487 |
27,557,070 | https://en.wikipedia.org/wiki/Mountain%20Pass%20Rare%20Earth%20Mine | The Mountain Pass Rare Earth Mine and Processing Facility, owned by MP Materials, is an open-pit mine of rare-earth elements on the south flank of the Clark Mountain Range in California, southwest of Las Vegas, Nevada. In 2020 the mine supplied 15.8% of the world's rare-earth production. It is the only rare-earth mining and processing facility in the United States. It is the largest single known deposit of such minerals.
As of 2022, work is ongoing to restore processing capabilities for domestic light rare-earth elements (LREEs) and work has been funded by the United States Department of Defense to restore processing capabilities for heavy rare-earth metals (HREEs) to alleviate supply chain risk.
Geology
The Mountain Pass deposit is in a 1.4 billion-year-old Precambrian carbonatite intruded into gneiss. It contains 8% to 12% rare-earth oxides, mostly contained in the mineral bastnäsite. Gangue minerals include calcite, barite, and dolomite. It is regarded as a world-class rare-earth mineral deposit. The metals that can be extracted from it include: cerium, lanthanum, neodymium, and europium.
At 1 July 2020, Proven and Probable Reserves, using a 3.83% total rare-earth oxide (REO) cutoff grade, were 18.9 million tonnes of ore containing 1.36 million tonnes of REO at an average grade of 7.06% REO. The ore body is about thick and long.
Ore processing
To process bastnäsite ore, it is finely ground and subjected to froth flotation to separate the bulk of the bastnäsite from the accompanying barite, calcite, and dolomite. Marketable products include each of the major intermediates of the ore dressing process: flotation concentrate, acid-washed flotation concentrate, calcined acid-washed bastnäsite, and finally a cerium concentrate, which was the insoluble residue left after the calcined bastnäsite had been leached with hydrochloric acid.
The lanthanides that dissolve as a result of the acid treatment are subjected to solvent extraction to capture the europium and purify the other individual components of the ore. A further product includes a lanthanide mix, depleted of much of the cerium, and essentially all of samarium and heavier lanthanides. The calcination of bastnäsite drives off the carbon dioxide content, leaving an oxide-fluoride, in which the cerium content oxidizes to the less-basic quadrivalent state. However, the high temperature of the calcination gives less-reactive oxide, and the use of hydrochloric acid, which can cause reduction of quadrivalent cerium, leads to an incomplete separation of cerium and the trivalent lanthanides.
History
Gold mining began at the site in 1936, but the rare earth deposits were not discovered until 1949 when prospectors in search of uranium noticed anomalously high radioactivity. Molybdenum Corporation of America bought most of the mining claims, and began small-scale production in 1952.
Production expanded greatly in the 1960s, to supply demand for europium used in color television screens. Between 1965 and 1995, the mine supplied most of the worldwide rare-earth metals consumption.
Molybdenum Corporation of America changed its name to Molycorp in 1974. The corporation was acquired by Union Oil in 1977, which in turn became part of Chevron Corporation in 2005.
In 1998, the mine's separation plant ceased production of refined rare-earth compounds; it continued to produce bastnäsite concentrate.
The mine closed in 2002 after a toxic waste spill and was not reopened, owing to competition from Chinese suppliers, though processing of previously mined ore continued.
In 2008, Chevron sold the mine to privately-held Molycorp Minerals LLC, a company formed to revive the Mountain Pass mine. Molycorp announced plans to spend $500 million to reopen and expand the mine, and on July 29, 2010, it raised about $400 million through an initial public offering, selling 28,125,000 shares at $14 under the ticker symbol MCP on the New York Stock Exchange.
In December 2010, Molycorp announced that it had secured all the environmental permits needed to build a new ore processing plant at the mine; construction would begin in January 2011, and was expected to be completed by the end of 2012. On August 27, 2012, the company announced that mining had restarted.
The processing plant was in full production on June 25, 2015, when Molycorp filed for Chapter 11 bankruptcy with outstanding bonds in the amount of US$1.4 billion. The company's shares were removed from the NYSE.
In August 2015, it was reported that the mine was to be shut down.
On August 31, 2016, Molycorp Inc. emerged from bankruptcy as Neo Performance Materials, leaving behind the mine as Molycorp Minerals LLC in its own separate Chapter 11 bankruptcy. As of January 2016, its shares were traded OTC under the symbol MCPIQ.
Mountain Pass was acquired out of bankruptcy in July 2017 with the goal of reviving America's rare-earth industry. MP Materials resumed mining and refining operations in January 2018.
Current ownership
MP Materials is 51.8%-owned by US hedge funds JHL Capital Group (and its CEO James Litinsky) and QVT Financial LP, while Shenghe Resources, a partially state-owned enterprise of the Government of China, holds an 8.0% stake. Apart from institutions, the public owns 18%.
Environmental impact
In the 1980s, the company began piping wastewater up to 14 miles to evaporation ponds on or near Ivanpah Dry Lake, east of Interstate 15 near Nevada. This pipeline repeatedly ruptured during cleaning operations to remove mineral deposits called scale. The scale is radioactive because of the presence of thorium and radium, which occur naturally in the rare-earth ore. A federal investigation later found that some 60 spills—some unreported—occurred between 1984 and 1998, when the pipeline and chemical processing at the mine were shut down. In all, about 600,000 gallons of radioactive and other hazardous waste flowed onto the desert floor, according to federal authorities. By the end of the 1990s, Unocal was served with a cleanup order and a San Bernardino County district attorney's lawsuit. The company paid more than $1.4 million in fines and settlements. After preparing a cleanup plan and completing an extensive environmental study, Unocal in 2004 won approval of a county permit that allowed the mine to operate for another 30 years. The mine passed a key county inspection in 2007.
Current activity
Since 2007, China has restricted exports of REEs (rare-earth elements) and imposed export tariffs, both to conserve resources and to give preference to Chinese manufacturers. In 2009, China supplied more than 96% of the world's REEs. Some outside China are concerned that, because rare earths are essential to some high-tech, renewable-energy, and defense-related technologies, the world should not be so reliant on a single supplier country.
On September 22, 2010, China quietly enacted a ban on exports of rare-earths to Japan, a move suspected to be in retaliation for the Japanese arrest of a Chinese trawler captain in a territorial dispute. Because Japan and China are the only current sources for rare-earth magnetic material used in the US, a permanent disruption of Chinese rare-earth supply to Japan would leave China as the sole source. Jeff Green, a rare-earth lobbyist, said, "We are going to be 100 percent reliant on the Chinese to make the components for the defense supply chain." The House Committee on Science and Technology scheduled on September 23, 2010, the review of a detailed bill to subsidize the revival of the American rare-earths industry, including the reopening of the Mountain Pass mine.
After China doubled import duties on rare-earth concentrates to 25% as a result of the US-China trade war, MP Materials said in May 2019 that it would start its own partial processing operation in the United States, though full processing operations without Shenghe Resources have been delayed. According to Bloomberg, China in 2019 established a plan for restricting U.S. access to Chinese heavy rare-earth elements, should the punitive step be deemed necessary. In 2022, the company announced that it had secured Department of Defense grants to support both light rare-earth elements (LREEs) and heavy rare-earth elements (HREEs). The facility plans to begin separating NdPr oxide in early 2023.
References
Further reading
External links
Mountain Pass mine: geology, history & potential, February 1, 2023, Geology for Investors
Buildings and structures in San Bernardino County, California
Carbonatite occurrences
Geography of San Bernardino County, California
Metallurgical facilities
Mines in California
Rare earth mines
Surface mines in the United States | Mountain Pass Rare Earth Mine | Chemistry,Materials_science | 1,868 |
70,463,845 | https://en.wikipedia.org/wiki/Huawei%20Nova%208 | Huawei Nova 8 is a smartphone manufactured by Huawei. It is a part of Huawei Nova series. It was announced on August 5, 2021.
References
Nova 8
Mobile phones introduced in 2021
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording | Huawei Nova 8 | Technology | 54 |
45,486,934 | https://en.wikipedia.org/wiki/Productive%20matrix | In linear algebra, a square nonnegative matrix $A$ of order $n$ is said to be productive, or to be a Leontief matrix, if there exists a nonnegative column matrix $P$ such that $P - AP$ is a positive matrix.
History
The concept of productive matrix was developed by the economist Wassily Leontief (Nobel Prize in Economics in 1973) in order to model and analyze the relations between the different sectors of an economy. The interdependency linkages between the latter can be examined by the input-output model with empirical data.
Explicit definition
The matrix $A \in \mathcal{M}_{n,n}(\mathbb{R})$, $A \geq 0$, is productive if and only if there exists $P \in \mathcal{M}_{n,1}(\mathbb{R})$, $P \geq 0$, such that $P - AP > 0$.
Here $\mathcal{M}_{r,c}(\mathbb{R})$ denotes the set of $r \times c$ matrices of real numbers, whereas $> 0$ and $\geq 0$ indicate a positive and a nonnegative matrix, respectively.
Properties
The following properties are proven e.g. in the textbook (Michel 1984).
Characterization
Theorem
A nonnegative matrix $A$ is productive if and only if $I_n - A$ is invertible with a nonnegative inverse, where $I_n$ denotes the identity matrix.
Proof
"If" :
Let be invertible with a nonnegative inverse,
Let be an arbitrary column matrix with .
Then the matrix is nonnegative since it is the product of two nonnegative matrices.
Moreover, .
Therefore is productive.
"Only if" :
Let be productive, let such that .
The proof proceeds by reductio ad absurdum.
First, assume for contradiction is singular.
The endomorphism canonically associated with can not be injective by singularity of the matrix.
Thus some non-zero column matrix exists such that .
The matrix has the same properties as , therefore we can choose as an element of the kernel with at least one positive entry.
Hence is nonnegative and reached with at least one value .
By definition of and of , we can infer that:
, using that by construction.
Thus , using that by definition of .
This contradicts and , hence is necessarily invertible.
Second, assume for contradiction is invertible but with at least one negative entry in its inverse.
Hence such that there is at least one negative entry in .
Then is positive and reached with at least one value .
By definition of and of , we can infer that:
, using that by construction
using that by definition of .
Thus , contradicting .
Therefore is necessarily nonnegative.
Transposition
Proposition
The transpose of a productive matrix is productive.
Proof
Let $A$ be a productive matrix.
Then $(I_n - A)^{-1}$ exists and is nonnegative.
Yet $(I_n - A^{\mathsf{T}})^{-1} = ((I_n - A)^{\mathsf{T}})^{-1} = ((I_n - A)^{-1})^{\mathsf{T}}$.
Hence $I_n - A^{\mathsf{T}}$ is invertible with a nonnegative inverse.
Therefore $A^{\mathsf{T}}$ is productive.
Application
With a matrix approach to the input-output model, the consumption matrix is productive if the economy it describes is viable, the consumption matrix and the demand vector both being nonnegative.
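To make the characterization concrete, the following minimal sketch (not from the article; the 2x2 consumption matrix and demand vector are invented for illustration) tests productivity numerically via the theorem above and then solves the associated Leontief system.

```python
import numpy as np

def is_productive(A, tol=1e-12):
    """Test whether the nonnegative square matrix A is productive, using
    the characterization: A is productive iff I - A is invertible with a
    nonnegative inverse."""
    I = np.eye(A.shape[0])
    try:
        inv = np.linalg.inv(I - A)
    except np.linalg.LinAlgError:
        return False                    # I - A singular: not productive
    return bool(np.all(inv >= -tol))    # tolerance absorbs rounding error

# Hypothetical two-sector consumption matrix: entry (i, j) is the amount
# of good i consumed to produce one unit of good j.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
print(is_productive(A))                 # True: (I - A)^(-1) is nonnegative

# Input-output use: the production X meeting final demand D solves
# (I - A) X = D; when A is productive, X is guaranteed nonnegative.
D = np.array([10.0, 5.0])
X = np.linalg.solve(np.eye(2) - A, D)
print(X)                                # [17.5, 13.33...]
```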
References
Mathematical economics
Linear algebra
Matrices
Matrix theory | Productive matrix | Mathematics | 558 |
55,012,592 | https://en.wikipedia.org/wiki/Christian%20Hamel | Christian Hamel (4 October 1955 – 15 August 2017) was a French professor at the Institute for Neurosciences of Montpellier (INM), research unit INSERM 583 of the University of Montpellier, at the Hôpital Saint Eloi. He studied transduction, integration and disorders of sensory and motor systems with the ultimate goal of finding treatments for degeneration of the retina and optic nerve.
Hamel discovered and described the RPE65 protein in 1993. Retinal pigment epithelium-specific 65 kDa protein is an enzyme of the vertebrate visual cycle. The next year he mapped the RPE65 gene to human chromosome 1 (mouse chromosome 3) and refined its localization to 1p31 by fluorescence in situ hybridization. His research focused on finding the causes of inherited diseases of the retina and optic nerve.
References
1955 births
2017 deaths
French medical researchers
Genetic engineering
Engineering | Christian Hamel | Chemistry,Engineering,Biology | 182 |
75,551,391 | https://en.wikipedia.org/wiki/Tylvalosin | Tylvalosin, sold under the brand name Aivlosin, is a macrolide antibiotic used in swine for the treatment of bacterial infections with Mycoplasma hyopneumoniae, which causes enzootic pneumonia. It is used as tylvalosin tartrate.
Mechanism of action
Macrolides are generally considered to be bacteriostatic agents that exert their antibiotic effect by reversibly binding to the 23S rRNA of the 50S ribosomal subunit, thereby inhibiting bacterial protein synthesis.
Medical uses
Tylvalosin is indicated for the control of porcine proliferative enteropathy (PPE) associated with Lawsonia intracellularis infection in groups of swine intended for slaughter, and in female swine intended for breeding, in buildings experiencing an outbreak of PPE; and for the control of swine respiratory disease associated with Bordetella bronchiseptica, Glaesserella (Haemophilus) parasuis, Pasteurella multocida, Streptococcus suis, and Mycoplasma hyopneumoniae in groups of swine intended for slaughter, and in female swine intended for breeding, in buildings experiencing an outbreak of swine respiratory disease. It is not for use in male swine intended for breeding.
References
Macrolide antibiotics
Veterinary drugs
Sugar alcohols
Acetate esters
Tertiary amines
Methoxy compounds
Conjugated dienes
Ketenes | Tylvalosin | Chemistry | 309 |
48,060,564 | https://en.wikipedia.org/wiki/Evgenii%20Nikishin | Evgenii Mikhailovich Nikishin (Евгений Михайлович Никишин; 23 June 1945, in Penza Oblast – 17 December 1986) was a Russian mathematician, who specialized in harmonic analysis.
Biography
Nikishin earned his candidate doctorate at Moscow State University at the age of 24, becoming the youngest candidate of sciences in the history of MSU, and in 1971 received his habilitation (Russian doctorate) at the Steklov Institute under Pyotr Ulyanov (1928–2006). In 1977 he became a professor at Moscow State University, where he remained until his death after a long battle with cancer.
He worked on approximation theory, especially Padé approximants. Nikishin systems of functions are named after him. Also named in his honour is the Nikishin-Stein factorisation theorem, which is a 1970 generalization by Nikishin of the Stein factorisation theorem. Nikishin also did research on rational approximations in number theory and wrote a monograph on such approximations in a unified approach that also treated rational approximations in function spaces.
In 1972 he won the Lenin Komsomol Prize, and in 1973 he won the Salem Prize, which is awarded every year to a young mathematician judged to have done outstanding work worldwide. In 1978 he was an Invited Speaker (The Padé Approximants) at the International Congress of Mathematicians in Helsinki.
Nikishin was a longtime friend and colleague of Anatoly Fomenko, with whom he worked on developing a revised historical chronology.
Selected publications
with Vladimir Nikolaevich Sorokin: Rational Approximations and Orthogonality (Translations of Mathematical Monographs 92, American Mathematical Society, 1991)
References
External links
Mathnet.ru
1945 births
1986 deaths
20th-century Russian mathematicians
Soviet mathematicians
Mathematical analysts | Evgenii Nikishin | Mathematics | 350 |
32,848,664 | https://en.wikipedia.org/wiki/Big%20q-Laguerre%20polynomials | In mathematics, the big q-Laguerre polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Koekoek, Lesky & Swarttouw (2010) give a detailed list of their properties.
Definition
The polynomials are given in terms of basic hypergeometric functions and the q-Pochhammer symbol by
$$P_n(x;a,b;q) = {}_3\phi_2\!\left(\begin{matrix} q^{-n},\; 0,\; x \\ aq,\; bq \end{matrix} \;;\; q,\, q \right).$$
Relation to other polynomials
Big q-Laguerre polynomials→Laguerre polynomials
References
Orthogonal polynomials
Q-analogs
Special hypergeometric functions | Big q-Laguerre polynomials | Mathematics | 88 |
48,279,605 | https://en.wikipedia.org/wiki/Lentinus%20megacystidiatus | Lentinus megacystidiatus is a species of edible mushroom in the family Polyporaceae, first found in northern Thailand.
References
Further reading
Senthilarasu, Gunasekaran. "The lentinoid fungi (Lentinus and Panus) from Western Ghats, India." (2015).
Zmitrovich, I. V., and A. E. Kovalenko. "Lentinoid and polyporoid fungi, two generic conglomerates containing important medicinal mushrooms in molecular perspective." International Journal of Medicinal Mushrooms (2015).
Njouonkou, André-Ledoux, Roy Watling, and Jérôme Degreef. "Lentinus cystidiatus sp. nov. (Polyporaceae): an African lentinoid fungus with an unusual combination of both skeleto-ligative hyphae and pleurocystidia." Plant Ecology and Evolution 146.2 (2013): 240–245.
Polyporaceae
Fungi described in 2011
Fungi of Asia
Fungus species | Lentinus megacystidiatus | Biology | 229 |
5,729,336 | https://en.wikipedia.org/wiki/Countershading | Countershading, or Thayer's law, is a method of camouflage in which an animal's coloration is darker on the top or upper side and lighter on the underside of the body. This pattern is found in many species of mammals, reptiles, birds, fish, and insects, both in predators and in prey.
When light falls from above on a uniformly coloured three-dimensional object such as a sphere, it makes the upper side appear lighter and the underside darker, grading from one to the other. This pattern of light and shade makes the object appear solid, and therefore easier to detect. The classical form of countershading, discovered in 1909 by the artist Abbott Handerson Thayer, works by counterbalancing the effects of self-shadowing, again typically with grading from dark to light. In theory this could be useful for military camouflage, but in practice it has rarely been applied, despite the best efforts of Thayer and, later, in the Second World War, of the zoologist Hugh Cott.
The precise function of various patterns of animal coloration that have been called countershading has been debated by zoologists such as Hannah Rowland (2009), with the suggestion that there may be multiple functions including flattening and background matching when viewed from the side; background matching when viewed from above or below, implying separate colour schemes for the top and bottom surfaces; outline obliteration from above; and a variety of other largely untested non-camouflage theories. A related mechanism, counter-illumination, adds the creation of light by bioluminescence or lamps to match the actual brightness of a background. Counter-illumination camouflage is common in marine organisms such as squid. It has been studied up to the prototype stage for military use in ships and aircraft, but it too has rarely or never been used in warfare.
The reverse of countershading, with the belly pigmented darker than the back, enhances contrast and so makes animals more conspicuous. It is found in animals that can defend themselves, such as skunks. The pattern is used both in startle or deimatic displays and as a signal to warn off experienced predators. However, animals that habitually live upside-down but lack strong defences, such as the Nile catfish and the Luna moth caterpillar, have upside-down countershading for camouflage.
Early research
The English zoologist Edward Bagnall Poulton, author of The Colours of Animals (1890) discovered the countershading of various insects, including the pupa or chrysalis of the purple emperor butterfly, Apatura iris, the caterpillar larvae of the brimstone moth, Opisthograptis luteolata and of the peppered moth, Biston betularia. However he did not use the term countershading, nor did he suggest that the effect occurred widely.
The New Hampshire artist Abbott Handerson Thayer was one of the first to study and write about countershading. In his 1909 book Concealing-Coloration in the Animal Kingdom, he correctly described and illustrated countershading with photographs and paintings, but wrongly claimed that almost all animals are countershaded. For this reason countershading is sometimes called Thayer's law. Thayer wrote:
Thayer observed and painted a number of examples, including the Luna moth caterpillar Actias luna, both in its habitual upside-down feeding position, where its countershading makes it appear flat, and artificially inverted from that position, where sunlight and its inverted countershading combine to make it appear heavily shaded and therefore solid. Thayer obtained a patent in 1902 to paint warships, both submarines and surface ships, using countershading, but failed to convince the US Navy to adopt his ideas.
Hugh Bamford Cott in his 1940 book Adaptive Coloration in Animals described many instances of countershading, following Thayer in general approach but criticising Thayer's excessive claim ("He says 'All patterns and colors whatsoever of all animals that ever prey or are preyed upon are under certain normal circumstances obliterative.'") that effectively all animals are camouflaged with countershading. Cott called this "Thayer straining the theory to a fantastic extreme".
Both Thayer and Cott included in their books photographs of a non-countershaded white cockerel against a white background, to make the point that in Thayer's words "a monochrome object can not be 'obliterated', no matter what its background" or in Cott's words "Colour resemblance alone is not sufficient to afford concealment". Cott explained that
Application
In animals
Countershading is observed in a wide range of animal groups, both terrestrial, such as deer, and marine, such as sharks. It is the basis of camouflage in both predators and prey. It is used alongside other forms of camouflage including colour matching and disruptive coloration. Among predatory fish, the gray snapper, Lutianus griseus, is effectively flattened by its countershading, while it hunts an "almost invisible" prey, the hardhead silverside, Atherina laticeps which swims over greyish sands. Other countershaded marine animals include blue shark, herring, and dolphin; while fish such as the mackerel and sergeant fish are both countershaded and patterned with stripes or spots.
Mesozoic marine reptiles had countershading. Fossilised skin pigmented with dark-coloured eumelanin reveals that ichthyosaurs, leatherback turtles and mosasaurs had dark backs and light bellies. The ornithischian dinosaur Psittacosaurus similarly appears to have been countershaded, implying that its predators detected their prey by deducing shape from shading. Modelling suggests further that the dinosaur was optimally countershaded for a closed habitat such as a forest.
Counter-illumination
Another form of animal camouflage uses bioluminescence to increase the average brightness of an animal to match the brightness of the background. This is called counter-illumination. It is common in mid-water pelagic fish and invertebrates especially squid. It makes the counter-illuminated animal practically invisible to predators viewing it from below. As such, counter-illumination camouflage can be seen as an extension beyond what countershading can achieve. Where countershading only paints out shadows, counter-illumination can add in actual lights, permitting effective camouflage in changing conditions, including where the background is bright enough to make an animal that is not counter-illuminated appear as a shadow.
Military
Countershading, like counter-illumination, has rarely been applied in practice for military camouflage, though not because military authorities were unaware of it. Both Abbott Thayer in the First World War and Hugh Cott in the Second World War proposed countershading to their countries' armed forces. They each demonstrated the effectiveness of countershading, without succeeding in persuading their armed forces to adopt the technique, though they influenced military adoption of camouflage in general.
Cott was a protege of John Graham Kerr who had quarrelled with Norman Wilkinson in the First World War about dazzle camouflage for ships. Wilkinson remained influential in 1939 as an inspector of camouflage, so a political argument developed. Cott was invited to camouflage a 12-inch rail-mounted gun, alongside a similar gun camouflaged conventionally. Cott carefully combined disruptive contrast to break up the gun barrel's outlines with countershading to flatten out its appearance as a solid cylinder. The guns were then photographed from the air from various angles, and in Peter Forbes's view "the results were remarkable." Cott's gun is "invisible except to the most minute scrutiny by someone who knows exactly where to look and what to look for. The other gun is always highly visible." The authorities hesitated, appearing to be embarrassed by the evidence that Cott was right, and argued that countershading would be too difficult to use as an expert zoologist would be needed to supervise every installation. Cott was posted to the Middle East, and Kerr unsuccessfully intervened, pleading for guns to be painted Cott's way and Cott to be brought home.
The Australian zoologist William Dakin in his 1941 book The Art of Camouflage followed Thayer in describing countershading in some detail, and the book was reprinted as a military handbook in 1942. Dakin photographed model birds, much as Thayer and Cott had done, and argued that the shoulders and arms of battledress should be countershaded.
Countershading was described in the US War Department's 1943 Principles of Camouflage, where after four paragraphs of theory and one on its use in nature, the advice given is that:
Inventors have continued to advocate military usage of countershading, with for example a 2005 US patent for personal camouflage including countershading in the form of "statistical countercoloring" with varying sizes of rounded dark patches on a lighter ground.
Research by Ariel Tankus and Yehezkel Yeshurun investigating "camouflage breaking", the automated detection of objects such as tanks, showed that analysing images for convexity by looking for graded shadows can "break very strong camouflage, which might delude even human viewers." More precisely, images are searched for places where the gradient of brightness crosses zero, such as the line where a shadow stops becoming darker and starts to become lighter again. The technique defeated camouflage using disruption of edges, but the authors observed that animals with Thayer countershading are using "counter-measures to convexity based detectors", which implied "predators who use convexity based detectors."
Function
Hannah Rowland, reviewing countershading 100 years after Abbott Thayer, observed that countershading, which she defines as "darker pigmentation on those surfaces exposed to the most lighting" is a common but poorly understood aspect of animal coloration. She noted there had been "much debate" about how countershading works. She considered the evidence for Thayer's theory that this acts as camouflage "by reducing ventral shadowing", and reviewed alternative explanations for countershading.
Camouflage theories of countershading, Rowland wrote, include "self-shadow concealment which results in improved background matching when viewed from the side"; "self-shadow concealment that flattens the form when viewed from the side"; "background matching when viewed from above or below"; and "body outline obliteration when viewed from above". These are examined in turn below.
Flattening and background matching when viewed from the side
Cott, like Thayer, argued that countershading would make animals hard to see from the side, as they would "fade into a ghostly elusiveness". Rowland notes that Cott is here reviewing Thayer's theory and "reinforcing the view that a gradation in shading would act to eliminate the effects of ventral shadowing." Kiltie measured the effect of the countershading of the eastern gray squirrel, Sciurus carolinensis, showing that when the squirrel is horizontal the self-shadowing of the belly is partly concealed, but that when the squirrel is vertical (as when climbing a tree trunk) this effect did not occur.
Thayer's original argument, restated by Cott, was that nature did the exact opposite with countershading that an artist did with paint when creating the illusion of solid three-dimensionality, namely counteracting the effect of shade to flatten out form. Shading is a powerful cue used by animals in different phyla to identify the shapes of objects. Research with chicks showed that they preferred to peck at grains with shadows falling below them (as if illuminated from above), so both humans and birds may make use of shading as a depth cue.
Background matching from above or below
A completely different function of animal (and military vehicle) coloration is to camouflage the top and bottom surfaces differently, to match their backgrounds below and above respectively. This was noted, for example, by Frank Evers Beddard in 1892:
Early researchers including Alfred Russel Wallace, Beddard, Cott and Craik argued that in marine animals including pelagic fish such as marlin and mackerel, as well as dolphins, sharks, and penguins the upper and lower surfaces are sharply distinct in tone, with a dark upper surface and often a nearly white lower surface. They suggested that when seen from the top, the darker dorsal surface of the animal would offer camouflage against the darkness of the deep water below. When seen from below, the lighter ventral area would similarly provide the least possible contrast with the sunlit ocean surface above. There is some evidence for this in birds, where birds that catch fish at a medium depth, rather than at the surface or on the seabed, are more often coloured in this way, and the prey of these birds would see only the underside of the bird. Rowland concluded that each possible role for coloration patterns lumped together as "countershading" needs to be evaluated separately, rather than just assuming it functions effectively.
Outline obliteration from above
Rowland (2009) identified an additional mechanism of countershading not previously analysed, namely that a round body such as a cylinder illuminated and seen from above appears to have dark sides. Using a graphics tool, she demonstrated that this effect can be flattened out by countershading. Since predators are known to use edges to identify prey, countershading may therefore, she argues, make prey harder to detect when seen from above.
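That flattening can be expressed as a small worked computation (an illustrative sketch under simple Lambertian assumptions, not Rowland's actual graphics procedure; all numbers are invented):

```python
import numpy as np

# Angle around the upper half of a horizontal cylinder: 0 = top, ~pi/2 = side.
theta = np.linspace(0.0, 0.95 * np.pi / 2, 7)

illumination = np.cos(theta)              # overhead light: bright top, dark sides
uniform = np.ones_like(theta)             # uniformly coloured animal
countershading = illumination.min() / illumination   # darkest on top

print(np.round(uniform * illumination, 2))         # graded: strong shape cue
print(np.round(countershading * illumination, 2))  # constant: cue erased
```

The countershaded reflectance is the exact inverse of the illumination gradient, so the product, the brightness the viewer sees, is uniform, and the dark edges that would outline the body from above are removed.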
Non-camouflage theories
Non-camouflage theories include protection from ultraviolet light; thermoregulation; and protection from abrasion. All three of these "plausible" theories remained largely untested in 2009, according to Rowland.
Evidence
Despite demonstrations and examples adduced by Cott and others, little experimental evidence for the effectiveness of countershading was gathered in the century since Thayer's discovery. Experiments in 2009 using artificial prey showed that countershaded objects do have survival benefits and in 2012, a study by William Allen and colleagues showed that countershading in 114 species of ruminants closely matched predictions for "self-shadow concealment", the function predicted by Poulton, Thayer and Cott.
Mechanism
Evolutionary developmental biology has assembled evidence from embryology and genetics to show how evolution has acted at all scales from the whole organism down to individual genes, proteins and genetic switches. In the case of countershaded mammals with dark (often brownish) upper parts and lighter (often buff or whitish) under parts, such as in the house mouse, it is the Agouti gene which creates the difference in shading. Agouti encodes for a protein, the Agouti signalling peptide (ASP), which specifically inhibits the action of the Melanocortin 1 receptor (MC1R). In the absence of the Agouti protein, alpha-melanocyte-stimulating hormone stimulates the cells bearing MC1R, melanocytes, to produce dark eumelanin, colouring the skin and fur dark brown or black. In the presence of the Agouti protein, the same system produces the lighter-coloured, yellow or red phaeomelanin. A genetic switch active in the cells of the embryo that will become the belly skin causes the Agouti gene to become active there, creating the countershading seen in adult mammals.
Reverse countershading
If countershading paints out shadows, the reverse, darkening the belly and lightening the back, would maximise contrast by adding to the natural fall of light. This pattern of animal coloration is found in animals such as the skunk and honey badger with strong defences—the offensive stink of the skunk, and the sharp claws, aggressive nature and stink of the honey badger. These animals do not run when under attack, but move slowly, often turning to face the danger, and giving deimatic or threat displays either to startle inexperienced predators, or as an aposematic signal, to warn off experienced ones.
The caterpillar of the Luna moth, as discovered by Thayer, is in Cott's phrase "countershaded in relation to [its] attitude", i.e. shaded with a light back grading to a dark belly, as is the Nile catfish, Synodontis batensoda for the same reason: these animals (and other caterpillars including Automeris io and the eyed hawkmoth, Smerinthus ocellatus) habitually live 'upside down' with the belly uppermost. Similarly in the sea slug Glaucus atlanticus, the reverse countershading is associated with inverted habits. These animals are thus employing countershading in the usual way for camouflage.
Examples in animals
See also
Synodontis nigriventris, an "upside-down" catfish (with reverse countershading)
Counterchanging, a heraldic device of similar appearance
Notes
References
Bibliography
Pioneering books
General reading
Journals
Deception
Antipredator adaptations
Camouflage mechanisms | Countershading | Biology | 3,437 |
11,331,197 | https://en.wikipedia.org/wiki/Dermea%20pseudotsugae | Dermea pseudotsugae is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Dermateaceae
Fungus species | Dermea pseudotsugae | Biology | 37 |
265,044 | https://en.wikipedia.org/wiki/Propellant | A propellant (or propellent) is a mass that is expelled or expanded in such a way as to create a thrust or another motive force in accordance with Newton's third law of motion, and "propel" a vehicle, projectile, or fluid payload. In vehicles, the engine that expels the propellant is called a reaction engine. Although technically a propellant is the reaction mass used to create thrust, the term "propellant" is often used to describe a substance which contains both the reaction mass and the fuel that holds the energy used to accelerate the reaction mass. For example, the term "propellant" is often used in chemical rocket design to describe a combined fuel/propellant, although the propellants should not be confused with the fuel that is used by an engine to produce the energy that expels the propellant. Even though the byproducts of substances used as fuel are also often used as a reaction mass to create the thrust, such as with a chemical rocket engine, propellant and fuel are two distinct concepts.
Vehicles can use propellants to move by ejecting a propellant backwards which creates an opposite force that moves the vehicle forward. Projectiles can use propellants that are expanding gases which provide the motive force to set the projectile in motion. Aerosol cans use propellants which are fluids that are compressed so that when the propellant is allowed to escape by releasing a valve, the energy stored by the compression moves the propellant out of the can and that propellant forces the aerosol payload out along with the propellant. Compressed fluid may also be used as a simple vehicle propellant, with the potential energy that is stored in the compressed fluid used to expel the fluid as the propellant. The energy stored in the fluid was added to the system when the fluid was compressed, such as compressed air. The energy applied to the pump or thermal system that is used to compress the air is stored until it is released by allowing the propellant to escape. Compressed fluid may also be used only as energy storage along with some other substance as the propellant, such as with a water rocket, where the energy stored in the compressed air is the fuel and the water is the propellant.
In electrically powered spacecraft, electricity is used to accelerate the propellant. An electrostatic force may be used to expel positive ions, or the Lorentz force may be used to expel negative ions and electrons as the propellant. Electrothermal engines use the electromagnetic force to heat low molecular weight gases (e.g. hydrogen, helium, ammonia) into a plasma and expel the plasma as propellant. In the case of a resistojet rocket engine, the compressed propellant is simply heated using resistive heating as it is expelled to create more thrust.
In chemical rockets and aircraft, fuels are used to produce an energetic gas that can be directed through a nozzle, thereby producing thrust. In rockets, the burning of rocket fuel produces an exhaust, and the exhausted material is usually expelled as a propellant under pressure through a nozzle. The exhaust material may be a gas, liquid, plasma, or a solid. In powered aircraft without propellers such as jets, the propellant is usually the product of the burning of fuel with atmospheric oxygen so that the resulting propellant product has more mass than the fuel carried on the vehicle.
Proposed photon rockets would use the relativistic momentum of photons to create thrust. Even though photons do not have mass, they can still act as a propellant because they move at relativistic speed, i.e., the speed of light. In this case Newton's third Law of Motion is inadequate to model the physics involved and relativistic physics must be used.
In chemical rockets, chemical reactions are used to produce energy which creates movement of a fluid which is used to expel the products of that chemical reaction (and sometimes other substances) as propellants. For example, in a simple hydrogen/oxygen engine, hydrogen is burned (oxidized) to create water, and the energy from the chemical reaction is used to expel the water (steam) to provide thrust. Often in chemical rocket engines, a higher molecular mass substance is included in the fuel to provide more reaction mass.
Rocket propellant may be expelled through an expansion nozzle as a cold gas, that is, without energetic mixing and combustion, to provide small changes in velocity to spacecraft by the use of cold gas thrusters, usually as maneuvering thrusters.
To attain a useful density for storage, most propellants are stored as either a solid or a liquid.
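As a worked example of the momentum principle described above (a rough sketch; the flow rate, exhaust velocity, and beam power are invented round numbers, not figures for any particular engine):

```python
# Thrust from expelled propellant: F = mdot * v_e (Newton's third law).
mdot = 250.0        # kg/s of propellant expelled (illustrative value)
v_e = 3500.0        # m/s effective exhaust velocity (illustrative value)
print(f"chemical-style thrust: {mdot * v_e / 1000:.0f} kN")   # 875 kN

# Photon propellant: thrust = radiated power / speed of light.
c = 299_792_458.0   # m/s
power = 1.0e9       # W, a hypothetical 1 GW photon beam
print(f"photon thrust: {power / c:.2f} N")                    # ~3.34 N
```

The second computation shows why photon propulsion remains impractical: a gigawatt of radiated light yields only a few newtons of thrust.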
Vehicle propellants
A rocket propellant is a mass that is expelled from a vehicle, such as a rocket, in such a way as to create a thrust in accordance with Newton's third law of motion, and "propel" the vehicle forward. The engine that expels the propellant is called a reaction engine. Although the term "propellant" is often used in chemical rocket design to describe a combined fuel/propellant, propellants should not be confused with the fuel that is used by an engine to produce the energy that expels the propellant. Even though the byproducts of substances used as fuel are also often used as a reaction mass to create the thrust, such as with a chemical rocket engine, propellant and fuel are two distinct concepts.
Propellants may be energized by chemical reactions to expel solid, liquid or gas. Electrical energy may be used to expel gases, plasmas, ions, solids or liquids. Photons may be used to provide thrust via relativistic momentum.
Chemically powered
Solid propellant
Composite propellants made from a solid oxidizer such as ammonium perchlorate or ammonium nitrate, a synthetic rubber such as HTPB, PBAN, or Polyurethane (or energetic polymers such as polyglycidyl nitrate or polyvinyl nitrate for extra energy), optional high-explosive fuels (again, for extra energy) such as RDX or nitroglycerin, and usually a powdered metal fuel such as aluminum.
Some amateur propellants use potassium nitrate, combined with sugar, epoxy, or other fuels and binder compounds.
Potassium perchlorate has been used as an oxidizer, paired with asphalt, epoxy, and other binders.
Propellants that explode in operation are currently of little practical use, although there have been experiments with pulse detonation engines. Newly synthesized bishomocubane-based compounds are also under consideration, at the research stage, as both solid and liquid propellants of the future.
Grain
Solid fuel/propellants are used in forms called grains. A grain is any individual particle of fuel/propellant regardless of the size or shape. The shape and size of a grain determines the burn time, amount of gas, and rate of produced energy from the burning of the fuel and, as a consequence, thrust vs time profile.
There are three types of burns that can be achieved with different grains; a schematic comparison of the three follows the list below.
Progressive burn: usually a grain with multiple perforations or a star cut in the center, providing a large surface area that grows as the grain burns.
Degressive burn: usually a solid grain in the shape of a cylinder or sphere, whose burning surface shrinks as it is consumed.
Neutral burn: usually a single perforation; as the outside surface decreases, the inside surface increases at the same rate.
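A minimal sketch of how geometry drives those profiles (idealized end-inhibited cylindrical grains with constant burn rate; the radii are invented for illustration):

```python
import numpy as np

# Burning surface area per unit grain length as the web distance w is consumed.
R_outer, R_port = 1.0, 0.3          # illustrative outer and port radii
w = np.linspace(0.0, 0.3, 4)        # web burned so far

degressive = 2 * np.pi * (R_outer - w)   # solid rod burning inward: area falls
progressive = 2 * np.pi * (R_port + w)   # central perforation burning outward: area grows
neutral = degressive + progressive       # tube burning on both faces: constant

for name, area in [("degressive", degressive),
                   ("progressive", progressive),
                   ("neutral", neutral)]:
    print(f"{name:12s}", np.round(area, 2))
```

Because the burning surface area tracks the rate of gas production, these three curves correspond to the falling, rising, and flat thrust-time profiles described above.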
Composition
There are four different types of solid fuel/propellant compositions:
Single-based fuel/propellant: a single-based fuel/propellant has nitrocellulose as its chief explosive ingredient. Stabilizers and other additives are used to control the chemical stability and enhance its properties.
Double-based fuel/propellant: double-based fuel/propellants consist of nitrocellulose with nitroglycerin or other liquid organic nitrate explosives added. Stabilizers and other additives are also used. Nitroglycerin reduces smoke and increases the energy output. Double-based fuel/propellants are used in small arms, cannons, mortars and rockets.
Triple-based fuel/propellant: triple-based fuel/propellants consist of nitrocellulose, nitroguanidine, and nitroglycerin or other liquid organic nitrate explosives. Triple-based fuel/propellants are used in cannons.
Composite: composites do not utilize nitrocellulose, nitroglycerin, nitroguanidine or any other organic nitrate as the primary constituent. Composites usually consist of a fuel such as metallic aluminum, a combustible binder such as synthetic rubber or HTPB, and an oxidizer such as ammonium perchlorate. Composite fuel/propellants are used in large rocket motors. In some applications, such as the US SLBM Trident II missile, nitroglycerin is added to the aluminum and ammonium perchlorate composite as an energetic plasticizer.
Liquid propellant
In rockets, three main liquid bipropellant combinations are used: cryogenic oxygen and hydrogen, cryogenic oxygen and a hydrocarbon, and storable propellants.
Cryogenic oxygen-hydrogen combination system: used in upper stages and sometimes in booster stages of space launch systems. This is a nontoxic combination. It gives high specific impulse and is ideal for high-velocity missions.
Cryogenic oxygen-hydrocarbon propellant system: used for many booster stages of space launch vehicles as well as a smaller number of second stages. This combination of fuel/oxidizer has high density and hence allows for a more compact booster design.
Storable propellant combinations: used in almost all bipropellant low-thrust, auxiliary or reaction control rocket engines, as well as in some large rocket engines for first and second stages of ballistic missiles. They are instant-starting and suitable for long-term storage.
Propellant combinations used for liquid propellant rockets include:
Liquid oxygen and liquid hydrogen
Liquid oxygen and kerosene or RP-1
Liquid oxygen and ethanol
Liquid oxygen and methane
Hydrogen peroxide and the above-mentioned alcohol or RP-1
Red fuming nitric acid (RFNA) and kerosene or RP-1
RFNA and Unsymmetrical dimethylhydrazine (UDMH)
Dinitrogen tetroxide and UDMH, MMH, and/or hydrazine
Common monopropellants used for liquid rocket engines include:
Hydrogen peroxide
Hydrazine
Red fuming nitric acid (RFNA)
Electrically powered
Electrically powered reactive engines use a variety of usually ionized propellants, including atomic ions, plasma, electrons, or small droplets or solid particles as propellant.
Electrostatic
If the acceleration is caused mainly by the Coulomb force (i.e. application of a static electric field in the direction of the acceleration) the device is considered electrostatic. The types of electrostatic drives and their propellants:
Gridded ion thruster – using positive ions as the propellant, accelerated by an electrically charged grid
NASA Solar Technology Application Readiness (NSTAR) – positive ions accelerated using high-voltage electrodes
HiPEP – using positive ions as the propellant, created using microwaves
Radiofrequency ion thruster – generalization of HiPEP
Hall-effect thruster, including its subtypes Stationary Plasma Thruster (SPT) and Thruster with Anode Layer (TAL) – use the Hall effect to orient electrons to create positive ions for propellant
Colloid ion thruster – electrostatic acceleration of droplets of liquid salt as the propellant
Field-emission electric propulsion – using electrodes to accelerate ionized liquid metal as a propellant
Nano-particle field extraction thruster – using charged cylindrical carbon nanotubes as propellant
Electrothermal
These are engines that use electromagnetic fields to generate a plasma which is used as the propellant. They use a nozzle to direct the energized propellant. The nozzle itself may be composed simply of a magnetic field. Low molecular weight gases (e.g. hydrogen, helium, ammonia) are preferred propellants for this kind of system.
Resistojet – using a usually inert compressed propellant that is energized by simple resistive heating
Arcjet – uses (usually) hydrazine or ammonia as a propellant which is energized with an electrical arc
Microwave – a type of Radiofrequency ion thruster
Variable specific impulse magnetoplasma rocket (VASIMR) – using microwave-generated plasma as the propellant and magnetic field to direct its expulsion
Electromagnetic
Electromagnetic thrusters use ions as the propellant, which are accelerated by the Lorentz force or by magnetic fields, either of which is generated by electricity:
Electrodeless plasma thruster – a complex system that uses cold plasma as a propellant that is accelerated by ponderomotive force
Magnetoplasmadynamic thruster – propellants include xenon, neon, argon, hydrogen, hydrazine, or lithium; expelled using the Lorentz force
Pulsed inductive thruster – because this reactive engine uses a radial magnetic field, it acts on both positive and negative particles and so it may use a wide range of gases as a propellant including water, hydrazine, ammonia, argon, xenon and many others
Pulsed plasma thruster – uses a Teflon plasma as a propellant, which is created by an electrical arc and expelled using the Lorentz force
Helicon Double Layer Thruster – a plasma propellant is generated and excited from a gas using a helicon induced by high frequency band radiowaves which form a magnetic nozzle in a cylinder
Nuclear
Nuclear reactions may be used to produce the energy for the expulsion of the propellants. Many types of nuclear reactors have been used or proposed to produce electricity for electrical propulsion as outlined above. Nuclear pulse propulsion uses a series of nuclear explosions to create large amounts of energy to expel the products of the nuclear reaction as the propellant. Nuclear thermal rockets use the heat of a nuclear reaction to heat a propellant. Usually the propellant is hydrogen: for a given amount of thermal energy, a lighter propellant is expelled at a higher velocity, so the lightest propellant (hydrogen) produces the greatest specific impulse.
Photonic
A photonic reactive engine uses photons as the propellant and their discrete relativistic energy to produce thrust.
Projectile propellants
Compressed fluid propellants
Compressed fluid or compressed gas propellants are pressurized physically, by a compressor, rather than by a chemical reaction. The pressures and energy densities that can be achieved, while insufficient for high-performance rocketry and firearms, are adequate for most applications, in which case compressed fluids offer a simpler, safer, and more practical source of propellant pressure.
A compressed fluid propellant may simply be a pressurized gas, or a substance which is a gas at atmospheric pressure, but stored under pressure as a liquid.
Compressed gas propellants
In applications in which a large quantity of propellant is used, such as pressure washing and airbrushing, air may be pressurized by a compressor and used immediately. Additionally, a hand pump to compress air can be used for its simplicity in low-tech applications such as atomizers, plant misters and water rockets. The simplest examples of such a system are squeeze bottles for such liquids as ketchup and shampoo.
However, compressed gases are impractical as stored propellants if they do not liquify inside the storage container, because very high pressures are required in order to store any significant quantity of gas, and high-pressure gas cylinders and pressure regulators are expensive and heavy.
Liquified gas propellants
Principle
Liquefied gas propellants are gases at atmospheric pressure, but become liquid at a modest pressure. This pressure is high enough to provide useful propulsion of the payload (e.g. aerosol paint, deodorant, lubricant), but is low enough to be stored in an inexpensive metal can, and to not pose a safety hazard in case the can is ruptured.
The mixture of liquid and gaseous propellant inside the can maintains a constant pressure, called the liquid's vapor pressure. As the payload is depleted, the propellant vaporizes to fill the internal volume of the can. Liquids are typically 500-1000x denser than their corresponding gases at atmospheric pressure; even at the higher pressure inside the can, only a small fraction of its volume needs to be propellant in order to eject the payload and replace it with vapor.
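A rough worked example of that volume argument (the density ratio and can size are illustrative assumptions, not measured values):

```python
# Liquid propellant consumed just to backfill the can with vapor.
can_volume_ml = 400.0     # internal volume eventually occupied by vapor
density_ratio = 250.0     # assumed liquid:vapor density ratio at can pressure
                          # (lower than the ~500-1000x ratio at atmospheric)

liquid_used_ml = can_volume_ml / density_ratio
print(f"{liquid_used_ml:.1f} mL of liquid vaporizes")   # 1.6 mL
```

Only a couple of millilitres of liquid propellant are needed to replace the entire dispensed payload with vapor, which is why a small charge of liquefied gas can empty a much larger can.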
Vaporizing the liquid propellant to gas requires some energy, the enthalpy of vaporization, which cools the system. This is usually insignificant, although it can sometimes be an unwanted effect of heavy usage (as the system cools, the vapor pressure of the propellant drops). However, in the case of a freeze spray, this cooling contributes to the desired effect (although freeze sprays may also contain other components, such as chloroethane, with a lower vapor pressure but higher enthalpy of vaporization than the propellant).
Propellant compounds
Chlorofluorocarbons (CFCs) were once often used as propellants, but since the Montreal Protocol came into force in 1989, they have been replaced in nearly every country due to the negative effects CFCs have on Earth's ozone layer. The most common replacements of CFCs are mixtures of volatile hydrocarbons, typically propane, n-butane and isobutane. Dimethyl ether (DME) and methyl ethyl ether are also used. All these have the disadvantage of being flammable. Nitrous oxide and carbon dioxide are also used as propellants to deliver foodstuffs (for example, whipped cream and cooking spray). Medicinal aerosols such as asthma inhalers use hydrofluoroalkanes (HFA): either HFA 134a (1,1,1,2,-tetrafluoroethane) or HFA 227 (1,1,1,2,3,3,3-heptafluoropropane) or combinations of the two. More recently, liquid hydrofluoroolefin (HFO) propellants have become more widely adopted in aerosol systems due to their relatively low vapor pressure, low global warming potential (GWP), and nonflammability.
Payloads
The practicality of liquified gas propellants allows for a broad variety of payloads. Aerosol sprays, in which a liquid is ejected as a spray, include paints, lubricants, degreasers, and protective coatings; deodorants and other personal care products; cooking oils. Some liquid payloads are not sprayed due to lower propellant pressure and/or viscous payload, as with whipped cream and shaving cream or shaving gel. Low-power guns, such as BB guns, paintball guns, and airsoft guns, have solid projectile payloads. Uniquely, in the case of a gas duster ("canned air"), the only payload is the velocity of the propellant vapor itself.
See also
Cartridge (firearms)
Explosive material
Fuel
Propellant depot
Spacecraft propulsion
Specific impulse
Tubes and primers for ammunition
References
Bibliography
External links
Rocket Propellants
Rocket propulsion elements, Sutton, George.P, Biblarz, Oscar 7th Ed
Understanding and Predicting Gun Barrel Erosion – Weapons Systems Division Defence Science and Technology Organisation by Ian A. Johnston
Armament Research Development and Engineering Center – Enhanced Propellant and Cartridge Case Designs
Ammunition
Ballistics
Pyrotechnics
Industrial gases | Propellant | Physics,Chemistry | 4,656 |
38,731,429 | https://en.wikipedia.org/wiki/Snub%20order-6%20square%20tiling | In geometry, the snub order-6 square tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of s{(4,4,3)} or s{4,6}.
Images
Symmetry
The symmetry is doubled as a snub order-6 square tiling, with only one color of square. It has Schläfli symbol of s{4,6}.
Related polyhedra and tiling
The vertex figure 3.3.3.4.3.4 does not uniquely generate a uniform hyperbolic tiling. Another with quadrilateral fundamental domain (3 2 2 2) and 2*32 symmetry is generated by :
See also
Square tiling
Uniform tilings in hyperbolic plane
List of regular polytopes
Footnotes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Snub tilings
Uniform tilings | Snub order-6 square tiling | Physics | 261 |
25,335,695 | https://en.wikipedia.org/wiki/Perceptual%20learning | Perceptual learning is the learning of improved perceptual skills, such as differentiating two musical tones from one another, or of categorizations of spatial and temporal patterns relevant to real-world expertise. Examples of this may include reading, seeing relations among chess pieces, and knowing whether or not an X-ray image shows a tumor.
Sensory modalities may include visual, auditory, tactile, olfactory, and taste. Perceptual learning forms important foundations of complex cognitive processes (i.e., language) and interacts with other kinds of learning to produce perceptual expertise. Underlying perceptual learning are changes in the neural circuitry. The ability for perceptual learning is retained throughout life.
Basic sensory discrimination
Laboratory studies have reported many examples of dramatic improvements in sensitivity from appropriately structured perceptual learning tasks. In visual Vernier acuity tasks, observers judge whether one line is displaced above or below a second line. Untrained observers are often already very good at this task, but after training, observers' thresholds have been shown to improve as much as 6-fold. Similar improvements have been found for visual motion discrimination and orientation sensitivity.
In visual search tasks, observers are asked to find a target object hidden among distractors or in noise. Studies of perceptual learning with visual search show that experience leads to great gains in sensitivity and speed. In one study by Karni and Sagi, the time it took for subjects to search for an oblique line among a field of horizontal lines was found to improve dramatically, from about 200ms in one session to about 50ms in a later session. With appropriate practice, visual search can become automatic and very efficient, such that observers do not need more time to search when there are more items present on the search field. Tactile perceptual learning has been demonstrated on spatial acuity tasks such as tactile grating orientation discrimination, and on vibrotactile perceptual tasks such as frequency discrimination; tactile learning on these tasks has been found to transfer from trained to untrained fingers. Practice with Braille reading and daily reliance on the sense of touch may underlie the enhancement in tactile spatial acuity of blind compared to sighted individuals.
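Search efficiency of this kind is usually quantified by the slope of response time against the number of items on the display; a near-zero slope is the signature of automatic, "pop-out" search. A minimal sketch with invented response times (the values below are illustrative, not data from the cited studies):

```python
import numpy as np

set_sizes = np.array([4, 8, 16, 32])            # items on the display
rt_novice = np.array([520, 610, 790, 1150])     # ms, invented values
rt_trained = np.array([450, 455, 452, 458])     # ms, invented values

for label, rt in [("novice", rt_novice), ("trained", rt_trained)]:
    slope, intercept = np.polyfit(set_sizes, rt, 1)   # RT = slope*n + intercept
    print(f"{label:8s} search slope: {slope:5.1f} ms/item")
```

After practice the fitted slope approaches zero: adding distractors no longer adds search time.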
In the natural world
Perceptual learning is prevalent and occurs continuously in everyday life. "Experience shapes the way people see and hear." Experience provides the sensory input to our perceptions as well as knowledge about identities. When people are less knowledgeable about different races and cultures, they may develop stereotypes because of this limited exposure. Perceptual learning is a more in-depth relationship between experience and perception: different perceptions of the same sensory input may arise in individuals with different experiences or training. This raises important issues about the ontology of sensory experience and the relationship between cognition and perception.
An example of this is money. We look at money every day and recognize it instantly, yet when asked to pick out the correct coin from among similar coins with slight differences, we may have trouble finding the difference. This is because, although we see coins every day, we never directly try to tell them apart. Perceptual learning here means learning to perceive differences and similarities among stimuli based on exposure to those stimuli. A study conducted by Gibson in 1955 illustrates how exposure to stimuli can affect how well we learn details for different stimuli.
As our perceptual system adapts to the natural world, we become better at discriminating between different stimuli when they belong to different categories than when they belong to the same category. We also tend to become less sensitive to the differences between two instances of the same category. These effects are described as the result of categorical perception. Categorical perception effects do not transfer across domains.
Infants tend to lose sensitivity, by 10 months of age, to differences between speech sounds that belong to the same phonetic category in their native language. They learn to pay attention to salient differences between native phonetic categories and to ignore the less language-relevant ones. In chess, expert players encode larger chunks of positions and relations on the board and require fewer exposures to fully recreate a chess board. This is not due to their possessing superior visual skill, but rather to their advanced extraction of structural patterns specific to chess.
When a woman has a baby, shortly after the birth she becomes able to decipher differences in her baby's cries, because she grows more sensitive to their distinctions: she can tell whether a cry means the baby is hungry, needs to be changed, and so on.
Extensive practice reading in English leads to extraction and rapid processing of the structural regularities of English spelling patterns. The word superiority effect demonstrates this—people are often much faster at recognizing words than individual letters.
In speech perception, observers who listen to a continuum of equally spaced consonant-vowel syllables going from /be/ to /de/ are much quicker to indicate that two syllables are different when they belong to different phonemic categories than when they are two variants of the same phoneme, even when the physical differences between each pair of syllables are equated.
Other examples of perceptual learning in the natural world include the ability to distinguish between relative pitches in music, identify tumors in x-rays, sort day-old chicks by gender, taste the subtle differences between beers or wines, identify faces as belonging to different races, detect the features that distinguish familiar faces, discriminate between two bird species ("great blue crown heron" and "chipping sparrow"), and attend selectively to the hue, saturation and brightness values that comprise a color definition.
Brief history
The prevalent idiom that "practice makes perfect" captures the essence of the ability to reach impressive perceptual expertise. This has been demonstrated for centuries and through extensive amounts of practice in skills such as wine tasting, fabric evaluation, or musical preference. The first documented report, dating to the mid-19th century, concerns tactile training aimed at decreasing the minimal distance at which individuals can discriminate whether one or two points on their skin have been touched. It was found that this distance (the just noticeable difference, or JND) decreases dramatically with practice, and that this improvement is at least partially retained on subsequent days. Moreover, the improvement is at least partially specific to the trained skin area. A particularly dramatic improvement was found for skin positions at which initial discrimination was very crude (e.g. on the back), though training could not bring the JND of initially crude areas down to that of initially accurate ones (e.g. the fingertips).
William James devoted a section in his Principles of Psychology (1890/1950) to "the improvement in discrimination by practice". He noted examples and emphasized the importance of perceptual learning for expertise. In 1918, Clark L. Hull, a noted learning theorist, trained human participants to categorize deformed Chinese characters. For each category, he used six instances that shared some invariant structural property. People learned to associate a sound as the name of each category and, more importantly, they were able to classify novel characters accurately. This ability to extract invariances from instances and apply them to classify new instances marked this study as a perceptual learning experiment.
It was not until 1969, however, that Eleanor Gibson published her seminal book The Principles of Perceptual Learning and Development and defined the modern field of perceptual learning. She established the study of perceptual learning as an inquiry into the behavior and mechanism of perceptual change. By the mid-1970s, however, this area was in a state of dormancy due to a shift in focus to perceptual and cognitive development in infancy. Much of the scientific community tended to underestimate the impact of learning compared with innate mechanisms, so most of this research focused on characterizing the basic perceptual capacities of young infants rather than on perceptual learning processes.
Since the mid-1980s, there has been a new wave of interest in perceptual learning due to findings of cortical plasticity at the lowest levels of sensory systems. Our increased understanding of the physiology and anatomy of cortical systems has been used to connect behavioral improvement to the underlying cortical areas. This trend began with the earlier findings of Hubel and Wiesel that perceptual representations at sensory areas of the cortex are substantially modified during a short ("critical") period immediately following birth. Merzenich, Kaas and colleagues showed that though neuroplasticity is diminished, it is not eliminated when the critical period ends. Thus, when the external pattern of stimulation is substantially modified, neuronal representations in lower-level (e.g. primary) sensory areas are also modified. Research in this period centered on basic sensory discriminations, where remarkable improvements were found on almost any sensory task through discrimination practice. Following training, subjects were tested with novel conditions and learning transfer was assessed. This work departed from earlier work on perceptual learning, which spanned different tasks and levels.
A question still debated today is to what extent improvements from perceptual learning stem from peripheral modifications as opposed to improvement in higher-level readout stages. Early interpretations, such as that suggested by William James, attributed it to higher-level categorization mechanisms whereby initially blurred differences are gradually associated with distinctively different labels. The work focused on basic sensory discrimination, however, suggests that the effects of perceptual learning are specific to changes in low levels of the sensory nervous system (i.e., primary sensory cortices). More recently, research suggests that perceptual learning processes are multilevel and flexible. This cycles back to the earlier Gibsonian view that low-level learning effects are modulated by high-level factors, and suggests that improvement in information extraction may involve not only low-level sensory coding but also the apprehension of relatively abstract structure and relations in time and space.
Within the past decade, researchers have sought a more unified understanding of perceptual learning and worked to apply these principles to improve perceptual learning in applied domains.
Characteristics
Discovery and fluency effects
Perceptual learning effects can be organized into two broad categories: discovery effects and fluency effects. Discovery effects involve some change in the bases of response, such as selecting new information relevant for the task, amplifying relevant information, or suppressing irrelevant information. Experts extract larger "chunks" of information and discover high-order relations and structures in their domains of expertise that are invisible to novices. Fluency effects involve changes in the ease of extraction: not only can experts process high-order information, they do so with great speed and low attentional load. Discovery and fluency effects work together, so that as the discovered structures become more automatic, attentional resources are conserved for the discovery of new relations and for high-level thinking and problem-solving.
The role of attention
William James (Principles of Psychology, 1890) asserted that "My experience is what I agree to attend to. Only those items which I notice shape my mind - without selective interest, experience is an utter chaos." His view was extreme, yet its gist was largely supported by subsequent behavioral and physiological studies. Mere exposure does not seem to suffice for acquiring expertise.
Indeed, a relevant signal in a given behavioral condition may be considered noise in another. For example, when presented with two similar stimuli, one might endeavor to study the differences between their representations in order to improve one's ability to discriminate between them, or one may instead concentrate on the similarities to improve one's ability to identify both as belonging to the same category. A specific difference between them could be considered 'signal' in the first case and 'noise' in the second case. Thus, as we adapt to tasks and environments, we pay increasingly more attention to the perceptual features that are relevant and important for the task at hand, and at the same time, less attention to the irrelevant features. This mechanism is called attentional weighting.
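To make attentional weighting concrete, here is a minimal illustrative sketch (written for exposition, not taken from the studies cited here) in the spirit of exemplar models such as the generalized context model: classification evidence is a weighted similarity to stored exemplars, and learning shifts the weights toward the task-relevant dimension. All stimulus values below are hypothetical.

```python
import math

def similarity(x, y, weights, c=2.0):
    # Weighted city-block similarity: exp(-c * sum_m w_m * |x_m - y_m|)
    distance = sum(w * abs(a - b) for w, a, b in zip(weights, x, y))
    return math.exp(-c * distance)

# Hypothetical two-dimensional stimuli; only dimension 2 separates A from B.
exemplars = {"A": [(0.10, 0.80), (0.20, 0.90)],
             "B": [(0.15, 0.20), (0.25, 0.10)]}
probe = (0.20, 0.75)

# Before learning, attention is spread evenly; after learning, it is
# concentrated on the diagnostic dimension (attentional weighting).
for weights in [(0.5, 0.5), (0.05, 0.95)]:
    evidence = {cat: sum(similarity(probe, e, weights) for e in exs)
                for cat, exs in exemplars.items()}
    print(weights, {cat: round(v, 2) for cat, v in evidence.items()})
```

With even weights the evidence for the two categories differs only modestly; with attention concentrated on the diagnostic dimension, the same exemplars separate the categories much more sharply.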
However, recent studies suggest that perceptual learning can also occur without selective attention. Studies of such task-irrelevant perceptual learning (TIPL) show that the degree of TIPL is similar to that found through direct training procedures. TIPL for a stimulus depends on the relationship between that stimulus and important task events, or upon stimulus-reward contingencies. It has thus been suggested that learning (of task-irrelevant stimuli) is contingent upon spatially diffusive learning signals. Similar effects, but on a shorter time scale, have been found for memory processes and are in some cases called attentional boosting. Thus, when an important (alerting) event occurs, learning may also affect concurrent, non-attended and non-salient stimuli.
Time course of perceptual learning
The time course of perceptual learning varies from one participant to another. Perceptual learning occurs not only within the first training session but also between sessions. Fast learning (i.e., within-first-session learning) and slow learning (i.e., between-session learning) involve different changes in the human adult brain. While the fast learning effects can be retained only for a short term of several days, the slow learning effects can be preserved for a long term, over several months.
Explanations and models
Receptive field modification
Research on basic sensory discriminations often show that perceptual learning effects are specific to the trained task or stimulus. Many researchers take this to suggest that perceptual learning may work by modifying the receptive fields of the cells (e.g., V1 and V2 cells) that initially encode the stimulus. For example, individual cells could adapt to become more sensitive to important features, effectively recruiting more cells for a particular purpose, making some cells more specifically tuned for the task at hand. Evidence for receptive field change has been found using single-cell recording techniques in primates in both tactile and auditory domains.
However, not all perceptual learning tasks are specific to the trained stimuli or tasks. Sireteanu and Rettenbach discussed discrimination learning effects that generalize across eyes, retinal locations and tasks. Ahissar and Hochstein used visual search to show that learning to detect a single line element hidden in an array of differently oriented line segments could generalize to positions at which the target was never presented. In human vision, not enough receptive field modification has been found in early visual areas to explain perceptual learning. Training that produces large behavioral changes, such as improvements in discrimination, does not produce changes in receptive fields. In studies where changes have been found, the changes are too small to explain the changes in behavior.
Reverse hierarchy theory
The Reverse Hierarchy Theory (RHT), proposed by Ahissar and Hochstein, aims to link learning dynamics and specificity to the underlying neuronal sites. RHT proposes that naïve performance is based on responses at high-level cortical areas, where crude, categorical-level representations of the environment reside. Hence initial learning stages involve understanding global aspects of the task. Subsequent practice may yield better perceptual resolution as a consequence of accessing lower-level information via the feedback connections going from high to low levels. Accessing the relevant low-level representations requires a backward search during which informative input populations of neurons in the low level are allocated. Hence, subsequent learning and its specificity reflect the resolution of lower levels. RHT thus proposes that initial performance is limited by the high-level resolution, whereas post-training performance is limited by the resolution at low levels. Since the high-level representations of different individuals differ due to their prior experience, their initial learning patterns may differ. Several imaging studies are in line with this interpretation, finding that initial performance is correlated with average (BOLD) responses at higher-level areas, whereas subsequent performance is more correlated with activity at lower-level areas. RHT proposes that modifications at low levels will occur only when the backward search (from high to low levels of processing) is successful. Such success requires that the backward search "know" which neurons in the lower level are informative. This "knowledge" is gained by training repeatedly on a limited set of stimuli, such that the same lower-level neuronal populations are informative during several trials. Recent studies found that mixing a broad range of stimuli may also yield effective learning if these stimuli are clearly perceived as different or are explicitly tagged as different. These findings further support the requirement for top-down guidance in order to obtain effective learning.
Enrichment versus differentiation
In some complex perceptual tasks, all humans are experts. We are all very sophisticated, but not infallible, at scene identification, face identification and speech perception. Traditional explanations attribute this expertise to some holistic, somewhat specialized, mechanisms. Perhaps such quick identifications are achieved by more specific and complex perceptual detectors that gradually "chunk" (i.e., unitize) features that tend to co-occur, making it easier to pull in a whole set of information. Whether any concurrence of features can gradually be chunked with practice, or chunking can only be obtained with some predisposition (e.g. faces, phonological categories), is an open question. Current findings suggest that such expertise is correlated with a significant increase in the cortical volume involved in these processes. Thus, we all have somewhat specialized face areas, which may reveal an innate property, but we also develop somewhat specialized areas for written words as opposed to single letters or strings of letter-like symbols. Moreover, special experts in a given domain have larger cortical areas involved in that domain. Thus, expert musicians have larger auditory areas. These observations are in line with traditional theories of enrichment proposing that improved performance involves an increase in cortical representation. For this expertise, basic categorical identification may be based on enriched and detailed representations, located to some extent in specialized brain areas. Physiological evidence suggests that training for refined discrimination along basic dimensions (e.g. frequency in the auditory modality) also increases the representation of the trained parameters, though in these cases the increase may mainly involve lower-level sensory areas.
Selective reweighting
In 2005, Petrov, Dosher and Lu pointed out that perceptual learning may be explained in terms of the selection of which analyzers best perform the classification, even in simple discrimination tasks. They explain that some parts of the neural system responsible for particular decisions have specificity, while low-level perceptual units do not. In their model, encodings at the lowest level do not change. Rather, the changes that occur in perceptual learning arise from changes in higher-level, abstract representations of the relevant stimuli. Because specificity can come from differentially selecting information, this "selective reweighting theory" allows for learning of complex, abstract representations. This corresponds to Gibson's earlier account of perceptual learning as the selection and learning of distinguishing features. Selection may be the unifying principle of perceptual learning at all levels.
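A schematic sketch of the idea follows (a toy delta-rule learner written for illustration; the published model is more elaborate): the low-level channel responses are fixed, and learning adjusts only the readout weights of the decision unit.

```python
import random

def channels(stimulus):
    # Fixed low-level front end: three channels with different gains.
    # These encodings never change during learning.
    return [stimulus * gain for gain in (0.9, 0.5, 0.1)]

weights = [0.1, 0.1, 0.1]   # readout weights, the only thing that learns
learning_rate = 0.05

for _ in range(200):
    stimulus = random.choice([-1.0, 1.0])          # two stimulus classes
    target = 1.0 if stimulus > 0 else -1.0
    responses = channels(stimulus)
    decision = sum(w * r for w, r in zip(weights, responses))
    error = target - decision
    # Delta rule: reweight the readout; the channels stay untouched.
    weights = [w + learning_rate * error * r
               for w, r in zip(weights, responses)]

print([round(w, 2) for w in weights])  # heavier weight on high-gain channels
```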
The impact of training protocol and the dynamics of learning
Ivan Pavlov discovered classical conditioning. He found that when a stimulus (e.g. a sound) is immediately followed by food several times, the mere presentation of this stimulus subsequently elicits salivation in a dog's mouth. He further found that when he used a differential protocol, consistently presenting food after one stimulus while not presenting food after another, dogs were quickly conditioned to salivate selectively in response to the rewarded one. He then asked whether this protocol could be used to increase perceptual discrimination, by differentially rewarding two very similar stimuli (e.g. tones of similar frequency). However, he found that differential conditioning was not effective.
Pavlov's studies were followed by many training studies which found that an effective way to increase perceptual resolution is to begin with a large difference along the required dimension and gradually proceed to small differences along this dimension. This easy-to-difficult transfer was termed "transfer along a continuum".
These studies showed that the dynamics of learning depend on the training protocol, rather than on the total amount of practice. Moreover, it seems that the strategy implicitly chosen for learning is highly sensitive to the choice of the first few trials during which the system tries to identify the relevant cues.
Consolidation and sleep
Several studies asked whether learning takes place during practice sessions or in between, for example, during subsequent sleep. The dynamics of learning are hard to evaluate since the directly measured parameter is performance, which is affected by both learning, inducing improvement, and fatigue, which hampers performance. Current studies suggest that sleep contributes to improved and durable learning effects, by further strengthening connections in the absence of continued practice. Both slow-wave and REM (rapid eye movement) stages of sleep may contribute to this process, via not-yet-understood mechanisms.
Comparison and contrast
Practice with comparison and contrast of instances that belong to the same or different categories allows for the pickup of the distinguishing features (those important for the classification task) and the filtering out of irrelevant features.
Task difficulty
Learning easy examples first may lead to better transfer and better learning of more difficult cases.
By recording ERPs from human adults, Ding and colleagues investigated the influence of task difficulty on the brain mechanisms of visual perceptual learning. The results showed that difficult-task training affected an earlier visual processing stage and broader visual cortical regions than easy-task training.
Active classification and attention
Active classification effort and attention are often necessary to produce perceptual learning effects. However, in some cases, mere exposure to certain stimulus variations can produce improved discriminations.
Feedback
In many cases, perceptual learning does not require feedback about whether the classification is correct. Other studies suggest that block feedback (feedback given only after a block of trials) produces more learning than no feedback at all.
Limits
Despite the marked perceptual learning demonstrated in different sensory systems and under varied training paradigms, it is clear that perceptual learning must face certain unsurpassable limits imposed by the physical characteristics of the sensory system. For instance, in tactile spatial acuity tasks, experiments suggest that the extent of learning is limited by fingertip surface area, which may constrain the underlying density of mechanoreceptors.
Relations to other forms of learning
Declarative & procedural learning
In many domains of real-world expertise, perceptual learning interacts with other forms of learning. Declarative knowledge tends to develop alongside perceptual learning. As we learn to distinguish between an array of wine flavors, we also develop a vocabulary to describe the intricacy of each flavor.
Similarly, perceptual learning interacts flexibly with procedural knowledge. For example, a baseball player at bat can use perceptual expertise to detect, early in the ball's flight, whether the pitcher threw a curveball. However, the perceptual differentiation of the feel of swinging the bat in various ways may also have been involved in learning the motor commands that produce the required swing.
Implicit learning
Perceptual learning is often said to be implicit, such that learning occurs without awareness. It is not at all clear whether perceptual learning is always implicit. Changes in sensitivity that arise are often not conscious and do not involve conscious procedures, but perceptual information can be mapped onto various responses.
In complex perceptual learning tasks (e.g., sorting of newborn chicks by sex, playing chess), experts are often unable to explain what stimulus relationships they are using in classification. However, in less complex perceptual learning tasks, people can point out what information they're using to make classifications.
Category learning vs. perceptual learning
Perceptual learning is distinguished from category learning. Perceptual learning generally refers to the enhancement of the detectability of a perceptual item or of the discriminability between two or more items. In contrast, category learning involves labeling or categorizing an item into a particular group or category. However, in some cases the two overlap. For instance, to discriminate between two items, a categorical difference between them may sometimes be utilized, in which case category learning, rather than perceptual learning, is thought to occur. Although perceptual learning and category learning are distinct forms of learning, they can interact. For example, category learning that groups multiple orientations into different categories can lead perceptual learning of one orientation to transfer to other orientations within the same category as the trained orientation. This is termed "category-induced perceptual learning".
Neuropsychology of perceptual category learning
Multiple category learning systems may mediate the learning of different category structures. "Two systems that have received support are a frontal-based explicit system that uses logical reasoning, depends on working memory and executive attention, and is mediated primarily by the anterior cingulate, the prefrontal cortex and the associative striatum, including the head of the caudate. The second is a basal ganglia-mediated implicit system that uses procedural learning, requires a dopamine reward signal and is mediated primarily by the sensorimotor striatum." Studies have shown significant involvement of the striatum and less involvement of the medial temporal lobes in category learning. In people who have striatal damage, the need to ignore irrelevant information is more predictive of a rule-based category learning deficit, whereas the complexity of the rule is predictive of an information-integration category learning deficit.
Applications
Improving perceptual skills
An important potential application of perceptual learning is the acquisition of skill for practical purposes. Thus it is important to understand whether training for increased resolution in lab conditions induces a general upgrade that transfers to other environmental contexts, or results from mechanisms that are context-specific. Improvement in complex skills is typically gained by training under complex simulation conditions rather than on one component at a time. Recent lab-based training protocols with complex action computer games have shown that such practice indeed modifies visual skills in a general way that transfers to new visual contexts. In 2010, Achtman, Green and Bavelier reviewed the research on video games for training visual skills. They cite a previous review by Green & Bavelier (2006) on using video games to enhance perceptual and cognitive abilities. A variety of skills were upgraded in video game players, including "improved hand-eye coordination, increased processing in the periphery, enhanced mental rotation skills, greater divided attention abilities, and faster reaction times, to name a few". An important characteristic is the functional increase in the size of the effective visual field (within which viewers can identify objects), which is trained in action games and transfers to new settings. Whether learning of simple discriminations, which are trained in separation, transfers to new stimulus contexts (e.g. complex stimulus conditions) is still an open question.
Like experimental procedures, other attempts to apply perceptual learning methods to basic and complex skills use training situations in which the learner receives many short classification trials. Tallal, Merzenich and their colleagues have successfully adapted auditory discrimination paradigms to address speech and language difficulties. They reported improvements in language-learning-impaired children using specially enhanced and extended speech signals. The results applied not only to auditory discrimination performance but to speech and language comprehension as well.
Technologies for perceptual learning
In educational domains, recent efforts by Philip Kellman and colleagues showed that perceptual learning can be systematically produced and accelerated using specific, computer-based technology. Their approach to perceptual learning methods takes the form of perceptual learning modules (PLMs): sets of short, interactive trials that develop, in a particular domain, learners' pattern recognition, classification abilities, and their abilities to map across multiple representations. As a result of practice with mapping across transformations (e.g., algebra, fractions) and across multiple representations (e.g., graphs, equations, and word problems), students show dramatic gains in their structure recognition in fraction learning and algebra. They also demonstrated that when students practice classifying algebraic transformations using PLMs, the results show remarkable improvements in fluency at algebra problem solving. These results suggest that perceptual learning can offer a needed complement to conceptual and procedural instruction in the classroom.
Similar results have also been replicated in other domains with PLMs, including anatomic recognition in medical and surgical training, reading instrumental flight displays, and apprehending molecular structures in chemistry.
See also
Adaptation
Categorical perception
Category learning
Cognitive development
Educational psychology
Eureka effect
Implicit learning
Neuroplasticity
Pattern recognition
References
Behavioral concepts
Learning
Perception
Sources of knowledge | Perceptual learning | Biology | 5,941 |
18,084,359 | https://en.wikipedia.org/wiki/Bell%20nipple | A bell nipple is a section of large-diameter pipe fitted to the top of the blowout preventers; the flow line attaches to it via a side outlet, allowing the drilling fluid to flow back over the shale shakers to the mud tanks.
See Drilling rig (petroleum) for a diagram.
Oilfield terminology
Drilling technology
Petroleum engineering | Bell nipple | Chemistry,Engineering | 69 |
4,417,192 | https://en.wikipedia.org/wiki/Hemopoietic%20growth%20factor | Hemopoietic growth factors regulate the differentiation and proliferation of particular progenitor cells. Made available through recombinant DNA technology, they hold tremendous potential for medical uses when a person's natural ability to form blood cells is diminished or defective. Recombinant erythropoietin (EPO) is very effective in treating the diminished red blood cell production that accompanies end-stage kidney disease. Erythropoietin is a sialoglycoprotein hormone produced by the peritubular cells of the kidney.
Granulocyte-macrophage colony-stimulating factor and granulocyte CSF are given to stimulate white blood cell formation in cancer patients who are receiving chemotherapy, which tends to kill their red bone marrow cells as well as the cancer cells. Thrombopoietin shows great promise for preventing platelet depletion during chemotherapy. CSFs and thrombopoietin also improve the outcome of patients who receive bone marrow transplants.
Types
Erythropoietin is a glycoprotein hormone secreted by the interstitial fibroblast cells of the kidneys in response to low oxygen levels. It prompts the production of erythrocytes.
Thrombopoietin, another glycoprotein hormone, is produced by the liver and kidneys. It triggers the development of megakaryocytes into platelets.
Cytokines are glycoproteins secreted by a wide variety of cells, including red bone marrow, leukocytes, macrophages, fibroblasts, and endothelial cells. They act locally as autocrine or paracrine factors, stimulating the proliferation of progenitor cells and helping to stimulate both nonspecific and specific resistance to disease. There are two major subtypes of cytokines known as colony-stimulating factors and interleukins.
Colony-stimulating factors are glycoproteins that act locally, as autocrine or paracrine factors. Some trigger the differentiation of myeloblasts into granular leukocytes, namely neutrophils, eosinophils, and basophils; these are referred to as granulocyte CSFs. Others induce the production of monocytes and are called monocyte CSFs. Both granulocytes and monocytes are stimulated by GM-CSF; granulocytes, monocytes, platelets, and erythrocytes are all stimulated by multi-CSF.
Interleukins are another class of cytokine signaling molecules important in hemopoiesis. They were initially thought to be secreted uniquely by leukocytes and to communicate only with other leukocytes, and were named accordingly, but are now known to be produced by a variety of cells including bone marrow and endothelium. Researchers now suspect that interleukins may play other roles in body functioning, including differentiation and maturation of cells, producing immunity and inflammation. To date, more than a dozen interleukins have been identified, with others likely to follow. They are generally numbered IL-1, IL-2, IL-3, etc.
Clinical implications
Some athletes use synthetic erythropoietin as a performance-enhancing drug to increase RBC counts and thereby increase oxygen delivery to tissues throughout the body. Erythropoietin is a banned substance in most organized sports, but it is also used medically in the treatment of certain anemias, specifically those triggered by certain types of cancer, and other disorders in which increased erythrocyte counts and oxygen levels are desirable.
Synthetic forms of colony stimulating factors are often administered to patients with various forms of cancer who are receiving chemotherapy to revive their WBC counts.
References
Hematopoiesis
Growth factors | Hemopoietic growth factor | Chemistry | 790 |
6,833,771 | https://en.wikipedia.org/wiki/Cilazapril | Cilazapril is an angiotensin-converting enzyme inhibitor (ACE inhibitor) used for the treatment of hypertension and congestive heart failure.
It was patented in 1982 and approved for medical use in 1990.
Chemistry
Of the eight possible stereoisomers, only the all-(S)-form is medically viable.
Brand names
It is branded as Dynorm, Inhibace, Vascace and many other names in various countries. None of these are available in the United States as of May 2010.
References
ACE inhibitors
Carboxylic acids
Enantiopure drugs
Drugs developed by Hoffmann-La Roche
Ethyl esters
Lactams
Prodrugs
Nitrogen heterocycles
Heterocyclic compounds with 2 rings
Carboxylate esters | Cilazapril | Chemistry | 164 |
77,195,423 | https://en.wikipedia.org/wiki/Federal%20Office%20for%20the%20Safety%20of%20Nuclear%20Waste%20Management | The Federal Office for the Safety of Nuclear Waste Management (BASE) is a legally established, independent German federal authority under the jurisdiction of the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV). It began its activities on 1 September 2014. Its provisional headquarters is Berlin. Other offices are located in Salzgitter and Bonn. The president is Christian Kühn.
History
After the nuclear phase-out of the Merkel/Westerwelle government, the governing parties, together with the opposition parties SPD and Greens, decided to pass a new law governing the search for a permanent repository. In May 2013, the four parliamentary groups introduced the Draft Act on the Search for and Selection of a Site for a Final Repository for Heat-Generating Radioactive Waste and on the Amendment of Other Laws (Site Selection Act – StandAG). This draft article act contained, as its Article 3, an Act on the Establishment of a Federal Office for Nuclear Waste Management (BfkEG) with only three paragraphs. In the course of the legislative process, Section 1 of the Act was supplemented and transitional provisions were added with Section 4. The BfkEG came into force on January 1, 2014, meaning that the Federal Office for Nuclear Waste Management was formally founded on that day.
Article 4, number 1 of the Act on the Reorganization of the Organizational Structure in the Area of Final Storage (NeuOrgG) renamed the authority the Federal Office for Nuclear Waste Management Safety (BfE) on July 30, 2016. The reason for the renaming was the intention to distinguish it more clearly from the Bundesgesellschaft für Endlagerung (BGE). At the same time, the BfE was given key tasks from the Federal Office for Radiation Protection in the field of nuclear safety and nuclear waste disposal safety. For this reason, the key tasks of the BfE (now BASE) are carried out at the Salzgitter headquarters, where a large part of the staff is also based.
From its founding in 2014 to 2016, the Federal Office was temporarily headed by Ewold Seeba, who later became the chairman and managing director of the BGZ Society for Interim Storage. On August 1, 2016, Wolfram König was appointed as the new president.
On January 1, 2020, the agency was renamed from Bundesamt für kerntechnische Entsorgungssicherheit (BfE) to Bundesamt für die Sicherheit der nuklearen Entsorgung (BASE).
Wolfram König retired at the end of January 2024. On February 15, 2024, Christian Kühn became President of the Federal Office.
Organization
The Federal Office is under the supervision of the BMUV. It is headed by a president with a vice president as permanent representative. In addition to the presidential area, the BASE is divided into the following departments:
Department Z: Central Services
Department F: Research / International
Department B: Participation
Department A: Supervision
Department G: Approval Procedures
Department N: Nuclear Safety
The Participatory Administration Laboratory reports to the vice president. Its task is to develop innovative working methods in public administration and new participation processes between the state and society. As an innovation laboratory in the German administration, it is comparable to the BWI GmbH.
Tasks
The Federal Office for the Safety of Nuclear Waste Management is the central federal authority for the approval, supervision and regulation in the areas of final and intermediate storage as well as for the handling and transport of radioactive waste. The range of tasks of the BASE can be described in more detail based on its organizational structure.
Nuclear safety
Supervision of final storage facilities for heat-generating radioactive substances and the Asse II mine
Receipt and publication of information according to Section 7 Paragraph 1c of the German Atomic Energy Act (AtG)
Recording and documentation of all reportable events in nuclear facilities (Federal Incident Reporting Office)
Nuclear waste disposal safety
Approval of the transport of nuclear fuels in accordance with Section 4 AtG (so-called Castor transports) and large sources in accordance with Section 186 of the German Radiation Protection Act, as well as their withdrawal or revocation
State custody of nuclear fuels within the meaning of Section 5 AtG
Approval of the storage of nuclear fuels outside of state custody (so-called intermediate storage) in accordance with Section 6 AtG, as well as their withdrawal or revocation
Type approval of nuclear flasks of type C, B(U), B(M) and packages for fissile materials (CF, B(U)F, B(M)F, AF and IF)
Recognition of foreign type approvals of nuclear flasks
Site selection procedure
Determining exploration programs and test criteria in accordance with the StandAG
Examining the proposals of the project sponsor in accordance with Section 14 Paragraph 2, Section 16 Paragraph 3 and Section 18 Paragraph 3 StandAG
Supervision of the implementation of the site selection process according to Section 19 Paragraphs 1 to 4 StandAG
Site security according to Section 21 StandAG
Responsible for public participation in the site selection process
Approval and supervision of repositories
Plan approval and approval of repositories for high-level radioactive waste (Section 9b AtG)
Granting of mining permits and other required mining permits and approvals in approval procedures pursuant to Section 9b AtG for the construction, operation and decommissioning of federal facilities for the safekeeping and final storage pursuant to Section 9a Paragraph 3 AtG in consultation with the competent mining authority of the respective state
Mining supervision according to Sections 69 to 74 of the Federal Mining Act on federal facilities for the safekeeping and final storage according to Section 9a Paragraph 3 AtG as well as the
Issuing of water law permits or approvals in approval procedures according to Section 9b AtG for federal facilities for the safekeeping and final storage according to Section 9a Paragraph 3 AtG in consultation with the responsible water authority.
In addition, the BASE provides the BMUV with technical and scientific support within the scope of its responsibilities (Section 2 Paragraph 2 BfkEG) and in this respect carries out federal tasks which it is commissioned to carry out by the BMUV or, with its consent, by the supreme federal authority responsible for the subject matter (Section 2 Paragraph 3 BfkEG). Finally, the BASE is also responsible for sufficient research activities within the scope of its responsibilities (Section 2 Paragraph 4 BfkEG).
Management
Since February 15, 2024, Christian Kühn has headed the Federal Office. His office is classified in salary group B 8 of the Federal Salary Scale B. He has the official title President.
Criticism
Andreas Troge, former president of the German Federal Environment Agency, criticized the establishment of the agency with the previously envisaged powers in 2014 as an unnecessary duplicate organization alongside the Federal Office for Radiation Protection.
External links
BASE official site
BASE information platform for the search for a final repository
References
Charlottenburg
Waste management companies of Germany
Nuclear energy policy in Germany
2014 establishments in Germany
Radioactive waste | Federal Office for the Safety of Nuclear Waste Management | Chemistry,Technology | 1,415 |
32,555,796 | https://en.wikipedia.org/wiki/COPASI | COPASI (COmplex PAthway SImulator) is an open-source software application for creating and solving mathematical models of biological processes such as metabolic networks, cell-signaling pathways, regulatory networks, infectious diseases, and many others.
History
COPASI is based on the Gepasi simulation software that was developed in the early 1990s by Pedro Mendes. The initial development of COPASI was funded by the Virginia Bioinformatics Institute, and the Klaus Tschira Foundation. Current development efforts are supported by grants from the National Institutes of Health, the BBSRC, and the German Ministry of Education.
Development team
COPASI is the result of an international collaboration between the University of Manchester (UK), the University of Heidelberg (Germany), and the Virginia Bioinformatics Institute (USA). The project principal investigators are Pedro Mendes and Ursula Kummer. The chief software architects are Stefan Hoops and Sven Sahle.
Features
COPASI includes features to define models of biological processes, simulate and analyze these models, generate analysis reports, and import/export models in SBML format.
Model definition: Models are defined as chemical reactions between molecular species. The dynamics of the model are determined by the rate laws associated with individual reactions. Models can also include compartments, events, and other global variables that help specify the dynamics of the system.
Tasks: Tasks are different types of analysis that can be performed on a model. They include steady-state analysis, stoichiometric analysis, time course simulation using deterministic and stochastic simulation algorithms, metabolic control analysis, computation of Lyapunov exponents, time scale separation, parameter scans, optimization, and parameter estimation.
Importing and exporting: COPASI can read models in SBML format as well as in Gepasi format. COPASI can write models in several different formats, including SBML, source code in the C programming language, Berkeley Madonna files, and XPPAUT files.
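For illustration, the features above can also be scripted through basico, COPASI's community-maintained Python bindings. The function names and defaults below follow basico's documented conventions but should be read as a hedged sketch rather than authoritative usage:

```python
# pip install copasi-basico
from basico import *

new_model(name='Simple decay model')           # create an empty COPASI model
add_reaction('R1', 'A -> B')                   # reaction with default kinetics
set_species('A', initial_concentration=10.0)   # set the initial condition

result = run_time_course(duration=50)          # deterministic time course task
print(result.head())                           # concentrations over time

save_model('decay.xml', type='sbml')           # export the model as SBML
```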
See also
List of systems biology modeling software
References
External links
COPASI home page
Mendes group,
Department Modeling of Biological Processes, Heidelberg University
Systems biology
Ordinary differential equations
Department of Computer Science, University of Manchester
Software using the Artistic license | COPASI | Biology | 436 |
32,693,708 | https://en.wikipedia.org/wiki/Communications-based%20train%20control | Communications-based train control (CBTC) is a railway signaling system that uses telecommunications between the train and track equipment for traffic management and infrastructure control. CBTC allows a train's position to be known more accurately than with traditional signaling systems. This can make railway traffic management safer and more efficient. Rapid transit systems (and other railway systems) are able to reduce headways while maintaining or even improving safety.
A CBTC system is a "continuous, automatic train control system utilizing high-resolution train location determination, independent from track circuits; continuous, high-capacity, bidirectional train-to-wayside data communications; and trainborne and wayside processors capable of implementing automatic train protection (ATP) functions, as well as optional automatic train operation (ATO) and automatic train supervision (ATS) functions," as defined in the IEEE 1474 standard.
Background and origin
CBTC is a signalling standard defined by the IEEE 1474 standard. The original version was introduced in 1999 and updated in 2004. The aim was to create consistency and standardisation between digital railway signalling systems that allow for an increase in train capacity through what the standard defines as high-resolution train location determination. The standard therefore does not require the use of moving block railway signalling, but in practice this is the most common arrangement.
Moving block
Traditional signalling systems detect trains in discrete sections of the track called 'blocks', each protected by signals that prevent a train from entering an occupied block. Since every block is a fixed section of track, these systems are referred to as fixed block systems.
In a moving block CBTC system the protected section for each train is a "block" that moves with and trails behind it, and provides continuous communication of the train's exact position via radio, inductive loop, etc.
CBTC has its origins in the loop-based systems developed by Alcatel SEL (now Thales) for the Bombardier Automated Rapid Transit (ART) systems in Canada during the mid-1980s. These systems, which were also referred to as transmission-based train control (TBTC), made use of inductive loop transmission techniques for track-to-train communication, introducing an alternative to track circuit based communication. This technology, operating in the 30–60 kHz frequency range to communicate between trains and wayside equipment, was widely adopted by metro operators in spite of some electromagnetic compatibility (EMC) issues, as well as other installation and maintenance concerns (see SelTrac for further information regarding transmission-based train control).
As radio communication technology matured, Bombardier opened the world's first radio-based CBTC system at San Francisco airport's automated people mover (APM) in February 2003. A few months later, in June 2003, Alstom introduced the railway application of its radio technology on the Singapore North East Line.
As with any new application of technology, some problems arose at the beginning, mainly due to compatibility and interoperability issues. However, there have been significant improvements since then, and the reliability of radio-based communication systems has grown considerably.
Moreover, not all systems using radio communication technology are considered CBTC systems. For clarity, and in keeping with state-of-the-art solutions to operators' requirements, this article covers only the latest CBTC solutions based on the moving block principle (either true moving block or virtual block, and thus not dependent on track-based detection of the trains) that make use of radio communications.
Main features
CBTC and moving block
CBTC systems are modern railway signaling systems mainly used on urban railway lines (either light or heavy) and APMs, although they could also be deployed on commuter lines. For main lines, a similar system might be the European Rail Traffic Management System (ERTMS) Level 3 (not yet fully defined).
In the modern CBTC systems the trains continuously calculate and communicate their status via radio to the wayside equipment distributed along the line. This status includes, among other parameters, the exact position, speed, travel direction and braking distance.
This information allows the calculation of the area potentially occupied by the train on the track. It also enables the wayside equipment to define the points on the line that must never be passed by other trains on the same track. These points are communicated so that trains automatically and continuously adjust their speed while maintaining the safety and comfort (jerk) requirements. The trains thus continuously receive information regarding the distance to the preceding train and are able to adjust their safety distance accordingly.
From the signalling system perspective, the first figure shows the total occupancy of the leading train by including the whole blocks which the train is located on. This is due to the fact that it is impossible for the system to know exactly where the train actually is within these blocks. Therefore, the fixed block system only allows the following train to move up to the last unoccupied block's border.
In a moving block system, as shown in the second figure, the train position and its braking curve are continuously calculated by the train and communicated via radio to the wayside equipment. Thus, the wayside equipment is able to establish protected areas, each one called a Limit of Movement Authority (LMA), up to the nearest obstacle (in the figure, the tail of the train in front). A Movement Authority (MA) is the permission for a train to move to a specific location within the constraints of the infrastructure and with supervision of speed.
End of Authority is the location to which the train is permitted to proceed and where target speed is equal to zero. End of Movement is the location to which the train is permitted to proceed according to an MA. When transmitting an MA, it is the end of the last section given in the MA.
It is important to mention that the occupancy calculated in these systems must include a safety margin for location uncertainty (in yellow in the figure) added to the length of the train. Together they form what is usually called the 'footprint'. This safety margin depends on the accuracy of the odometry system in the train.
CBTC systems based on moving block allow a reduction in the safety distance between two consecutive trains. This distance varies according to the continuous updates of train location and speed while maintaining the safety requirements, resulting in a reduced headway between consecutive trains and an increased transport capacity.
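A back-of-the-envelope sketch of this separation (illustrative figures only; real systems use guaranteed worst-case braking models and supplier-specific margins): the follower may advance until the gap to the leader's tail equals its braking distance plus the location-uncertainty margin.

```python
def safe_separation(speed_mps, brake_mps2, uncertainty_m, extra_margin_m=10.0):
    """Minimum gap from the follower's nose to the leader's tail (meters)."""
    braking_distance = speed_mps ** 2 / (2.0 * brake_mps2)  # v^2 / (2b)
    return braking_distance + uncertainty_m + extra_margin_m

# Example: 80 km/h (~22.2 m/s), guaranteed emergency brake rate 1.0 m/s^2,
# 5 m of odometry uncertainty -> roughly 261 m of required separation.
print(round(safe_separation(22.2, 1.0, 5.0), 1))
```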
Grades of automation
Modern CBTC systems allow different levels of automation, or grades of automation (GoA), as defined and classified in IEC 62290-1. In fact, CBTC is not a synonym for "driverless" or "automated trains", although it is considered a basic enabling technology for this purpose.
There are four grades of automation available:
GoA 0 - On-sight, with no automation
GoA 1 - Manual, with a driver controlling all train operations.
GoA 2 - Semi-automatic Operation (STO), starting and stopping are automated, but a driver who sits in the cab operates the doors and drives in emergencies
GoA 3 - Driverless Train Operation (DTO), starting and stopping are automated, but a crew member operates the doors from within the train
GoA 4 - Unattended Train Operation (UTO), starting, stopping and doors are all automated, with no required crew member on board
Main applications
CBTC systems allow optimal use of the railway infrastructure while achieving maximum capacity and minimum headway between operating trains and maintaining the safety requirements. These systems are suitable for new, highly demanding urban lines, but can also be overlaid on existing lines in order to improve their performance.
Of course, in the case of upgrading existing lines the design, installation, test and commissioning stages are much more critical. This is mainly due to the challenge of deploying the overlying system without disrupting the revenue service.
Main benefits
The evolution of the technology and the experience gained in operation over the last 30 years mean that modern CBTC systems are more reliable and less prone to failure than older train control systems. CBTC systems normally have less wayside equipment, and their diagnostic and monitoring tools have been improved, which makes them easier to implement and, more importantly, easier to maintain.
CBTC technology is evolving, making use of the latest techniques and components to offer more compact systems and simpler architectures. For instance, with the advent of modern electronics it has been possible to build in redundancy so that single failures do not adversely impact operational availability.
Moreover, these systems offer complete flexibility in terms of operational schedules or timetables, enabling urban rail operators to respond to the specific traffic demand more swiftly and efficiently and to solve traffic congestion problems. In fact, automatic operation systems have the potential to significantly reduce the headway and improve the traffic capacity compared to manual driving systems.
Finally, it is important to mention that CBTC systems have proven to be more energy efficient than traditional manually driven systems. The use of new functionalities, such as automatic driving strategies or a better matching of the transport offer to the actual demand, allows significant energy savings by reducing power consumption.
Risks
The primary risk of an electronic train control system is that if the communications link between any of the trains is disrupted then all or part of the system might have to enter a failsafe state until the problem is remedied. Depending on the severity of the communication loss, this state can range from vehicles temporarily reducing speed, coming to a halt or operating in a degraded mode until communications are re-established. If communication outage is permanent some sort of contingency operation must be implemented which may consist of manual operation using absolute block or, in the worst case, the substitution of an alternative form of transportation.
As a result, high availability of CBTC systems is crucial for proper operation, especially if such systems are used to increase transport capacity and reduce headway. System redundancy and recovery mechanisms must then be thoroughly checked to achieve a high robustness in operation.
With the increased availability of the CBTC system, there is also a need for extensive training and periodical refresh of system operators on the recovery procedures. In fact, one of the major system hazards in CBTC systems is the probability of human error and improper application of recovery procedures if the system becomes unavailable.
Communications failures can result from equipment malfunction, electromagnetic interference, weak signal strength or saturation of the communications medium. In this case, an interruption can result in a service brake or emergency brake application as real time situational awareness is a critical safety requirement for CBTC and if these interruptions are frequent enough it could seriously impact service. This is the reason why, historically, CBTC systems first implemented radio communication systems in 2003, when the required technology was mature enough for critical applications.
In systems with poor line of sight or spectrum/bandwidth limitations a larger than anticipated number of transponders may be required to enhance the service. This is usually more of an issue with applying CBTC to existing transit systems in tunnels that were not designed from the outset to support it. An alternate method to improve system availability in tunnels is the use of leaky feeder cable that, while having higher initial costs (material + installation) achieves a more reliable radio link.
With the emergence of services over open ISM radio bands (i.e. 2.4 GHz and 5.8 GHz) and their potential to disrupt critical CBTC services, there is increasing pressure in the international community (ref. report 676 of the UITP, Reservation of a Frequency Spectrum for Critical Safety Applications dedicated to Urban Rail Systems) to reserve a frequency band specifically for radio-based urban rail systems. Such a decision would help standardize CBTC systems across the market (a growing demand from most operators) and ensure availability for those critical systems.
As a CBTC system is required to have high availability and particularly, allow for a graceful degradation, a secondary method of signaling might be provided to ensure some level of non-degraded service upon partial or complete CBTC unavailability. This is particularly relevant for brownfield implementations (lines with an already existing signalling system) where the infrastructure design cannot be controlled and coexistence with legacy systems is required, at least, temporarily.
For example, the BMT Canarsie Line in New York City was outfitted with a backup automatic block signaling system capable of supporting 12 trains per hour (tph), compared with the 26 tph of the CBTC system. Although this is a rather common architecture for resignalling projects, it can negate some of the cost savings of CBTC if applied to new lines. This is still a key point in the CBTC development (and is still being discussed), since some providers and operators argue that a fully redundant architecture of the CBTC system may however achieve high availability values by itself.
In principle, CBTC systems may be designed with centralized supervision systems in order to improve maintainability and reduce installation costs. If so, there is an increased risk of a single point of failure that could disrupt service over an entire system or line. Fixed block systems usually work with distributed logic that is normally more resistant to such outages. Therefore, a careful analysis of the benefits and risks of a given CBTC architecture (centralized vs. distributed) must be done during system design.
When CBTC is applied to systems that previously ran under complete human control with operators working on sight, it may actually result in a reduction in capacity (albeit with an increase in safety). This is because CBTC operates with less positional certainty than human sight, and also with greater margins for error, as worst-case train parameters are applied in the design (e.g. guaranteed emergency brake rate vs. nominal brake rate). For instance, the introduction of CBTC in Philadelphia's Center City trolley tunnel initially resulted in a marked increase in travel time and a corresponding decrease in capacity when compared with unprotected manual driving. This was the trade-off for finally eradicating the vehicle collisions that on-sight driving cannot avoid, and it illustrates the usual tension between operations and safety.
Architecture
The typical architecture of a modern CBTC system comprises the following main subsystems:
Wayside equipment, which includes the interlocking and the subsystems controlling every zone in the line or network (typically containing the wayside ATP and ATO functionalities). Depending on the supplier, the architecture may be centralized or distributed. The control of the system is performed from a central command ATS, though local control subsystems may also be included as a fallback.
CBTC onboard equipment, including ATP and ATO subsystems in the vehicles.
Train to wayside communication subsystem, currently based on radio links.
Thus, although a CBTC architecture always depends on the supplier and its technical approach, the following logical components may generally be found in a typical CBTC architecture:
Onboard ATP system. This subsystem is in charge of the continuous control of the train speed according to the safety profile, applying the brake if necessary. It is also in charge of communication with the wayside ATP subsystem in order to exchange the information needed for safe operation (sending speed and braking distance, and receiving the limit of movement authority); a simplified sketch of this speed supervision appears after this list.
Onboard ATO system. It is responsible for the automatic control of the traction and braking effort in order to keep the train under the threshold established by the ATP subsystem. Its main task is either to facilitate the driver or attendant functions, or even to operate the train in a fully automatic mode while maintaining the traffic regulation targets and passenger comfort. It also allows the selection of different automatic driving strategies to adapt the runtime or even reduce the power consumption.
Wayside ATP system. This subsystem undertakes the management of all the communications with the trains in its area. Additionally, it calculates the limits of movement authority that every train must respect while operating in the mentioned area. This task is therefore critical for the operation safety.
Wayside ATO system. It is in charge of controlling the destination and regulation targets of every train. The wayside ATO functionality provides all the trains in the system with their destination as well as with other data such as the dwell time in the stations. Additionally, it may also perform auxiliary and non-safety related tasks including for instance alarm/event communication and management, or handling skip/hold station commands.
Communication system. CBTC systems integrate a digital networked radio system, by means of antennas or leaky feeder cable, for the bi-directional communication between the track equipment and the trains. The 2.4 GHz band is commonly used in these systems (the same band as Wi-Fi), though other frequencies such as 900 MHz (US), 5.8 GHz or other licensed bands may be used as well.
ATS system. The ATS system is commonly integrated within most of the CBTC solutions. Its main task is to act as the interface between the operator and the system, managing the traffic according to the specific regulation criteria. Other tasks may include the event and alarm management as well as acting as the interface with external systems.
Interlocking system. When needed as an independent subsystem (for instance as a fallback system), it will be in charge of the vital control of the trackside objects such as switches or signals, as well as other related functionality. In the case of simpler networks or lines, the functionality of the interlocking may be integrated into the wayside ATP system.
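As a rough illustration of the onboard ATP function described in this list, the speed ceiling at any point can be derived from the braking distance available up to the limit of movement authority, using the worst-case (guaranteed) emergency brake rate. The sketch below is a deliberately simplified model with assumed parameter values; it is not any supplier's actual algorithm.

```python
import math

# Simplified sketch of onboard ATP speed supervision.
# All parameters are illustrative assumptions, not values from a real system.
GUARANTEED_BRAKE_RATE = 1.0  # m/s^2, worst-case emergency brake rate

def ceiling_speed(position_m: float, movement_authority_m: float) -> float:
    """Highest speed (m/s) from which the train can still stop before the
    limit of movement authority, from v^2 = 2 * a * d."""
    distance = max(movement_authority_m - position_m, 0.0)
    return math.sqrt(2.0 * GUARANTEED_BRAKE_RATE * distance)

def must_brake(position_m: float, speed_ms: float, authority_m: float) -> bool:
    """True if ATP would intervene with an emergency brake application."""
    return speed_ms > ceiling_speed(position_m, authority_m)

# Example: train at 500 m with movement authority ending at 1,300 m.
print(ceiling_speed(500.0, 1300.0))      # 40.0 m/s ceiling
print(must_brake(500.0, 35.0, 1300.0))   # False: within the braking curve
```

A real ATP profile also accounts for gradients, traction cut-off and brake build-up delays, and measurement uncertainty, all of which tighten the curve further.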
Projects
CBTC technology has been (and is being) successfully implemented for a variety of applications (as of mid-2011). They range from implementations with short track, limited numbers of vehicles and few operating modes (such as the airport APMs in San Francisco or Washington), to complex overlays on existing railway networks carrying more than a million passengers each day and with more than 100 trains (such as lines 1 and 6 in Madrid Metro, line 3 in Shenzhen Metro, some lines in Paris Metro, New York City Subway and Beijing Subway, or the Sub-Surface network in London Underground).
The table below summarizes and references the main radio-based CBTC systems deployed around the world, as well as ongoing projects under development. The table also distinguishes between implementations performed over existing and operative systems (brownfield) and those undertaken on completely new lines (greenfield).
List
Notes and references
Notes
References
Further reading
Argenia Railway Technologies SafeNet CBTC
Thales SelTrac(R) CBTC
Metro Rail Communication Network Simulation
Train protection systems
Telematics
Railway signalling block systems | Communications-based train control | Technology | 3,773 |
77,608,074 | https://en.wikipedia.org/wiki/Chaulmoogric%20acid | Chaulmoogric acid is a fatty acid found in chaulmoogra oil, the oil from the seeds of Hydnocarpus wightianus. It is an unusual fatty acid in that it has a cyclopentene ring at its terminus instead of being entirely linear like most fatty acids.
It is a white crystalline solid with a melting point of 68.5 °C. It is soluble in ether, chloroform, and ethyl acetate.
In the early 20th century, it was investigated as a possible treatment for leprosy due to the use in traditional medicine of chaulmoogra oil for leprosy.
See also
Hydnocarpic acid
References
Fatty acids
Cyclopentenes | Chaulmoogric acid | Chemistry | 149 |
70,978,956 | https://en.wikipedia.org/wiki/Pi1%20Octantis | {{DISPLAYTITLE:Pi1 Octantis}}
Pi1 Octantis (Pi1 Oct), Latinized from π1 Octantis, is a solitary star in the southern circumpolar constellation Octans. It is faintly visible to the naked eye with an apparent magnitude of 5.64, and is estimated to be 387 light-years away. It is receding from the Sun, as indicated by its heliocentric radial velocity.
Pi1 Oct has a stellar classification of G8/K0 III, intermediate between a G8 and a K0 giant star. It has 2.74 times the mass of the Sun, and its effective temperature gives it a yellow hue. Its enlarged radius yields a luminosity 76 times that of the Sun. Pi1 Oct has a metallicity around the solar level and spins with a low projected rotational velocity.
References
Octans
K-type giants
G-type giants
130650
073540
5525
PD-82 629
Octantis, 21 | Pi1 Octantis | Astronomy | 206 |
13,864,129 | https://en.wikipedia.org/wiki/Cowdry%20bodies | Cowdry bodies are eosinophilic or basophilic nuclear inclusions composed of nucleic acid and protein seen in cells infected with Herpes simplex virus, Varicella-zoster virus, and Cytomegalovirus. They are named after Edmund Cowdry.
There are two types of intranuclear Cowdry bodies:
Type A (as seen in herpes simplex and VZV)
Type B (as seen in infection with poliovirus and CMV), though this type is now regarded as antiquated and perhaps illusory.
Light microscopy is used for detection of Cowdry bodies.
References
Histopathology | Cowdry bodies | Chemistry,Biology | 139 |
1,850,051 | https://en.wikipedia.org/wiki/Minefield%20%28Star%20Trek%3A%20Enterprise%29 | "Minefield" is the twenty-ninth episode (production #203) of the science fiction television series Star Trek: Enterprise, and the third episode of the second season. In this episode, which aired in October 2002, the starship Enterprise is rocked by an explosion, and the crew tries to deal with the situation.
Plot
Captain Archer is in the captain's private mess, trying, unsuccessfully, to get to know Lieutenant Reed better. Meanwhile, Enterprise nears an uncharted and seemingly uninhabited planet for closer observation. Its proximity triggers a cloaked mine, heavily damaging the ship and flooding Sickbay with injured crew members. Soon, another cloaked mine is detected as it attaches itself to the hull, but it does not immediately detonate for some reason. With the core already damaged, it is feared that a further detonation will totally disable the vessel. Reed then goes EV to try to disarm it. As a backup plan, Archer orders Commander Tucker to prepare to detach and jettison the affected section of hull plating.
Initially Reed's efforts seem to be working, but an alien vessel decloaks, and fires warning shots. Ensign Mayweather steers the ship out of the minefield. During the maneuvers, a jolt accidentally activates another magnetic grappling arm that impales Reed's leg before attaching itself to the spaceship's hull, thus pinning him down. Archer then dons an EVA suit and attempts to disarm the mine under Reed's direction. While disarming the mine, Archer and Reed discuss command style, with Reed advocating a more rigid approach where members of the crew do not socialize with higher-ranking officers.
Enterprise then makes first contact with the Romulan Star Empire when two Warbirds decloak and demand that they jettison the mine with Reed attached. Knowing that any attempt to cut the arm would set off the mine, Reed becomes insistent on sacrificing himself to save Enterprise. Archer returns to the ship and requests two shuttle hatches from a puzzled Commander Tucker, also ordering him to detach the hull plate as planned. As the plates and the attached mine float off, he severs the spike holding Reed. This arms the mine, but Reed and Archer are also able to shield themselves from the resulting explosion. Enterprise then collects the crewmen before warping away from the Romulans.
Production
This is the first episode credited to writer John Shiban. He joined Enterprise having previously worked on The X-Files for seven years. Shiban said it took some adjustment to write for the optimistic future of Star Trek, but that similar to his past work, "space is a scary place, there are creepy aliens and we don't know if we're going to succeed."
Shiban was in part inspired by the television series Danger UXB, which was set during World War II and was about a team dealing with unexploded bombs. In developing the episode, the writers decided Malcolm Reed would be the best character to pin down, both literally and figuratively. Shiban had planned to use an unknown alien species, but the Romulans were suggested and seemed like a perfect fit for the story, as the aliens needed to be unhelpful and unwilling to communicate. Captain Kirk in the original series was credited as the first human to look a Romulan eye-to-eye, so Rick Berman was careful not to contradict that, and the Romulans remained unseen throughout the episode. Nonetheless, the introduction of Romulan cloaking technology 100 years before James T. Kirk and Federation outposts would encounter it (something Mr. Spock only theorized was even possible) is considered a clear canon violation.
They also thought it was important to recognize that the ship had been badly damaged, and not simply have the ship repaired and back to normal the next week. This led to the idea for the next episode "Dead Stop", where they must seek out help to repair the ship.
The episode filmed for a week and a half, from July 19 until July 31, 2002. Visual effects producer Dan Curry came in for two days during the next episode, to direct additional exterior ship shots with Dominic Keating.
Filming was complicated by the bulky environmental suits, and the simulation of zero-gravity. Dominic Keating said: "The space suits are the mother of all costumes. They weigh a ton. They are thick rubber. You sweat. I lost five pounds that week." He also said Bakula seriously put his back out. Despite the difficulties he was pleased with how the episode turned out: "it meshed together really well. It delivered some good character moments and good action too."
Technology
"Minefield" is noted for its presentation of Star Trek'''s cloaking technology to a space mine.
Reception
"Minefield" was first broadcast October 2, 2002, on UPN. According to Nielsen a rating of 3.5 and a share of six, meaning 3.5 percent of households in America with TV sets saw the episode, and 6% of households watching television at the time were watching the show. This translated into 5.2 million viewers, an increase over the season premiere. UPN was struggling with low ratings compared to the same time the previous year, and UPN executives argued that the drop was because Buffy and Star Trek had enjoyed unusually strong debuts.
Michelle Erica Green at TrekToday praised Dominic Keating for giving "a truly excellent performance, subtle and moving" while also noting that Reed's "big selfless dramatic gesture was undoubtedly written to create drama in an otherwise formulaic episode that takes no risks," comparing the "ticking-time-bomb plot" to other "genre stories" like The Abyss and The Running Man. Green criticized the writers' failure to "explain space-peeing, like [in] Apollo 13" after risking "TMI syndrome by bringing up bodily functions usually neglected on Trek". She points out a possible continuity issue in that it "seems a bit early" in the "timeline of stealth development" for sophisticated Romulan cloaking technology, and grateful that they "somehow weren't interested in taking over pre-Federation Earth." Green concluded that, "Visually Minefield holds attention because the external images are superb -- the Romulan ships rising over Enterprise, the hull lighting, the design of the mine itself. It's also well-paced even if we could all guess the precise moment when the mine would re-arm itself..." comparing the outcome of the story to "Chekhov (the writer, not the Star Trek character) [who] once said that in any story, if there's a gun on the wall at the beginning, it should go off by the end."
In his 2022 rewatch, Keith DeCandido of Tor.com gave it eight out of ten.
In 2021, The Digital Fix said this episode was one of the highlights of season two, calling it a "tense episode" and noting its references to the Romulan Star Empire.
Home media
The first home media release of "Minefield" was part of the season two DVD box set, released in the United States on July 26, 2005. A release on Blu-ray Disc for season two occurred on August 20, 2013.
References
External links
Star Trek: Enterprise season 2 episodes
2002 American television episodes
Fiction about bomb disposal | Minefield (Star Trek: Enterprise) | Engineering | 1,511 |
1,858,612 | https://en.wikipedia.org/wiki/Critical%20micelle%20concentration | In colloidal and surface chemistry, the critical micelle concentration (CMC) is defined as the concentration of surfactants above which micelles form and all additional surfactants added to the system will form micelles.
The CMC is an important characteristic of a surfactant. Before reaching the CMC, the surface tension changes strongly with the concentration of the surfactant. After reaching the CMC, the surface tension remains relatively constant or changes with a lower slope. The value of the CMC for a given dispersant in a given medium depends on temperature, pressure, and (sometimes strongly) on the presence and concentration of other surface active substances and electrolytes. Micelles only form above critical micelle temperature.
For example, the value of CMC for sodium dodecyl sulfate in water (without other additives or salts) at 25 °C, atmospheric pressure, is 8x10−3 mol/L.
Description
Upon introducing surfactants (or any surface active materials) into a system, they will initially partition into the interface, reducing the system free energy by:
lowering the energy of the interface (calculated as area times surface tension), and
removing the hydrophobic parts of the surfactant from contact with water.
Subsequently, when the surface coverage by the surfactants increases, the surface free energy (surface tension) decreases and the surfactants start aggregating into micelles, thus again decreasing the system's free energy by decreasing the contact area of hydrophobic parts of the surfactant with water. Upon reaching CMC, any further addition of surfactants will just increase the number of micelles (in the ideal case).
According to one well-known definition, the CMC is the total concentration of surfactants under the conditions:
if C_t = CMC, then d³φ/dC_t³ = 0
φ = A·C_s + B·C_m; i.e., in words, φ is the measured solution property, C_s = [single surfactant ion], C_m = [micelles], and A and B are proportionality constants
C_t = C_s + N·C_m; i.e., N represents the number of detergent ions per micelle
Measurement
The CMC generally depends on the method of measuring the samples, since A and B depend on the properties of the solution such as conductance, photochemical characteristics, or surface tension. When the degree of aggregation is monodisperse, then the CMC is not related to the method of measurement. On the other hand, when the degree of aggregation is polydisperse, then CMC is related to both the method of measurement and the dispersion.
The common procedure to determine the CMC from experimental data is to look for the intersection (inflection point) of two straight lines traced through plots of the measured property versus the surfactant concentration. This visual data analysis method is highly subjective and can lead to very different CMC values depending on the type of representation, the quality of the data and the chosen interval around the CMC. A preferred method is the fit of the experimental data with a model of the measured property. Fit functions for properties such as electrical conductivity, surface tension, NMR chemical shifts, absorption, self-diffusion coefficients, fluorescence intensity and mean translational diffusion coefficient of fluorescent dyes in surfactant solutions have been presented. These fit functions are based on a model for the concentrations of monomeric and micellised surfactants in solution, which establishes a well-defined analytical definition of the CMC, independent from the technique.
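As a sketch of the line-intersection procedure described above, the snippet below fits straight lines to the low- and high-concentration branches of a measured property (here, hypothetical conductivity data) and takes the CMC as their intersection. The data and the chosen split point are assumptions for illustration; a real analysis would preferably use the model-based fit functions mentioned above.

```python
import numpy as np

# Hypothetical conductivity-vs-concentration data (arbitrary units).
conc = np.array([1, 2, 3, 4, 5, 6, 8, 10, 12, 14], dtype=float)   # mM
cond = np.array([60, 120, 180, 240, 300, 330, 380, 430, 480, 530],
                dtype=float)

# Fit one straight line below and one above an assumed split near the kink.
split = 5
m1, b1 = np.polyfit(conc[:split], cond[:split], 1)
m2, b2 = np.polyfit(conc[split:], cond[split:], 1)

# The CMC estimate is the intersection of the two fitted lines.
cmc = (b2 - b1) / (m1 - m2)
print(f"Estimated CMC: {cmc:.2f} mM")   # about 5.1 mM for this toy data
```

The subjectivity noted above enters through the choice of split point and fitting interval, which is precisely what the model-based fit functions avoid.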
The CMC is the concentration of surfactants in the bulk at which micelles start forming. The word bulk is important because surfactants partition between the bulk and interface and CMC is independent of interface and is therefore a characteristic of the surfactant molecule. In most situations, such as surface tension measurements or conductivity measurements, the amount of surfactant at the interface is negligible compared to that in the bulk and CMC can be approximated by the total concentration. In practice, CMC data is usually collected using laboratory instruments which allow the process to be partially automated, for instance by using specialised tensiometers.
Practical considerations
When the interfacial areas are large, the amount of surfactant at the interface cannot be neglected. If, for example, air bubbles are introduced into a solution of a surfactant above the CMC, these bubbles, as they rise to the surface, remove surfactants from the bulk to the top of the solution, creating a foam column and thus reducing the concentration in the bulk to below the CMC. This is one of the easiest methods to remove surfactants from effluents (see foam flotation). Thus foams with sufficient interfacial area are devoid of micelles. Similar reasoning holds for emulsions.
The other situation arises in detergents. One initially starts off with concentrations greater than CMC in water and on adding fabric with large interfacial area, the surfactant concentration drops below CMC and no micelles remain at equilibrium. Therefore, the solubilization plays a minor role in detergents. Removal of oily soil occurs by modification of the contact angles and release of oil in the form of emulsion.
In the petroleum industry, the CMC is considered before injecting surfactant into a reservoir for enhanced oil recovery (EOR) applications. Below the CMC, the interfacial tension between the oil and water phases is no longer effectively reduced. If the surfactant concentration is kept a little above the CMC, the additional amount compensates for dilution by the brine already present in the reservoir. The aim is for the surfactant to work at the lowest possible interfacial tension (IFT).
See also
Detergent
Micelle
Surface tension
Surfactant
References
External links
Theory of CMC measurement
CMCs and molecular weights of several detergents on OpenWetWare
Colloidal chemistry | Critical micelle concentration | Chemistry | 1,222 |
24,044,309 | https://en.wikipedia.org/wiki/Sleep%20induction | Sleep induction is the deliberate effort to bring on sleep by various techniques or medicinal means. It is practiced to lengthen periods of sleep, increase the effectiveness of sleep, and reduce or prevent insomnia.
Darkness and quiet
Dim or dark surroundings with a peaceful, quiet sound level are conducive to sleep. Retiring to a bedroom, drawing the curtains to block out daylight and closing the door are common methods of achieving this. When this is not possible, such as on an airplane, other methods may be used, such as masks and earplugs for sleeping which airlines commonly issue to passengers for this purpose.
Activities
Guided imagery
To relax and encourage sleep, a meditation in the form of guided imagery may be used. The stereotypical method is by counting sheep, imagining sheep jumping over a fence, while counting them.
In most depictions of the activity, the person envisions an endless series of identical white sheep jumping over a fence, while counting the number that do so. The idea, presumably, is to induce boredom while occupying the mind with something simple, repetitive, and rhythmic, all of which are known to help humans sleep. It may also simulate REM sleep, tiring people's eyes.
According to a BBC experiment conducted by researchers at Oxford University, counting sheep is actually an inferior means of inducing sleep.
Hot bath
The daily sleep/wake cycle is linked to the daily body temperature cycle. For this reason, a hot bath, which raises the core body temperature, has been found to improve the duration and quality of sleep. A 30-minute soak in a bath hot enough to raise the core body temperature by one degree is suitable for this purpose.
A systematic review and meta-analysis of 17 different studies found that taking a warm bath or shower 1–2 hours before bedtime for as little as 10 minutes shortens the sleep onset time and improves sleep efficiency and subjective sleep quality and increases the amount of deep sleep.
Sex
Sexual intercourse, and specifically orgasm, may have an effect on the ability to fall asleep for some people. The period after orgasm (known as a refractory period) is often a time of increased relaxation, attributed to the release of the neurohormones oxytocin and prolactin.
Yawning
Yawning is commonly associated with imminent sleep, but it seems to be a measure to maintain arousal when sleepy and so it actually prevents sleep rather than inducing it. Yawning may be a cue that the body is tired and ready for sleep, but deliberate attempts to yawn may have the opposite effect of sleep induction.
Sleeping pills
Hypnotics, sometimes referred to as sleeping pills, may be prescribed by a physician, but their long-term efficacy is poor and they have numerous adverse effects including daytime drowsiness, accidents, memory disorders and withdrawal symptoms. If they are to be taken, the preferred choices are benzodiazepines with short-lasting effects such as temazepam or the newer Z-medicines such as zopiclone. Alternatively, in isolated cases sedatives such as barbiturates may be prescribed.
Nonprescription medications
A number of nonprescription medications have been shown to be effective in promoting sleep. The amino acid tryptophan and its related compounds 5-HTP and melatonin are in common use, with the prescription medication ramelteon operating on the same biochemical pathway. The herb valerian can also be effective in gently inducing a relaxed state conducive to sleep.
Food and drink
An urban legend states that certain foods such as turkey and bananas are rich in tryptophan and thus assist sleep, although this has not been confirmed by research.
Alcohol
An alcoholic drink or nightcap is a long-standing folk method for inducing sleep, as alcohol is a sedative. However, when the blood alcohol level subsides, there is a rebound effect: the person becomes more alert and so tends to wake up too soon. Also, if they continue to sleep, REM sleep is promoted, and this may cause vivid nightmares which can reduce the quality of the sleep.
Warm milk
A cup of warm milk or a milk-based drink is traditionally used for sleep induction. Hot chocolate is also a traditional bedtime drink, but it contains high levels of xanthines (caffeine and theobromine), which are stimulants and may therefore be counterproductive. A pinch of turmeric powder added to warm milk is also traditionally said to reduce stress and induce sleep. The flavor of the milk can be improved by adding honey and/or vanilla.
See also
Caffeine-induced sleep disorder
Hypnotic induction
Postprandial dip
Postprandial somnolence
References
Sleep | Sleep induction | Biology | 963 |
38,616,928 | https://en.wikipedia.org/wiki/Kellogg%27s%20theorem | Kellogg's theorem is a pair of related results in the mathematical study of the regularity of harmonic functions on sufficiently smooth domains by Oliver Dimon Kellogg.
In the first version, it states that, for k ≥ 1, if the domain's boundary is of class C^k and the k-th derivatives of the boundary are Dini continuous, then the harmonic functions are uniformly C^k as well. The second, more common version of the theorem states that for domains which are C^{k,α}, if the boundary data is of class C^{k,α}, then so is the harmonic function itself.
Kellogg's method of proof analyzes the representation of harmonic functions provided by the Poisson kernel, applied to an interior tangent sphere.
In modern presentations, Kellogg's theorem is usually covered as a specific case of the boundary Schauder estimates for elliptic partial differential equations.
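For reference, the second version of the theorem can be written as a Schauder-type boundary statement. The formulation below is a standard modern rendering of this kind of result, not a quotation of Kellogg's original wording.

```latex
% A standard C^{k,alpha} formulation (Schauder type), k >= 1, 0 < alpha < 1.
\noindent\textbf{Theorem (Kellogg).}
Let $\Omega \subset \mathbb{R}^n$ be a bounded domain whose boundary is of
class $C^{k,\alpha}$, and let $u$ be harmonic in $\Omega$:
\[
  \Delta u = 0 \quad \text{in } \Omega, \qquad u = g \quad \text{on } \partial\Omega .
\]
If $g \in C^{k,\alpha}(\partial\Omega)$, then $u \in C^{k,\alpha}(\overline{\Omega})$.
```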
See also
Schauder estimates
Sources
Harmonic functions
Potential theory | Kellogg's theorem | Mathematics | 174 |
25,084,036 | https://en.wikipedia.org/wiki/USB%20image | A USB image is a bootable image of an operating system (OS) or other software in which the boot loader is located on a USB flash drive or another USB storage device instead of a conventional CD or DVD disc. The operating system either loads from the USB device, much like a live CD that runs the OS or other software directly from the storage, or installs the OS itself; when run live, the software runs off the USB device the whole time. A USB image is easier to carry and can be stored more safely than a conventional CD or DVD. Drawbacks are that some older devices may not support USB booting and that the lifespan of the USB storage device might be shortened.
Ubuntu has included a utility for installing an operating system image file to a USB flash drive since version 9.10. Microsoft's Windows support documentation likewise provides step-by-step instructions for setting up a USB device as a bootable drive.
Software
Both graphical applications and command line utilities are available for authoring bootable operating system images. dd is a utility commonly found in Unix operating systems that allows the creation of bootable images.
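To make the dd workflow concrete, the sketch below is a minimal Python analogue of a typical invocation such as `dd if=image.iso of=/dev/sdX bs=4M`: it streams an image file onto a target device in fixed-size blocks. The paths are placeholders, and writing to the wrong device path will destroy its contents.

```python
import sys

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks, analogous to dd's bs=4M

def write_image(image_path: str, device_path: str) -> None:
    """Copy a bootable image onto a block device, block by block.
    WARNING: this overwrites device_path; double-check the target."""
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            dst.write(block)
        dst.flush()

if __name__ == "__main__":
    # Placeholder usage: python write_image.py ubuntu.iso /dev/sdX
    write_image(sys.argv[1], sys.argv[2])
```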
Benefits and limitations
Benefits
In contrast to live CDs, a USB image is easier to transport and to store (e.g. in a pocket, attached to a key chain, carried in a bag, or locked away in a safe) than a CD, which can be damaged and corrupted more easily.
Also, after OS installation the USB device can be removed, and the operating system will run without the USB stick inserted into the computer, allowing installation on multiple devices with a single USB drive (Windows 8.1 and newer versions of Microsoft Windows fully support installation from a USB image).
The absence of moving parts in USB flash devices allows true random access, avoiding rotational latency and seek time, meaning small programs will start faster from a USB flash drive than from a local hard disk or live CD. However, as USB devices typically achieve lower data transfer rates than internal hard drives, booting from older computers that lack USB 2.0 or newer can be very slow.
Limitations
Some older systems have limited support for USB, since their BIOSes were not designed with this purpose in mind. Other devices may not boot from USB if the BIOS is set to legacy mode.
Due to the additional write cycles that occur with a full installation, the lifespan of the USB drive may be shortened. To mitigate this, a USB hard drive can be used instead, as such drives give better performance than USB sticks regardless of the connector.
See also
UEFI
ImageUSB (Per Partition or mounted Drive)
Live USB
References
Booting | USB image | Technology | 546 |
2,811,312 | https://en.wikipedia.org/wiki/China%20Compulsory%20Certificate | The China Compulsory Certificate mark, commonly known as a CCC Mark, is a compulsory safety mark for many products imported, sold or used in the Chinese market. It was implemented on May 1, 2002, and became fully effective on August 1, 2003.
It is the result of the integration of China's two previous compulsory inspection systems, namely "CCIB" (Safety Mark, introduced in 1989 and required for products in 47 product categories) and "CCEE" (also known as "Great Wall" Mark, for electrical commodities in 7 product categories), into a single procedure.
Applicable products
The CCC mark is required for both Chinese-manufactured and foreign-imported products; the certification process involves the Guobiao standards.
The mandatory products include, among others:
Electrical wires and cables
Circuit switches, electric devices for protection or connection
Low-voltage Electrical Apparatus
Low power motors
Electric tools
Welding machines
Household and similar electrical appliances
Audio and video apparatus (not including the audio apparatus for broadcasting service and automobiles)
Information technology equipment
Lighting apparatus (not including the lighting apparatus with the voltage lower than 36V)
Motor vehicles and safety accessories
Motor vehicle Tires
Safety Glasses
Agricultural Machinery
Telecommunication Terminal Products
Fire Fighting Equipment
Safety Protection Products
Wireless LAN products
Decoration Materials
Toys
Implementation rules
Apart from the GB Standard, the implementation rules are the second important component that forms the basis of CCC certification. The implementation rules determine the process of CCC certification and list the products for which certification is mandatory. Because of frequent regulatory amendments, it is important to obtain the latest version of the implementation rules before starting the certification process.
In 2014, a comprehensive regulatory amendment of the implementation rules took place. The major changes were:
Amendments for Automotive Parts
Introduction of factory levels (A-D)
Components manufactured in-house for use in a company's own end products no longer require a CCC certificate
Administration
The CCC mark is administered by the CNCA (Certification and Accreditation Administration of the People's Republic of China). The China Quality Certification Center (CQC) is designated by CNCA to process CCC mark applications and defines the products that need CCC. The products are summed up in overall product categories.
Additionally, the following certification authorities are responsible for specific groups of products:
CCAP (China Certification Centre for Automotive Products) products in the automotive area
CSP (China Certification Center for Security and Protection) certifies security products, forensic technology and products for road safety
CSCG (China Safety Global Certification Centre) for safety glass
CEMC (China Certification Centre for Electromagnetic Compatibility) all electronic products
Follow-up certification
The CCC certificate and the Permission of Printing, which allows the manufacturer to mark the CCC-certified product with the CCC mark, must be renewed annually to keep the certificate valid. The renewal can only be done through a follow-up certification, part of which is a one-day factory audit.
IT security products
On April 27, 2009, China announced 13 categories of IT security products that would have to conform to the newly expanded authority of the CCC (China Compulsory Certificate), a requirement to be put into effect on May 1, 2009. In view of the security measures taken by China, there was a seemingly high likelihood that it would request the full disclosure of all source code running on any and all devices, imported or otherwise. The divulgence of such source code is of great concern to countries like the U.S., Japan, the EU, and South Korea; all four asked China to reverse this decision and objected to the implementation of the Chinese plan. The certification agents were soon limited to organizations and entities within China - a compromise of sorts. However, despite this restriction, there still arose concerns as to whether source code and trade secrets could be leaked to the private sector. In response to these enduring concerns, China altered the previously planned CCC policy programme. Instead of administering broad and stringent encroachments upon the relevant categories of imports (primarily computer technology), it decided to engage in an alternate regulatory action solely affecting government procurement projects, while simultaneously postponing the enactment of the policy programme to May 1, 2010. China also stated that the number of applicable CCC product categories is not to expand past the 13 already in place.
Protection of intellectual property in the CCC certification
Although the CCC certification's only purpose is to ensure compliance of products with the Chinese standards, many companies worry that infringements of their trademarks or patents may occur during CCC certification.
Companies are particularly concerned about the following steps:
Comprehensive product information is required for the application
Type testing of the products in an accredited test laboratory in China
Factory audits, in which Chinese inspectors examine the factory
For the best possible protection of a company's own products, see:
Protecting Intellectual Property Rights in China
See also
Common Criteria
National Development and Reform Commission
CE marking#"China Export", an urban myth in Europe about a mark that does not exist
References
External links
China Compulsory Certification
2002 introductions
Certification marks
Economy of China
Safety codes
Foreign trade of China | China Compulsory Certificate | Mathematics | 1,030 |
1,968,690 | https://en.wikipedia.org/wiki/Wide%20Angle%20Search%20for%20Planets | WASP or Wide Angle Search for Planets is an international consortium of several academic organisations performing an ultra-wide angle search for exoplanets using transit photometry. The array of robotic telescopes aims to survey the entire sky, simultaneously monitoring many thousands of stars at an apparent visual magnitude from about 7 to 13.
WASP is the detection program of a consortium composed of the Isaac Newton Group, the IAC and six universities from the United Kingdom. The two continuously operating, robotic observatories cover the Northern and Southern Hemispheres, respectively. SuperWASP-North is at the Roque de los Muchachos Observatory on the mountain of that name which dominates La Palma in the Canary Islands. WASP-South is at the South African Astronomical Observatory, Sutherland, in the arid Roggeveld Mountains of South Africa. Each observatory uses eight wide-angle cameras that simultaneously monitor the sky for planetary transit events, allowing millions of stars to be monitored at once and enabling the detection of rare transit events.
Instruments used for follow-up characterization, employing Doppler spectroscopy to determine an exoplanet's mass, include the HARPS spectrograph on ESO's 3.6-metre telescope as well as the Swiss Euler Telescope, both located at La Silla Observatory, Chile. WASP's design has also been adopted by the Next-Generation Transit Survey. As of 2016, the Extrasolar Planets Encyclopaedia database contained a total of 2,107 extrasolar planets, of which 118 were discoveries by WASP.
Equipment
WASP consists of two robotic observatories: SuperWASP-North at the Roque de los Muchachos Observatory on the island of La Palma in the Canaries, and WASP-South at the South African Astronomical Observatory, South Africa. Each observatory consists of an array of eight Canon 200 mm f/1.8 lenses backed by high-quality science-grade CCDs; the model used is the iKon-L manufactured by Andor Technology. The telescopes are mounted on an equatorial telescope mount built by Optical Mechanics, Inc. The large field of view of the Canon lenses gives each observatory sky coverage of 490 square degrees per pointing.
Function
The observatories continuously monitor the sky, taking a set of images approximately once per minute and gathering up to 100 gigabytes of data per night. The data collected by WASP are used to measure the brightness of each star in each image; using the transit method, the resulting light curves can be searched for the small dips in brightness caused by large planets passing in front of their parent stars.
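As a toy illustration of the transit method, the sketch below injects a one-percent periodic dip into a synthetic light curve and flags the points that fall significantly below the median flux. The period, depth, noise level and detection threshold are all assumptions chosen for clarity; the real WASP pipeline is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light curve: flux normalised to 1, with noise and a 1% transit.
time = np.arange(0.0, 10.0, 0.01)                 # days
flux = 1.0 + rng.normal(0.0, 0.002, time.size)    # photometric noise
in_transit = (time % 2.5) < 0.1                   # 2.5-day period, 0.1-day transit
flux[in_transit] -= 0.01                          # 1% deep, hot-Jupiter-like dip

# Simple detection: flag points well below the median flux level.
median = np.median(flux)
scatter = np.std(flux)
candidates = flux < median - 3.0 * scatter

print(f"{candidates.sum()} points flagged as possible in-transit data")
```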
One of the main purposes of WASP was to revolutionize the understanding of planet formation, paving the way for future space missions searching for 'Earth'-like worlds.
Structure
WASP is operated by a consortium of academic institutions which include:
Instituto de Astrofisica de Canarias
Isaac Newton Group of Telescopes
Keele University
Open University
Queen's University Belfast
St. Andrews University
University of Leicester
Warwick University.
On 26 September 2006, the team reported the discovery of two extrasolar planets: WASP-1b (orbiting at 0.038 AU (6 million km) from star once every 2.5 days) and WASP-2b (orbiting three-quarters that radius once every 2 days).
On 31 October 2007, the team reported the discovery of three extrasolar planets: WASP-3b, WASP-4b and WASP-5b. All three planets are similar to Jupiter in mass and are so close to their respective stars that their orbital periods are all less than two days, among the shortest orbital periods discovered. The surface temperatures of the planets should be more than 2000 degrees Celsius, owing to their short distances from their respective stars. WASP-4b and WASP-5b are the first planets discovered by the cameras and researchers in South Africa; WASP-3b is the third planet discovered by the equivalent equipment at La Palma.
In August 2009, the discovery of WASP-17b was announced, believed to be the first planet ever discovered to orbit in the opposite direction to the spin of its star, WASP-17.
Discoveries and follow-up observations
WASP-9b was determined to be a false positive after its initial public announcement as a planet, and the identifier was not subsequently reassigned to a real planetary system.
See also
Lists of exoplanets
Roque de los Muchachos Observatory
South African Astronomical Observatory
Science and Technology Facilities Council
V1400 Centauri (SWASP J1407b)
Other extrasolar planet search projects
Trans-Atlantic Exoplanet Survey or TrES
XO Telescope or XO
Kilodegree Extremely Little Telescope or KELT
Next-Generation Transit Survey or NGTS
HATNet Project or HAT
Extrasolar planet searching spacecraft
References
External links
WASP Planets
WASP primary website
WASP-South live status
Public archive at the NASA Exoplanet Archive
The Extrasolar Planets Encyclopaedia
News items
QUB in April 2008
Reaching for the stars in October 2007
BBC News report: Planets have scientists buzzing in September 2006
Video clips
Keele University
Exoplanet search projects by small telescope
Astronomical surveys
Robotic telescopes
Astronomical observatories in South Africa
Astronomical observatories in La Palma
Astronomy organizations | Wide Angle Search for Planets | Astronomy | 1,046 |
31,928,335 | https://en.wikipedia.org/wiki/Crocco%27s%20theorem | Crocco's theorem is an aerodynamic theorem relating the flow velocity, vorticity, and stagnation pressure (or entropy) of a potential flow. Crocco's theorem gives the relation between the thermodynamics and fluid kinematics. The theorem was first enunciated by Alexander Friedmann for the particular case of a perfect gas and published in 1922.
However, the theorem is usually connected with the name of the Italian scientist Luigi Crocco, a son of Gaetano Crocco.
Consider an element of fluid in the flow field subjected to translational and rotational motion: because stagnation pressure loss and entropy generation can be viewed as essentially the same thing, there are three popular forms for writing Crocco's theorem:
Stagnation pressure: $\vec{v} \times \vec{\omega} = \upsilon\, \nabla p_0$
Entropy (the following form holds for plane steady flows): $T\, \frac{\partial s}{\partial n} = \frac{\partial h_0}{\partial n} + v\omega$
Momentum: $\nabla h_0 = T\, \nabla s + \vec{v} \times \vec{\omega} + \vec{f}$
In the above equations, $\vec{v}$ is the flow velocity vector, $\vec{\omega}$ is the vorticity, $\upsilon$ is the specific volume, $p_0$ is the stagnation pressure, $T$ is temperature, $s$ is specific entropy, $h_0$ is specific stagnation enthalpy, $\vec{f}$ is specific body force, and $n$ is the direction normal to the streamlines. All quantities considered (entropy, enthalpy, and body force) are specific, in the sense of "per unit mass".
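A compact way to see where these forms come from is to combine the steady Euler equation with the Gibbs relation. The derivation below is a standard textbook sketch under the stated assumptions (steady, inviscid flow), not Crocco's original presentation.

```latex
% Steady inviscid momentum equation with specific body force f:
%   (v . grad) v = -upsilon grad p + f
% Rewrite the convective term with a vector identity, and use the
% Gibbs relation to express the pressure gradient:
\[
  (\vec v \cdot \nabla)\vec v
    = \nabla\!\left(\tfrac{1}{2}v^{2}\right) - \vec v \times \vec\omega ,
  \qquad
  \nabla h = T\,\nabla s + \upsilon\,\nabla p .
\]
% Substituting and collecting h_0 = h + v^2/2 gives Crocco's equation:
\[
  \nabla h_0 = T\,\nabla s + \vec v \times \vec\omega + \vec f .
\]
```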
References
Fluid dynamics
Aerodynamics | Crocco's theorem | Chemistry,Engineering | 266 |
19,920,940 | https://en.wikipedia.org/wiki/RW%20Cephei | RW Cephei is a K-type hypergiant and a semiregular variable star in the constellation Cepheus, at the edge of the Sharpless 132 H II region and close to the small open cluster Berkeley 94. It is among the largest stars known, with a radius around 1,100 times that of the Sun, nearly as large as the orbit of Jupiter.
In 2022, the star underwent a "great dimming" event similar to that of Betelgeuse.
Its temperature, intermediate between those of the red supergiants and the yellow hypergiants, has led to it being variously considered a red hypergiant or a yellow hypergiant.
Observational history
The first documented sighting of RW Cephei dates back to 1746, when it was included in a star catalog compiled by James Bradley. It has been described as a red star since at least the 1840s, when Friedrich Wilhelm Argelander noted it as "very red" in his catalog. RW Cephei was independently discovered to be variable by Thomas William Backhouse and Henrietta Swan Leavitt in 1899 and 1907 respectively, but had been suspected of variability by Angelo Secchi since at least 1868. The star was designated RW in 1908, being the fifteenth variable discovered in Cepheus. Analysis of spectra in 1942 revealed RW Cephei to be a highly luminous hypergiant star, appearing more luminous than Mu Cephei. More detailed spectral studies in 1956 and 1972 revealed unique spectral features, setting it apart from the other known hypergiants. Since then, the star has been studied infrequently over the decades. In late 2022, RW Cephei was announced to be undergoing a great dimming event, and it was subsequently observed by the CHARA interferometric array in December.
Distance
The distance to RW Cephei has been estimated on the basis of its spectroscopic luminosity and its assumed membership of the Cepheus OB1 association, placing it within the Perseus Arm of the Milky Way. The Gaia Data Release 2 and Gaia Early Data Release 3 parallaxes lead to somewhat different distance estimates, and distances have also been derived for Cepheus OB1 and for the open cluster Berkeley 94, of which RW Cephei may be a member. The star and cluster are part of the larger star-forming region Sh 2-132.
Variability
The magnitude range of RW Cephei was given as 8.2–8.8 using photographic plates in the initial report, while later studies found the photographic range to be from 8.6–10.7, noting that maxima and minima cannot be derived with any certainty. Other authors estimate an amplitude of only around 0.5 magnitudes. Modern estimates put the range of variability from 6.0 to 7.6 in the V-band.
RW Cephei has been classified as a semi-regular variable star of type SRd, meaning that it is a slowly varying yellow giant or supergiant. The General Catalogue of Variable Stars cites a 1952 study giving a period of approximately 346 days, while other studies suggest different periods and certainly no strong periodicity.
Great dimming
In December 2022, the star was reported by two astronomers to be going through a "great dimming", reaching a fainter-than-usual magnitude of 7.6. The dimming was speculated to be caused by short periods of enhanced mass loss leading to the condensation of dust that partially obscures the stellar photosphere. This was later confirmed by observations with the CHARA array, revealing a dark patch on the western side of the star, suggested to be a dust cloud released in a recent surface mass ejection. An unusually bright maximum attained in 2019, right before the dimming, was suspected to be caused by an energetic convective upwelling of hot gas that was later expelled and cooled into a dust cloud obscuring the star. The event has been compared to the great dimming of Betelgeuse that happened in late 2019 and the dimming events seen in the historical light curve of VY Canis Majoris.
Spectra taken by an amateur astronomer show the appearance of several new emission lines during the dimming, most notably H-α and the K I lines at 766.5 and 769.9 nm. The H-α line is blueshifted by ~40 km/s relative to the star, suggesting the source of the emission is expanding outwards.
Previous observations using photographic plates taken between 1948 and 1951 reveal a similar dimming from magnitude 9.16 down to 9.5, followed by a rapid re-brightening to magnitude 8.9.
Spectrum
RW Cephei displays many complex lines in its spectrum, many of which are stronger and more broad than usual. An initial study in 1956 focusing on the blue spectral region found many metal absorption lines with two components separated by a central maximum, attributed to emission superposed on an absorption line widened due to turbulence. The shortward absorption components were found to be significantly stronger than the longward components, caused by an outward moving shell of gas. A follow-up study in 1972 focusing on redder spectral regions found unusually strong Na D lines too intense to be caused by the interstellar medium. The Fe I line was found to be 30% stronger than in normal K-type supergiants, while the Ti I and V I lines were of the same strength or weaker. With these peculiar spectral features, the star finds no counterpart among the known hypergiants, with only Rho Cassiopeiae displaying remotely similar features.
The spectrum has been classified as early as G8 and as late as M2, but it isn't clear that there has been actual variation. In the first MK spectral atlas, it was listed M0:Ia. RW Cephei was later listed as the standard star for spectral type G8 Ia, then as the standard for K0 0-Ia. Based on the same spectra it was adjusted to the standard star for type K2 0-Ia. Molecular bands characteristic of M-class stars are seen in infrared spectra, but not always in optical spectra.
Physical properties
The temperature of RW Cephei is uncertain, with contradictory excitation strengths in its spectrum. A simple color correlation temperature fit gives temperatures around 3,749 K, while a full spectrum fit gives a temperature of 5,018 K. Another fit, using J-band spectral data and MARCS stellar models, gives an intermediate temperature and a metallicity slightly above solar, indicating the star is somewhat metal-rich relative to the Sun. A newer study finds a temperature of 4,400 K, consistent with its spectral type. Based on the CO line strength at 2.29 μm, RW Cephei is indicated to have dropped in temperature from 4,200 K to 3,900 K during the dimming.
Luminosities have been derived on the basis of membership of Cepheus OB1, with studies finding exceptionally high values. A more recent study finds a somewhat lower luminosity using the spectral energy distribution of a DUSTY model fit.
Imaging of RW Cephei by the CHARA array reveals the star to be box-like in shape. Images obtained using the SURFING algorithm yield a limb-darkened angular diameter of 2.45 mas, corresponding to a linear radius that depends on the adopted distance. In 2024, the star's size was shown to have increased by 8% since its 2022 dimming. The angular diameter, combined with an average distance to Berkeley 94, gives the radius of roughly 1,100 times that of the Sun quoted above.
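The conversion from angular diameter to linear radius is direct, since an angular size of 1 arcsecond at 1 parsec subtends 1 astronomical unit. The sketch below applies this to the 2.45 mas measurement using a hypothetical distance of 3,400 pc; as noted above, the adopted distance dominates the result.

```python
AU_PER_MAS_PC = 1.0e-3   # 1 mas at 1 pc subtends 0.001 au
RSUN_PER_AU = 215.0      # approximate solar radii per astronomical unit

def linear_radius_rsun(angular_diameter_mas: float, distance_pc: float) -> float:
    """Linear radius in solar radii from angular diameter and distance."""
    diameter_au = angular_diameter_mas * AU_PER_MAS_PC * distance_pc
    return 0.5 * diameter_au * RSUN_PER_AU

# 2.45 mas at an assumed 3,400 pc gives roughly 900 solar radii;
# larger adopted distances push the radius toward the quoted ~1,100.
print(linear_radius_rsun(2.45, 3400.0))
```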
Surroundings
The star shows evidence for a significant amount of circumstellar material in its spectrum. The IRAS low resolution spectrum shows signatures of optically thick silicate emission at 10 and 18 μm, an indication for high amounts of mass loss. Emission in the first-overtone SiO bands was suspected in 1982, and later confirmed using higher resolution spectra showing clear signs of emission at 4.0, 4.04 and 4.08 μm. Direct imaging in mid-infrared bands reveals the source to be extended, having an azimuthally symmetric structure similar to IRC +10420. The radius of this emission has been estimated to be ~0.3–0.4 arcseconds at 11.9 μm, corresponding to a physical radius of ~1,000–1,400 au at a distance of 3.4 kpc.
Mass loss
The current mass-loss rate of RW Cephei has been determined using a DUSTY model fit; a previous study derived an estimate from silicate line strengths, adopting a distance of 2.8 kpc. Analysis of the surrounding mid-infrared emission indicates that RW Cephei ended a period of enhanced mass loss roughly 95–140 years ago, suggesting that it has left the red supergiant phase and is currently evolving towards hotter temperatures. The current mass-loss phase appears to be dominated by several discrete mass ejections, including the observed "great dimming".
See also
Betelgeuse and VY Canis Majoris, similar cool massive stars that have undergone one or more dimming events
HR 5171, a similar star
WOH G64
UY Scuti
Westerlund 1 W26
Notes
References
K-type hypergiants
Cephei, RW
Cepheus (constellation)
212466
BD+55 2737
110504
Semiregular variable stars
G-type hypergiants
M-type hypergiants | RW Cephei | Astronomy | 1,978 |
15,072,571 | https://en.wikipedia.org/wiki/MRPS7 | 28S ribosomal protein S7, mitochondrial is a protein that in humans is encoded by the MRPS7 gene.
Mammalian mitochondrial ribosomal proteins are encoded by nuclear genes and help in protein synthesis within the mitochondrion. Mitochondrial ribosomes (mitoribosomes) consist of a small 28S subunit and a large 39S subunit. They have an estimated 75% protein to rRNA composition compared to prokaryotic ribosomes, where this ratio is reversed. Another difference between mammalian mitoribosomes and prokaryotic ribosomes is that the latter contain a 5S rRNA. Among different species, the proteins comprising the mitoribosome differ greatly in sequence, and sometimes in biochemical properties, which prevents easy recognition by sequence homology. This gene encodes a 28S subunit protein. In the prokaryotic ribosome, the comparable protein is thought to play an essential role in organizing the 3' domain of the 16S rRNA in the vicinity of the P- and A-sites. Pseudogenes corresponding to this gene are found on chromosomes 8p and 12p.
References
Further reading
Ribosomal proteins | MRPS7 | Chemistry | 238 |
6,327,216 | https://en.wikipedia.org/wiki/Pocosin | Pocosin is a type of palustrine wetland with deep, acidic, sandy, peat soils. Groundwater saturates the soil except during brief seasonal dry spells and during prolonged droughts. Pocosin soils are nutrient-deficient (oligotrophic), especially in phosphorus.
Pocosins occur in the southern portions of the Atlantic coastal plain of North America, spanning from southeastern Virginia, through North Carolina, and into South Carolina. The majority of pocosins are found in North Carolina. The Alligator River National Wildlife Refuge was created in 1984 to help preserve pocosin wetlands. The nearby Cedar Island National Wildlife Refuge also protects pocosin habitat.
Characteristics
Pocosins occupy poorly drained higher ground between streams and floodplains. Seeps cause the inundation. There are often perched water tables underlying pocosins.
Shrub vegetation is common in a pocosin ecosystem. Pocosins are sometimes called shrub bogs. Pond pines (Pinus serotina) dominate pocosin forests, but loblolly pine (Pinus taeda) and longleaf pine (Pinus palustris) are also associated with pocosins. Additionally, pocosins are home to rare and threatened plant species including Venus flytrap (Dionaea muscipula) and sweet pitcher plant (Sarracenia rubra).
A distinction is sometimes made between short pocosins, which have shorter trees (less than ), deeper peat, and fewer soil nutrients, and tall pocosins, which have taller trees (greater than ), shallow peat, and more nutrient-rich soil. Where soil saturation is less frequent and peat depths shallower, pocosins transition into pine flatwoods. A loose definition of "pocosin" can include all shrub and forest bogs, as well as stands of Atlantic white cedar (Chamaecyparis thyoides) and loblolly pine on the Atlantic coastal plain.
Pocosins are formed by the accumulation of organic matter, resembling black muck, that is built up over thousands of years. This accumulation of material causes the area to be highly acidic and nutrient-deficient. The thickness of the organic buildup varies depending on one's location within the pocosin. Near the edges the buildup can be several inches thick but toward the center it can be up to several feet thick. Vegetation on the pocosin varies throughout. At the edges more pond pine is found with an abundance of titi, zenobia (a shrub unique to pocosins), and greenbrier vines. Closer to the center, thin stunted trees are typically found and fewer shrubs and vines are present.
Pocosins are important to migratory birds due to the abundance of various types of berries.
Pocosin ecosystems are fire-adapted (pyrophytic). Pond pines exhibit serotiny, such that wildfire can create a pond pine seedbed in the soil. Wildfires in pocosins tend to be intense, sometimes burning deep into the peat, resulting in small lakes and ponds.
Wildfires occurring about once a decade tend to cause pond pines to dominate over other trees, and cane (Arundinaria) rather than shrubs to dominate the understory. More frequent fires result in a pyrophytic shrub understory. Annual fires prevent shrub growth and thin the pond pine forest cover, creating a flooded savanna with grass, sedge, and herb groundcover.
Etymology
The word pocosin has Eastern Algonquian roots. Sources have long attested that the term translates into English as "swamp-on-a-hill," but evidence for this precise translation is lacking. The city of Poquoson, Virginia, located in the coastal plain of Virginia (see Tidewater region of Virginia) derives its name from this geographic feature.
References
External links
Detailed Ecological Description of Basin Pocosin Communities
Ecology
Ecoregions of the United States
Wetlands of North Carolina
Wetlands of Virginia
Wetlands of South Carolina
Pocosins | Pocosin | Biology | 820 |
62,068,304 | https://en.wikipedia.org/wiki/Kathryn%20Beers | Kathryn L. Beers is an American polymer chemist. Beers is Leader of the Polymers and Complex Fluids group in the Materials Science and Engineering Division at the National Institute of Standards and Technology. Her research interests include microreactors and microfluidics, advances in polymer synthesis and reaction monitoring, macromolecular separations, integrated and high throughput measurements of polymeric materials, degradable and renewable polymeric materials, and sustainable materials.
Early life and education
Beers is a native of the Washington metropolitan area. She completed a B.S. in chemistry at the College of William & Mary in 1994. Her Honors College undergraduate thesis was titled The effects of deuteration on ferromagnetic properties: a study of single crystal Fe[S2CN(C2D5)2]2Cl. In 1996, she earned an M.S. in polymer science at Carnegie Mellon University. She completed a Ph.D. in chemistry at Carnegie Mellon in 2000, working with professor Krzysztof Matyjaszewski. Her dissertation was titled Design, synthesis and properties of comb copolymers with variable grafting density by controlled radical polymerization. From 2000 to 2002, she was a National Research Council postdoctoral fellow in the Polymers Division at the National Institute of Standards and Technology (NIST).
Career
From 2002 to 2007, Beers was a research chemist and project leader for polymer formulations at the NIST Combinatorial Methods Center (NCMC) in the Polymers Division at NIST. Beers was the Assistant Director for Physical Sciences and Engineering in the Office of Science and Technology Policy (OSTP) from 2007 to 2008. While at OSTP, Beers oversaw a portfolio including the Office of Science, Science Mission Directorate, and portions of the National Science Foundation. She worked to coordinate inter-agency and international cooperation and interaction with the physical science field. She was the director of the NCMC from 2008 to 2009. From 2008 to 2012, she was Project Leader of the Renewable Polymers project and Group Leader of the Sustainable Polymers Group. Since 2013, Beers has served as Group Leader of the Polymers and Complex Fluids group in the Materials Science and Engineering Division at NIST.
Beers became a member of the American Chemical Society (ACS) in 1993. She has served as secretary of the ACS Division of Polymer Chemistry and served in the POLY Chair series from 2012 to 2017. She became a member of the Materials Research Society in 2001 and Sigma Xi in 2004. She is also a member of the American Institute of Chemical Engineers.
Research
Beers researches microreactors and microfluidics, advances in polymer synthesis and reaction monitoring, macromolecular separations, integrated and high throughput measurements of polymeric materials, degradable and renewable polymeric materials, and sustainable materials.
Awards and honors
In 2005, Beers was awarded the Department of Commerce Silver Medal. She was a 2006 Department of Commerce Science and Technology Policy (ComSci) fellow. In 2007, she received the Presidential Early Career Award for Scientists and Engineers.
References
External links
Living people
21st-century American chemists
American women chemists
20th-century American chemists
National Institute of Standards and Technology people
Women materials scientists and engineers
American materials scientists
College of William & Mary alumni
Carnegie Mellon University alumni
Office of Science and Technology Policy officials
Polymer scientists and engineers
Year of birth missing (living people)
21st-century American women scientists | Kathryn Beers | Materials_science,Technology | 696 |
953,903 | https://en.wikipedia.org/wiki/Chainik | Chainik (East Slavic: чайник, "teakettle", "teapot") is a term that implies both ignorance and a certain amount of willingness to learn (as well as a propensity to cause disaster), but does not necessarily imply as little experience or short exposure time as newbie and is not as derogatory as luser. Both a novice user and someone using a computer system for a long time without any understanding of the internals can be referred to as chainiks.
It is a widespread term in Russian hackish, often used in an English context by Russian-speaking hackers especially in Israel (e.g. "Our new colleague is a complete chainik"). FidoNet discussion groups often had a "chainik" subsection for newbies and old chainiks (e.g. SU.CHAINIK, RU.LINUX.CHAINIK). Public projects often have a chainik mailing list to keep the chainiks out of the developers' and experienced users' discussions. Today, the word is slowly slipping into mainstream Russian due to the Russian translation of the popular For Dummies series, which uses "chainik" for "dummy".
The term can also apply to novice mountaineers, backpackers, drivers, etc., with such usage predating its use in the computing context.
Some suggest the term is derived from a Russian folk custom to make a gift of a hollow thing – e.g., a pitted pumpkin, a kettle, or a teapot – to unsuccessful matchmakers of an aspiring groom rejected by a bride. The unlucky groom was mockingly called chainik. Over time the term entered other usages for unlucky, inept, or newbie people.
References
Beginners and newcomers
Russian words and phrases
Computer jargon | Chainik | Technology | 375 |
65,220,528 | https://en.wikipedia.org/wiki/Transcription-translation%20coupling | Transcription-translation coupling is a mechanism of gene expression regulation in which synthesis of an mRNA (transcription) is affected by its concurrent decoding (translation). In prokaryotes, mRNAs are translated while they are transcribed. This allows communication between RNA polymerase, the multisubunit enzyme that catalyzes transcription, and the ribosome, which catalyzes translation. Coupling involves both direct physical interactions between RNA polymerase and the ribosome ("expressome" complexes), as well as ribosome-induced changes to the structure and accessibility of the intervening mRNA that affect transcription ("attenuation" and "polarity").
Significance
Bacteria depend on transcription-translation coupling for genome integrity, termination of transcription and control of mRNA stability. Consequently, artificial disruption of transcription-translation coupling impairs the fitness of bacteria. Without coupling, genome integrity is compromised as stalled transcription complexes interfere with DNA replication and induce DNA breaks. Lack of coupling produces premature transcription termination, likely due to increased binding of termination factor Rho. Degradation of prokaryotic mRNAs is accelerated by loss of coupled translation due to increased availability of target sites of RNase E. It has also been suggested that coupling of transcription with translation is an important mechanism of preventing formation of deleterious R-loops. While transcription-translation coupling is likely prevalent across prokaryotic organisms, not all species are dependent on it. Unlike Escherichia coli, in Bacillus subtilis transcription significantly outpaces translation, and coupling consequently does not occur.
Mechanisms
Translation promotes transcription elongation and regulates transcription termination. Functional coupling between transcription and translation is caused by direct physical interactions between the ribosome and RNA polymerase ("expressome complex"), ribosome-dependent changes to nascent mRNA secondary structure which affect RNA polymerase activity (e.g. "attenuation"), and ribosome-dependent changes to nascent mRNA availability to transcription termination factor Rho ("polarity").
Expressome complex
The expressome is a supramolecular complex consisting of RNA polymerase and a trailing ribosome linked by a shared mRNA transcript. It is supported by the transcription factors NusG and NusA, which interact with both RNA polymerase and the ribosome to couple the complexes together. When coupled by transcription factor NusG, the ribosome binds newly synthesized mRNA and prevents formation of secondary structures that inhibit transcription. Formation of an expressome complex also aids transcription elongation by the trailing ribosome opposing back-tracking of RNA polymerase. Three-dimensional models of ribosome-RNA polymerase expressome complexes have been determined by cryo-electron microscopy.
Ribosome-mediated attenuation
Ribosome-mediated attenuation is a gene expression mechanism in which a transcriptional termination signal is regulated by translation. Attenuation occurs at the start of some prokaryotic operons at sequences called "attenuators", which have been identified in operons encoding amino acid biosynthesis enzymes, pyrimidine biosynthesis enzymes and antibiotic resistance factors. The attenuator functions via a set of mRNA sequence elements that couple the status of translation to a transcription termination signal:
A short open reading frame encoding a "leader peptide"
A transcription pause sequence
A "control region"
A transcription termination signal
Once the start of the leader open reading frame has been transcribed, RNA polymerase pauses due to folding of the nascent mRNA. This programmed arrest of transcription gives time for translation of the leader peptide to commence, and for transcription to resume once coupled to translation. The downstream "control region" then modulates the elongation rate of either the ribosome or RNA polymerase. The factor determining this depends on the function of the downstream genes (e.g. in the operon encoding enzymes involved in the synthesis of histidine, the control region contains a series of histidine codons). The role of the control region is to modulate whether transcription remains coupled to translation depending on the cellular state (e.g. a low availability of histidine slows translation leading to uncoupling, while high availability of histidine permits efficient translation and maintains coupling). Finally, the transcription terminator sequence is transcribed. Whether transcription is coupled to translation determines whether this stops transcription. The terminator requires folding of the mRNA, and by unwinding mRNA structures the ribosome selects the formation of either of two alternative structures: the terminator, or a competing fold termed the "antiterminator".
For amino acid biosynthesis operons, these allow the gene expression machinery to sense the abundance of the amino acid produced by the encoded enzymes, and adjust the level of downstream gene expression accordingly: transcription occurring only if the amino acid abundance is low and the demand for the enzymes is therefore high. Examples include the histidine (his) and tryptophan (trp) biosynthetic operons.
The term "attenuation" was introduced to describe the his operon. While it is typically used to describe biosynthesis operons of amino acids and other metabolites, programmed transcription termination that does not occur at the end of a gene was first identified in λ phage. The discovery of attenuation was significant as it represented a regulatory mechanism distinct from repression. The trp operon is regulated by both attenuation and repression, and was the first evidence that gene expression regulation mechanisms can be overlapping or redundant.
Polarity
"Polarity" is a gene expression mechanism in which transcription terminates prematurely due to a loss of coupling between transcription and translation. Transcription outpaces translation when the ribosome pauses or encounters a premature stop codon. This allows the transcription termination factor Rho to bind the mRNA and terminate mRNA synthesis. Consequently, genes that are downstream in the operon are not transcribed, and therefore not expressed. Polarity serves as mRNA quality control, allowing unused transcripts to be terminated prematurely, rather than synthesized and degraded.
The term "polarity" was introduced to describe the observation that the order of genes within an operon is important: a nonsense mutation within an upstream gene effects the transcription of downstream genes. Furthermore, the position of the nonsense mutation within the upstream gene modulates the "degree of polarity", with nonsense mutations at the start of the upstream genes exerting stronger polarity (more reduced transcription) on downstream genes.
Unlike the mechanism of attenuation, which involves intrinsic termination of transcription at well-defined programmed sites, polarity is Rho-dependent and termination occurs at variable positions.
Discovery
The potential for transcription and translation to regulate each other was recognized by the team of Marshall Nirenberg, who discovered that the processes are physically connected through the formation of a DNA-ribosome complex. As part of the efforts of Nirenberg's group to determine the genetic code that underlies protein synthesis, they pioneered the use of cell-free in vitro protein synthesis reactions. Analysis of these reactions revealed that protein synthesis is mRNA-dependent, and that the sequence of the mRNA strictly defines the sequence of the protein product. For this work in breaking the genetic code, Nirenberg was jointly awarded the Nobel Prize in Physiology or Medicine in 1968. Having established that transcription and translation are linked biochemically (translation depends on the product of transcription), an outstanding question remained whether they were linked physically: whether the newly synthesized mRNA is released from the DNA before it is translated, or whether translation can occur concurrently with transcription. Electron micrographs of stained cell-free protein synthesis reactions revealed branched assemblies in which strings of ribosomes are linked to a central DNA fibre. DNA isolated from bacterial cells was found to co-sediment with ribosomes, further supporting the conclusion that transcription and translation occur together. Direct contact between ribosomes and RNA polymerase is observable within these early micrographs. The potential for simultaneous regulation of transcription and translation at this junction was noted in Nirenberg's work as early as 1964.
References
Gene expression
RNA | Transcription-translation coupling | Chemistry,Biology | 1,656 |
2,182,059 | https://en.wikipedia.org/wiki/Z%C3%B6llner%20illusion | The Zöllner illusion is an optical illusion named after its discoverer, German astrophysicist Johann Karl Friedrich Zöllner. In 1860, Zöllner sent his discovery in a letter to physicist and scholar Johann Christian Poggendorff, editor of Annalen der Physik und Chemie, who subsequently discovered the related Poggendorff illusion in Zöllner's original drawing.
One depiction of the illusion consists of a series of parallel, black diagonal lines which are crossed with short, repeating lines, the direction of the crossing lines alternating between horizontal and vertical. This creates the illusion that the black lines are not parallel. The shorter lines are on an angle to the longer lines, and this angle helps to create the impression that one end of the longer lines is nearer to the viewer than the other end. This is similar to the way the Wundt illusion appears. It may be that the Zöllner illusion is caused by this impression of depth.
This illusion is similar to the Hering illusion, Poggendorff illusion, Müller-Lyer illusion, and Café wall illusion. All these illusions demonstrate how lines can seem to be distorted by their background.
References
External links
A demonstration of the Zöllner illusion that allows for adjusting the angle of the shorter lines
Optical illusions | Zöllner illusion | Physics | 265 |
24,472,842 | https://en.wikipedia.org/wiki/Mycena%20sanguinolenta | Mycena sanguinolenta, commonly known as the bleeding bonnet, the smaller bleeding Mycena, or the terrestrial bleeding Mycena, is a species of mushroom in the family Mycenaceae. It is a common and widely distributed species, and has been found in North America, Europe, Australia, and Asia. The fungus produces reddish-brown to reddish-purple fruit bodies with conic to bell-shaped caps up to wide held by slender stipes up to high. When fresh, the fruit bodies will "bleed" a dark reddish-purple sap. The similar Mycena haematopus is larger, and grows on decaying wood, usually in clumps. M. sanguinolenta contains alkaloid pigments that are unique to the species, may produce an antifungal compound, and is bioluminescent. The edibility of the mushroom has not been determined.
Taxonomy
First called Agaricus sanguinolentus by Johannes Baptista von Albertini, the species was transferred to the genus Mycena in 1871 by the German mycologist Paul Kummer, when he raised many of Fries' "tribes" to the rank of genus. The specific epithet is derived from the Latin word sanguinolentus and means "bloody". It is commonly known as the "bleeding bonnet", the "smaller bleeding Mycena", or the "terrestrial bleeding Mycena".
The fungus is classified in the section Lactipedes along with other latex-producing species. A molecular phylogenetic analysis of several dozen European Mycena species suggests that M. sanguinolenta is closely related to . Other phylogenetically related species include and .
Description
The cap of M. sanguinolenta is either convex or conic when young, with its margin pressed against the stipe. As it expands, it becomes broadly convex or bell-shaped, ultimately reaching a diameter of . The surface is initially covered with a dense whitish-grayish coating or powder that is produced by delicate microscopic cells, but these cells soon collapse and disappear, leaving the surface naked and smooth. The surface is moist, with an opaque margin that soon develops furrows. The cap color is variable but always some shade of bright or dull reddish brown with a dull grayish-brown margin. The flesh is thin, not very fragile, sordid reddish, and exudes a reddish latex when cut. The odor and taste are not distinctive.
The gills are adnate or slightly toothed, and well-spaced. They are narrow to moderately broad, sordid reddish to grayish, with even edges that are dark reddish brown. The stipe is long, 1–1.5 mm thick, equal in width throughout, and fragile. The base of the stipe is covered with coarse, stiff white hairs, while the remainder is covered with a drab powder that soon sloughs off to leave the stipe polished, and more or less the same color as the cap. It also exudes a bright or dull-red juice when cut or broken. The edibility of the mushroom is unknown—but it is considered too insubstantial to be of culinary interest.
The spores are 8–10 by 4–5 μm, roughly ellipsoid, and only weakly amyloid. The basidia (spore-bearing cells) are four-spored (occasionally two- or three-spored). The pleurocystidia (cystidia on the face of a gill) are rare to scattered or sometimes quite abundant, narrowly to broadly ventricose, measuring 36–54 by 8–13 μm. They are filled with a sordid-reddish substance. The cheilocystidia (cystidia on the gill edge) are similar to the pleurocystidia or shorter and more obese, and very abundant. The flesh of the gill is made of broad hyphae, the cells of which are often vesiculose (covered with vesicles) in age and stain pale reddish brown in iodine. The flesh of the cap is covered with a thin pellicle, and the hypoderm (the layer of cells immediately underneath the pellicle) is moderately well-differentiated. The remainder of the cap flesh is floccose and filamentous, and all except the pellicle stain pale vinaceous-brown in iodine. Lactiferous (latex-producing) hyphae are abundant.
Similar species
The other "bleeding Mycena" () is readily distinguished from M. sanguinolenta by its larger size, different color, growth on rotting wood, and presence of a sterile band of tissue on the margin of the cap. Further, M. sanguinolenta consistently has red-edged gills, while the gill edges of M. haematopus are more variable. The similarly named has red to orange juice, is slightly yellower, and does not have pleurocystidia. has a similar furrowed cap, but also has a tough stipe and does not ooze liquid when injured. Mycena specialist Alexander H. Smith has noted a "striking" resemblance to , but this species has different colors (pale vinaceous brown or sordid brown when faded), produces uncolored latex, and does not have differently-colored gill edges.
Distribution and habitat
Mycena sanguinolenta is common and widely distributed. It has been found from Maine to Washington and south to North Carolina and California in the United States, and from Nova Scotia to British Columbia in Canada. In Jamaica, it has been collected at an elevation of . The distribution includes Europe (Britain, Germany, The Netherlands, Norway, Romania and Sweden) and Australia. In Asia, it has been collected from the alpine zone of the Changbai Mountains in Jilin Province, China, and from the provinces of Ōmi and Yamashiro in Japan.
The fruit bodies grow in groups on leaf mold, moss beds, or needle carpets during the spring and fall. It is common in forests of fir and beech, and prefers to grow in soil of high acidity.
Chemistry
The fruit bodies of Mycena sanguinolenta contain the blue alkaloid pigments sanguinones A and B, which are unique to this species. It also has the red-colored alkaloid sanguinolentaquinone. The sanguinones are structurally related to mycenarubin A, made by M. rosea, and the discorhabins, a series of compounds produced by marine sponges. Although the function of the sanguinones is not known, it has been suggested that they may have "an ecological role ... beyond their contribution to the color of the fruiting bodies, ... since predators rarely feed on fruiting bodies". When grown in pure culture in the laboratory, the fungus produces the antifungal compound hydroxystrobilurin-D. M. sanguinolenta is one of over 30 Mycena species that are bioluminescent.
See also
List of bioluminescent fungi
References
Cited text
Bioluminescent fungi
sanguinolenta
Fungi described in 1805
Fungi of Asia
Fungi of Europe
Fungi of North America
Taxa named by Lewis David de Schweinitz
Taxa named by Johannes Baptista von Albertini
Fungus species | Mycena sanguinolenta | Biology | 1,506 |
11,821,770 | https://en.wikipedia.org/wiki/Signature-tagged%20mutagenesis | Signature-tagged mutagenesis (STM) is a genetic technique used to study gene function. Recent advances in genome sequencing have allowed researchers to catalogue a large variety of organisms' genomes, but the function of the genes they contain is still largely unknown. Using STM, the function of the product of a particular gene can be inferred by disabling it and observing the effect on the organism. The original and most common use of STM is to discover which genes in a pathogen are involved in virulence in its host, to aid the development of new medical therapies and drugs.
Basic premise
The gene in question is inactivated by insertional mutation; a transposon is used which inserts itself into the gene sequence. When that gene is transcribed and translated into a protein, the insertion of the transposon affects the protein structure and (in theory) prevents it from functioning. In STM, mutants are created by random transposon insertion and each transposon contains a different 'tag' sequence that uniquely identifies it. If an insertional mutant bacterium exhibits a phenotype of interest, such as susceptibility to an antibiotic it was previously resistant to, its genome can be sequenced and searched (using a computer) for any of the tags used in the experiment. When a tag is located, the gene that it disrupts is also thus located (it will reside somewhere between a start and stop codon which mark the boundaries of the gene).
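As a toy illustration of the computational search step just described, the Python sketch below scans a sequenced mutant genome for a set of known tag sequences and reports where each tag landed. The tag names, tag sequences, and genome string are hypothetical placeholders; a real analysis would work from sequencing reads with alignment tools rather than exact string matching.

```python
# Minimal sketch: locate signature tags in a mutant genome sequence.
# TAGS and the genome string are hypothetical placeholders.

TAGS = {
    "tag01": "ACGTACGTTAGC",
    "tag02": "TTGACCGGATCA",
}

def find_tag_insertions(genome: str, tags: dict[str, str]) -> dict[str, int]:
    """Return the 0-based position of each tag in the genome (-1 if absent)."""
    return {name: genome.find(seq) for name, seq in tags.items()}

# Toy "gene" (start codon ATG ... stop codon TAA) disrupted by tag02:
genome = "ATGAAACCC" + TAGS["tag02"] + "GGGTAA"
for name, pos in find_tag_insertions(genome, TAGS).items():
    if pos != -1:
        print(f"{name} found at position {pos}; the flanking sequence "
              "locates the disrupted gene")
```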
STM can be used to discover which genes are critical to a pathogen's virulence by injecting a 'pool' of different random mutants into an animal model (e.g. a mouse infection model) and observing which of the mutants survive and proliferate in the host. Those mutant pathogens that do not survive in the host must have an inactivated gene required for virulence. Hence, this is an example of a negative selection method.
References
Genetics
Mutagenesis | Signature-tagged mutagenesis | Biology | 408 |
222,390 | https://en.wikipedia.org/wiki/Table%20of%20prime%20factors | The tables contain the prime factorization of the natural numbers from 1 to 1000.
When n is a prime number, the prime factorization is just n itself, written in bold below.
The number 1 is called a unit. It has no prime factors and is neither prime nor composite.
Properties
Many properties of a natural number n can be seen or directly computed from the prime factorization of n.
The multiplicity of a prime factor p of n is the largest exponent m for which p^m divides n. The tables show the multiplicity for each prime factor. If no exponent is written then the multiplicity is 1 (since p = p^1). The multiplicity of a prime which does not divide n may be called 0 or may be considered undefined.
Ω(n), the prime omega function, is the number of prime factors of n counted with multiplicity (so it is the sum of all prime factor multiplicities).
A prime number has Ω(n) = 1. The first: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37. There are many special types of prime numbers.
A composite number has Ω(n) > 1. The first: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21. All numbers above 1 are either prime or composite. 1 is neither.
A semiprime has Ω(n) = 2 (so it is composite). The first: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34.
A k-almost prime (for a natural number k) has Ω(n) = k (so it is composite if k > 1).
An even number has the prime factor 2. The first: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24.
An odd number does not have the prime factor 2. The first: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23. All integers are either even or odd.
A square has even multiplicity for all prime factors (it is of the form a^2 for some a). The first: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144.
A cube has all multiplicities divisible by 3 (it is of the form a^3 for some a). The first: 1, 8, 27, 64, 125, 216, 343, 512, 729, 1000, 1331, 1728.
A perfect power has a common divisor m > 1 for all multiplicities (it is of the form a^m for some a > 1 and m > 1). The first: 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 81, 100. 1 is sometimes included.
A powerful number (also called squareful) has multiplicity above 1 for all prime factors. The first: 1, 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 72.
A prime power has only one prime factor. The first: 2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19. 1 is sometimes included.
An Achilles number is powerful but not a perfect power. The first: 72, 108, 200, 288, 392, 432, 500, 648, 675, 800, 864, 968.
A square-free integer has no prime factor with multiplicity above 1. The first: 1, 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17. A number where some but not all prime factors have multiplicity above 1 is neither square-free nor squareful.
The Liouville function λ(n) is 1 if Ω(n) is even, and is −1 if Ω(n) is odd.
The Möbius function μ(n) is 0 if n is not square-free. Otherwise μ(n) is 1 if Ω(n) is even, and is −1 if Ω(n) is odd.
A sphenic number has Ω(n) = 3 and is square-free (so it is the product of 3 distinct primes). The first: 30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154.
a0(n) is the sum of primes dividing n, counted with multiplicity. It is an additive function.
A Ruth-Aaron pair is two consecutive numbers (x, x+1) with a0(x) = a0(x+1). The first (by x value): 5, 8, 15, 77, 125, 714, 948, 1330, 1520, 1862, 2491, 3248. Another definition is where the same prime is only counted once; if so, the first (by x value): 5, 24, 49, 77, 104, 153, 369, 492, 714, 1682, 2107, 2299.
A primorial x# is the product of all primes from 2 to x. The first: 2, 6, 30, 210, 2310, 30030, 510510, 9699690, 223092870, 6469693230, 200560490130, 7420738134810. 1# = 1 is sometimes included.
A factorial x! is the product of all numbers from 1 to x. The first: 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, 39916800, 479001600. 0! = 1 is sometimes included.
A k-smooth number (for a natural number k) has its prime factors ≤ k (so it is also j-smooth for any j > k).
m is smoother than n if the largest prime factor of m is below the largest of n.
A regular number has no prime factor above 5 (so it is 5-smooth). The first: 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16.
A k-powersmooth number has all p^m ≤ k where p is a prime factor with multiplicity m.
A frugal number has more digits than the number of digits in its prime factorization (when written like the tables below with multiplicities above 1 as exponents). The first in decimal: 125, 128, 243, 256, 343, 512, 625, 729, 1024, 1029, 1215, 1250.
An equidigital number has the same number of digits as its prime factorization. The first in decimal: 1, 2, 3, 5, 7, 10, 11, 13, 14, 15, 16, 17.
An extravagant number has fewer digits than its prime factorization. The first in decimal: 4, 6, 8, 9, 12, 18, 20, 22, 24, 26, 28, 30.
An economical number has been defined as a frugal number, but also as a number that is either frugal or equidigital.
gcd(m, n) (greatest common divisor of m and n) is the product of all prime factors which are both in m and n (with the smallest multiplicity for m and n).
m and n are coprime (also called relatively prime) if gcd(m, n) = 1 (meaning they have no common prime factor).
lcm(m, n) (least common multiple of m and n) is the product of all prime factors of m or n (with the largest multiplicity for m or n).
gcd(m, n) × lcm(m, n) = m × n. Finding the prime factors is often harder than computing gcd and lcm using other algorithms which do not require known prime factorization.
m is a divisor of n (also called m divides n, or n is divisible by m) if all prime factors of m have at least the same multiplicity in n.
The divisors of n are all products of some or all prime factors of n (including the empty product 1 of no prime factors).
The number of divisors can be computed by increasing all multiplicities by 1 and then multiplying them (illustrated in the code sketch after this list).
Divisors and properties related to divisors are shown in table of divisors.
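All of the quantities defined above can be read off the prime factorization. The following minimal Python sketch (using simple trial division, adequate only for small n; the function names are our own) computes the multiplicity map, Ω(n), the Möbius function μ(n), and the number of divisors, and checks the gcd-lcm identity:

```python
from math import gcd

def factorize(n: int) -> dict[int, int]:
    """Return {prime: multiplicity} for n >= 1 (empty dict for n = 1)."""
    factors: dict[int, int] = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def big_omega(n: int) -> int:
    return sum(factorize(n).values())       # Ω(n): factors with multiplicity

def mobius(n: int) -> int:
    f = factorize(n)
    if any(m > 1 for m in f.values()):      # not square-free
        return 0
    return -1 if len(f) % 2 else 1

def num_divisors(n: int) -> int:
    result = 1
    for m in factorize(n).values():         # raise each multiplicity by 1...
        result *= m + 1                     # ...and multiply
    return result

n, m = 60, 18
assert factorize(60) == {2: 2, 3: 1, 5: 1}  # 60 = 2^2 * 3 * 5
assert big_omega(60) == 4 and num_divisors(60) == 12
assert mobius(30) == -1 and mobius(12) == 0
lcm = m * n // gcd(m, n)
assert gcd(m, n) * lcm == m * n             # gcd(m, n) * lcm(m, n) = m * n
```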
1 to 100
101 to 200
201 to 300
301 to 400
401 to 500
501 to 600
601 to 700
701 to 800
801 to 900
901 to 1000
See also
Prime numbers
Elementary number theory
Mathematics-related lists
Mathematical tables
Number-related lists | Table of prime factors | Mathematics | 1,883 |
17,956,189 | https://en.wikipedia.org/wiki/Foreign%20Reports | Foreign Reports Inc. is a Washington, D.C.–based consulting firm for the oil industry, founded in 1956. Foreign Reports advises energy companies, governments, and financial institutions on world energy issues, with a specialization on the Middle East. The president of the firm is Nathaniel Kern.
Overview
Foreign Reports has been in this business for more than 50 years and counts among its subscribers many of the world's largest oil companies—both international and national—as well as many other financial institutions. It reports on political developments that are highly relevant to oil markets, crude oil price formation, and related macroeconomic variables.
Methods
In providing political intelligence and analysis of world oil markets, Foreign Reports uses three tools:
Focus: Its Reports focus on political questions that impact oil markets. They are brief (rarely more than two pages), single-topic reports and are transmitted three to five times a week to its clients. They filter out the mass of extraneous intelligence which accumulates every day in today's world.
Facts: The Reports do more than focus on what is truly relevant; they are also the product of intensive efforts to go to difficult-to-access sources to find out what political decisions are being made and why they are being made. Sources are more comfortable and more open in talking with Foreign Reports because they know that their identities will be protected and that the information they provide will receive highly limited circulation.
Analysis: Frequently the focus and the facts are not enough to predict uncertain outcomes. Foreign Reports uses its expertise and experience in world oil markets to create a focused product that is relevant to these markets.
History
Foreign Reports was founded in 1956 by Harry Kern, who had previously been foreign editor of Newsweek, in which capacity he traveled extensively throughout the world, but especially in the Far and Middle East.
Newsweek and Time during that period were practically the sole elements of the U.S. news media reporting on world activities in a timely fashion. As foreign editor, Harry Kern also was editor-in-chief of the magazine's International Edition and thus had the privilege of picking who or what would adorn the cover of those editions. Since various foreign leaders, or aspiring ones, angled to get their pictures on the front of Newsweek, Kern was a popular visitor in many foreign capitals. In the process, he managed to befriend both current and future leaders and to gain insights into how their policies were developed.
Foreign Reports grew out of these unique circumstances, as Kern saw a need among growing multinational companies with sizable stakes around the world for a level of international political reporting that surpassed what was then being carried in the daily newspapers of the period. From Newsweek, he brought with him to Foreign Reports two bureau chiefs, one in Beirut and one in Tokyo. From these "bureaus" of Foreign Reports came a steady stream of insightful reporting on the regions they covered. Among its initial major subscribers were the world's major oil companies, but also other industrial and banking concerns.
Oil crises
In the year of its founding, Foreign Reports benefited from one of the first oil crises that have afflicted the Middle East over the years—the 1956 Suez Crisis, with its concomitant closure of the Suez Canal, which was a great boon to notable tanker owners of the time, who were avid clients of Foreign Reports.
Since that time, Foreign Reports has closely covered for its subscribers all the major and minor crises that have bedeviled world oil markets ever since, as well as the broad geopolitical trends that have affected markets and business conditions. The methods it uses to anticipate the unanticipated are relatively straightforward and avoid being unduly alarmist. They are methods that have been refined over time.
Nearly every crisis begins with a series of rumbles, and the rumbles have to be distinguished from mere bluster and bombast. Knowing who the players are, how they think, what they confide in others, their history of risk-taking and their own domestic political requirements is essential. As any potential crisis builds, often over a period of months, Foreign Reports writes up a contemporaneous narrative, covering the story as it develops, often focusing on key details which, only later, historians pick up on and piece together.
Foreign Reports and the Middle East
The Middle East, with its vast reserves of petroleum, was an obvious early focus of Foreign Reports, especially as the firm's subscribers had substantial equity interests in oil concessions in that volatile part of the world, where Kern remained a frequent visitor to many of the key players—the Shah of Iran, Gamal Abdul Nasser of revolutionary Egypt, Crown Prince Faisal of Saudi Arabia, etc. Kern also maintained close relationships with the leading foreign policy actors in the Eisenhower administration, notably Secretary of State John Foster Dulles and his brother, CIA Director Allen Dulles, forging a long relationship with U.S. intelligence, both in Washington and in the agency's foreign "stations".
Nathaniel Kern (also Nat Kern) joined his father at Foreign Reports in 1972 after graduating from Princeton University and attending the University of Riyadh in 1970 and 1971 as the first non-Arab student. By the time he graduated and joined the firm, rumblings of the first full-scale "energy crisis" had begun and the role of Saudi Arabia on the world scene began to be transformed.
Within two years of Nat's joining the firm, the world of oil and the Middle East had changed dramatically, with prices skyrocketing and the volumes of crude oil being produced in Saudi Arabia growing steadily. The firm's business branched out from providing political reporting on oil in the Middle East into also providing business development assistance to firms wishing to break into new markets in the Middle East, primarily, though not exclusively, in Saudi Arabia. The main areas the firm concentrated in were competitive bidding opportunities in the power and desalination markets. This required an understanding of the technologies, engineering and procurement issues inherent in complex projects, and Foreign Reports brought on board the necessary skilled individuals in these areas.
Nat Kern was a frequent visitor to Iraq during the 1980–1988 Iran-Iraq war, at a time when U.S.-Iraqi relations were improving, and was tasked by the U.S. government with maintaining ties with certain key Iraqi officials from 1991 onwards, at a time when the U.S. government maintained a policy of shunning any official contact with the Iraqi government.
Changing realities of the oil market
By the early 1980s, the nature of the world oil business began to change in a number of different ways, all of which affected how Foreign Reports would be able to continue to provide services to its client base. The major international oil companies were gradually losing their equity ownership of Middle East oil production and many needed to forge different kinds of relationships with producing governments. In addition, a new class of players in the oil market was gradually emerging as interest and liquidity grew in the futures market. World oil prices had been practically a secret in the early days of Foreign Reports and had been remarkably stable in general during the firm's first 16 years, but it would be another ten years before price volatility would become a major reason for the firm to develop another service for its clients.
OPEC did not institute its first quotas until 1982, just as crude oil prices were beginning to come under downward pressure in the market. When prices did eventually start to crash in late November 1985, no other reporting service in the industry had so closely chronicled how that crash would materialize as Foreign Reports had done. The firm had watched intensely as then-Saudi Petroleum Minister Ahmed Zaki Yamani wrestled over new ways to price Saudi Arabia's oil while he cruised the Mediterranean on his yacht during August 1985. Foreign Reports was the first to report that Yamani, just before that Labor Day, had got off his yacht and signed "net-back pricing deals" with his main international customers. These deals would cut all previous supports for crude oil prices and drive prices from the high $20s to single digits within nine months. Incredibly, in those early days of the NYMEX, futures prices did not start to decline until the day after Thanksgiving.
As the pace and sophistication of NYMEX trading has accelerated greatly since those days, and as access to the incredible amounts of information over the internet has exploded, the services that Foreign Reports has offered have also changed, while still staying with time-tested methods: follow the narrative; know the actors; know their characters; understand the rules; understand cultures and histories; pay ever increasing attention to separating the wheat from the chaff in an information-laden age; and communicate concisely and clearly.
Current work
Foreign Reports continues to report on political developments that are highly relevant to oil markets, crude oil price formation, and related macroeconomic variables. It closely monitors and reports on the political and economic situations in places such as: Iraq, Iran, Saudi Arabia, Nigeria, and Venezuela. The firm also reports on OPEC politics and examines what oil production decisions might be looming in the near future. Executive and legislative activities in the U.S. which affect world oil markets are also often reported on.
Iraq: Many of the world's major oil companies currently rely on Foreign Reports for their understanding of Iraq's political events, the status of its oil industry, and the broad trends which appear to be shaping the future of the country. With the security situation in Iraq not yet suitable for international oil companies to have much of a physical presence there, and because Western news agencies have a limited number of journalists stationed throughout the country, many energy companies, governments, and financial institutions rely extensively on the political reporting and analysis done by Foreign Reports.
Iran: The political and economic events in Iran are often discussed, yet rarely understood. Foreign Reports has shown an ability to look beyond the bluster and bombast coming out of Iran, in an attempt to understand what is really making things work in its domestic and international affairs. Recent reports have analyzed President Ahmadinejad's unique economic policies; Iranian involvement in Iraq; the effects of sanctions and the trajectory of the nuclear file; the Iranian risk premium; Iran's winter gas crisis; and Iran's inability to sell the heavy sour crude from its Nowruz and Soroush fields.
Saudi Arabia: With extensive experience and contacts in Saudi Arabia, Foreign Reports is able to report with confidence on the country with the largest oil reserves, highest level of oil production, and the highest spare production capacity. Political and economic decisions made by the Kingdom are critical to world oil markets, and Foreign Reports closely follows these developments. Recent reports have analyzed Saudi oil policies; the Saudi viewpoint on oil prices; the development of the 500,000 barrels per day project soon to be brought online from the Kingdom's Khursaniyah field; and the 1.2 million barrels per day project at the Khurais field expected to be brought online in June 2009.
Sources
Foreign Reports Inc. Homepage
Saudi-American Relations
External links
"Scarborough Country" for April 2, April 5, 2004. MSNBC transcript.
CSIS Report: "Saudi Arabia's Upstream and Downstream Expansion Plans for the Next Decade: A Saudi Perspective", By: Nathaniel Kern and Nawaf Obeid
Petroleum industry | Foreign Reports | Chemistry | 2,288 |
53,621,281 | https://en.wikipedia.org/wiki/Artificial%20cerebrospinal%20fluid | Artificial cerebrospinal fluid (aCSF) is a buffer solution prepared with a composition representative of cerebrospinal fluid that is used experimentally to immerse isolated brains, brain slices, or exposed brain regions to supply oxygen, maintain osmolarity, and buffer pH at biological levels. aCSF is commonly used for electrophysiology experiments to maintain the neurons that are being studied.
Composition
One protocol for electrophysiology recording suggests the following composition for aCSF, with the pH and oxygen level stabilized by bubbling with carbogen (95% O2 and 5% CO2):
127 mM NaCl
1.0 mM KCl
1.2 mM KH2PO4
26 mM NaHCO3
10 mM D-glucose
2.4 mM CaCl2
1.3 mM MgCl2
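To prepare the solution, each molar concentration converts to grams per litre via mass = concentration (mol/L) × molar mass (g/mol). The Python sketch below illustrates the arithmetic for the recipe above; the molar masses are approximate and assume anhydrous salts (hydrated forms such as CaCl2·2H2O or MgCl2·6H2O, often used in practice, require their own molar masses), so treat this as an illustration rather than a lab protocol.

```python
# Minimal sketch: convert the molar aCSF recipe into grams per litre.
# Molar masses are approximate and assume the anhydrous salts.

MOLAR_MASS_G_PER_MOL = {
    "NaCl": 58.44,
    "KCl": 74.55,
    "KH2PO4": 136.09,
    "NaHCO3": 84.01,
    "D-glucose": 180.16,
    "CaCl2": 110.98,
    "MgCl2": 95.21,
}

RECIPE_MM = {  # concentrations from the protocol above, in mmol/L
    "NaCl": 127.0, "KCl": 1.0, "KH2PO4": 1.2, "NaHCO3": 26.0,
    "D-glucose": 10.0, "CaCl2": 2.4, "MgCl2": 1.3,
}

def grams_per_litre(recipe_mm: dict[str, float]) -> dict[str, float]:
    # (mmol/L) * (g/mol) / 1000 = g/L
    return {name: mm * MOLAR_MASS_G_PER_MOL[name] / 1000.0
            for name, mm in recipe_mm.items()}

for name, grams in grams_per_litre(RECIPE_MM).items():
    print(f"{name}: {grams:.3f} g per litre")   # e.g. NaCl: 7.422 g per litre
```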
References
Neurophysiology
Electrophysiology
Laboratory techniques | Artificial cerebrospinal fluid | Chemistry | 181 |
11,436,434 | https://en.wikipedia.org/wiki/Cercospora%20brachypus | Cercospora brachypus is a fungal plant pathogen.
References
brachypus
Fungal plant pathogens and diseases
Taxa named by Benjamin Matlack Everhart
Fungi described in 1902
Fungus species | Cercospora brachypus | Biology | 40 |
34,121,965 | https://en.wikipedia.org/wiki/Freudenthal%20spectral%20theorem | In mathematics, the Freudenthal spectral theorem is a result in Riesz space theory proved by Hans Freudenthal in 1936. It roughly states that any element dominated by a positive element in a Riesz space with the principal projection property can in a sense be approximated uniformly by simple functions.
Numerous well-known results may be derived from the Freudenthal spectral theorem. The well-known Radon–Nikodym theorem, the validity of the Poisson formula and the spectral theorem from the theory of normal operators can all be shown to follow as special cases of the Freudenthal spectral theorem.
Statement
Let e be any positive element in a Riesz space E. A positive element p in E is called a component of e if p ∧ (e − p) = 0. If p1, ..., pn are pairwise disjoint components of e, any real linear combination of p1, ..., pn is called an e-simple function.
The Freudenthal spectral theorem states: Let E be any Riesz space with the principal projection property and e any positive element in E. Then for any element f in the principal ideal generated by e, there exist sequences {s_n} and {t_n} of e-simple functions, such that {s_n} is monotone increasing and converges e-uniformly to f, and {t_n} is monotone decreasing and converges e-uniformly to f.
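Since the statement relies on the notion of e-uniform convergence, which is not spelled out above, the following LaTeX sketch records the standard definitions for convenience (the symbols and sequence names are our notational choices, not from the source):

```latex
% e-simple functions: finite real combinations of pairwise disjoint
% components of e (each component satisfies p \wedge (e - p) = 0).
\[
  s \;=\; \sum_{k=1}^{n} \alpha_k \, p_k ,
  \qquad \alpha_k \in \mathbb{R}, \quad
  p_k \wedge p_j = 0 \;\; (k \neq j).
\]
% e-uniform convergence: s_n -> f e-uniformly if for every
% \varepsilon > 0 there exists N such that
\[
  |f - s_n| \;\le\; \varepsilon\, e
  \qquad \text{for all } n \ge N .
\]
```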
Relation to the Radon–Nikodym theorem
Let (X, Σ) be a measure space and M_σ the real space of signed σ-additive measures on (X, Σ). It can be shown that M_σ is a Dedekind complete Banach lattice with the total variation norm, and hence has the principal projection property. For any positive measure μ, μ-simple functions (as defined above) can be shown to correspond exactly to μ-measurable simple functions on (X, Σ) (in the usual sense). Moreover, since by the Freudenthal spectral theorem, any measure in the band generated by μ can be monotonously approximated from below by μ-simple functions, by Lebesgue's monotone convergence theorem it can be shown to correspond to an L¹(μ) function, which establishes an isometric lattice isomorphism between the band generated by μ and the Banach lattice L¹(μ).
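Concretely, the isomorphism pairs each f in L¹(μ) with the measure obtained by integrating against f, which is precisely the content of the Radon–Nikodym theorem. A short LaTeX sketch (notation ours):

```latex
% Each f in L^1(mu) corresponds to the measure nu absolutely
% continuous with respect to mu given by integration against f.
\[
  \nu(A) \;=\; \int_{A} f \, d\mu
  \qquad \text{for all measurable } A,
  \qquad f \;=\; \frac{d\nu}{d\mu} \in L^{1}(\mu).
\]
```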
See also
Radon–Nikodym theorem
References
Theorems in functional analysis | Freudenthal spectral theorem | Mathematics | 435 |
13,118,658 | https://en.wikipedia.org/wiki/Farmland%20preservation | Farmland preservation is a joint effort by non-governmental organizations and local governments to set aside and protect examples of a region's farmland for the use, education, and enjoyment of future generations. Preservation programs are operated mostly at state and local levels by government agencies or private entities such as land trusts and are designed to limit conversion of agricultural land to other uses that otherwise might have been more financially attractive to the land owner. Through different government programs and policy enactments, farmers are able to preserve their land for growing crops and raising livestock. Every state provides tax relief through differential (preferential) assessment. Easements are a popular approach and allow the farms to remain operational. Less common approaches include establishing agricultural districts, using zoning to protect agricultural land, purchasing development rights, and transferable development rights. It is often a part of regional planning and national historic preservation. Farmland preservation efforts have been taking place across the United States, such as in Virginia, Minnesota, Maryland, Florida, and Connecticut.
History
New Jersey passed the Farmland Assessment Act of 1964 to mitigate the loss of farmland to rapid suburban development through the use of favorable tax assessments. The act dealt with how land is assessed for taxes, basing assessments on the productivity level of the land. The thinking behind the act was that by helping cut taxes on farmland, local farmers would be more likely to stay in business. But by the late 1970s, the value of farmland had outstripped the tax benefits of the act, so the state began purchasing deed restrictions on farms through the Agriculture Retention and Development Act of 1981. That act enabled the State of New Jersey to purchase easements on farms, preventing the rezoning and development of these areas for industrial, commercial, residential, or other uses. Per the State, as of 2022, the act has helped save some 2,800 farms totaling 247,517 acres. Regional efforts in Monmouth County, New Jersey include the Navesink Highlands Greenway, a project of the Monmouth County Farmland Preservation Program, which, along with the Monmouth Conservation Foundation, purchased the development rights of the Holly Crest Farm in Middletown in September 2008 for US$2.5 million. Over 20 percent of county farmlands and open spaces are permanently preserved. The area is delineated as a land trust, which means that the land itself is publicly owned: a home purchaser buys the building itself and enters a long-term lease with the land-owning entity. The land trust covers some 665 acres spanning a variety of rural to urban communities. It is managed by an executive board and a board of trustees, which make land-use decisions guided by the principles of conservation and sustainability.
American Farmland Trust (AFT) was established in 1980 to preserve farmland and promote sustainable farming practices. Since its inception, the AFT has grown to be one of the largest farmland conservation groups in the nation, boasting 7.1 million acres protected and $117 billion accrued through its efforts. Its goal is to keep farmers on the land by supporting them economically, so that farmers can also adopt more conservation-minded farming practices. AFT holds that such practices underpin the long-term success of farms, promoting healthy ecosystems and water tables while producing adequate crops for the growing world population. The Genesee Valley Conservancy, founded in New York in 1990, is a public land trust in western New York whose holdings include some 32,787 acres along the Genesee watershed. Its scope is to protect habitats, open spaces, and farmland; to that end it hopes to add a nature preserve and expand existing ones. To increase its visibility and community understanding within the region, it plans to provide educational and recreational opportunities, and to diversify funding it plans to outline specific funding plans for projects, in the hope that defined goals will help bring on new support.
Farmland management
Conservation easements are one approach used to manage protected farms. Several government programs invest in conservation easements on farmland; one such program, run by the U.S. Department of Agriculture, is the Agricultural Conservation Easement Program (ACEP). The program maintains current farms by preserving existing land for agricultural uses only, protecting livestock grazing interests and the health of the land for growing crops. Through ACEP, groups that want to preserve land, such as non-profit organizations, local governments, and Indian tribes, can be supplied funds to purchase agricultural land easements.
A transferable development rights program offers landowners financial incentives or bonuses for the conservation and maintenance of agricultural land. Land developers can purchase the development rights of certain properties within a designated "sending district" and transfer the rights to another "receiving district" to increase the density of their new development. By channeling growth into designated receiving zones, the approach keeps farmland from being sold for development and preserves it for agriculture.
In addition to these programs, several bills have been passed to preserve farmland. The Farmland Protection Policy Act (FPPA) was enacted to ensure that federally funded programs do not develop land designated for crop growing or other agricultural purposes. The FPPA protects farmland from federally funded construction and other government projects that require the acquisition of property.
Besides programs that preserve farmland, there are also government programs to help the land that the crops grow on. The Farm Service Agency (FSA) offers annual rental payments to farmers who take certain parts of their land out of production so that the soil can improve. This program helps farmers take care of their land by providing support to improve water and soil quality, managing the health of the farmland.
Preservation efforts
Virginia
In 2019, Virginia's Office of Farmland Preservation allocated matching funds to local programs that purchased development rights of farmland. That year, the program was able to preserve 14,163.99 acres by matching $12,085,163.61 to funds raised by local programs. Elsing Green is a 2,254-acre historic Colonial Virginia plantation that granted the Virginia Historic Landmarks Commission a preservation easement in 1980. This easement will ensure that the green and its surrounding areas are protected from demolition, inappropriate development, and any future commercial development. The green was also placed on the Virginia Landmarks Register (VLR) on May 13, 1969. Similarly, the Oatlands Historical House and Gardens is a 263-acre plantation that was donated to the National Trust for Historic Preservation by the daughters of its final owner. The site was designated as a National Historic Landmark by the National Park Service.
Maryland
In 2023, the Maryland Agricultural Land Preservation Foundation was able to permanently preserve 4,600 acres of farmland by using $16,76732.23 in easements. The largest share of this land was located in Kent County, where 1,365 acres were preserved through $5,850,144.98 worth of easements. The Hampton Historical Site is a 63-acre preservation that includes the historic Hampton Mansion, gardens, historic farm buildings, slave quarters, and a family cemetery. In the face of suburban expansion and farming becoming less viable, the Ridgely family decided to sell the remaining property to the National Park Service. The site was restored before reopening in 1950.
Minnesota
The Minnesota Land Trust has been able to preserve approximately 79,421 acres that span across 698 projects. The largest deal made by the trust was in 2021, with the purchase of 4 parcels valued at $4.2 million. Upon purchase, the land was donated to St. Louis County, which will manage the land for recreation, wildlife, and sustainable timber harvest. In 2024, the Krueger Christmas Tree Farm completed easements that preserve 36 of the 46 acres on the farm.
Florida
In 2023, Florida Agriculture Commissioner Wilton Simpson helped secure $300 million in funding for the Rural and Family Lands Protection Program (RFLPP). The program intends to provide funds for easements on these farms, which in turn serve as a buffer to the Florida Wildlife Corridor. The Department of Agriculture released rankings of 257 farms and placed Trailhead Blue Springs, a 12,098-acre cattle ranch, first. The largest of these projects is the Adams Ranch in Osceola County, which is 24,027 acres and is used for cattle production. In a joint effort by Conservation Florida and the Natural Resources Conservation Service, easements were placed on the XL Ranch Lightsey Cove that helped protect 527 acres along the Florida Wildlife Corridor. This ranch is also located within the Avon Park Air Force Range Sentinel Landscape, which covers nearly 1.7 million acres and is home to parts of the Everglades Headwaters National Wildlife Refuge and Conservation Area.
Connecticut
In Connecticut, the Farmland Preservation Program has preserved over 45,300 acres of land across 373 farms. This includes the historic Maple Bank Farm, which was offered an easement to preserve 51 of the farm's 80 acres. In 2009, the program engaged in a three-way deal with the Connecticut Department of Agriculture and the United States Department of Agriculture to preserve Winsneke Farm; the deal was the first in the history of the program to include the State of Connecticut, a land trust, and a federal agency. After a 3-year application process, the town of Southington and the Farmland Preservation Program split the purchase of development rights to Karabin Farms in 2021. This purchase protected over 1,000 acres of farmland and ensured that the farm would remain operational.
Partial list of preserved farms
Elsing Green
Hampton National Historic Site
Oatlands Plantation
See also
Agricultural Land Reserve
Development-supported agriculture
Environmental Conservation Acreage Reserve Program
Preservation development
Sunderland, Massachusetts#Housing and development
Farm
Agriculture
Department of Agriculture (United States)
References
External links
American Farmland Trust
Information about Farming https://www.usda.gov/topics/farming
Map of Farmlands in America https://www.nass.usda.gov/Charts_and_Maps/Farms_and_Land_in_Farms/index.php
Human geography
Land use
Urban planning | Farmland preservation | Engineering,Environmental_science | 2,103 |
97,315 | https://en.wikipedia.org/wiki/Heh%20%28god%29 | Ḥeḥ (ḥḥ, also Huh, Hah, Hauh, Huah, and Hehu) was the personification of infinity or eternity in the Ogdoad in ancient Egyptian religion. His name originally meant "flood", referring to the watery chaos Nu that the Egyptians believed existed before the creation of the world. The Egyptians envisioned this chaos as infinite, in contrast with the finite created world, so Heh personified this aspect of the primordial waters. Heh's female counterpart and consort was known as Hauhet, which is simply the feminine form of his name.
Like the other concepts in the Ogdoad, his male form was often depicted as a frog, or a frog-headed human, and his female form as a snake or snake-headed human. The frog head symbolised fertility, creation, and regeneration, and was also possessed by the other Ogdoad males Kek, Amun, and Nun. The other common representation depicts him crouching, holding a palm stem in each hand (or just one), sometimes with a palm stem in his hair, as palm stems represented long life to the Egyptians, the years being represented by notches on them. Depictions of this form also had a shen ring at the base of each palm stem, which represented infinity. Depictions of Heh were also used in hieroglyphs to represent one million, which was essentially considered equivalent to infinity in ancient Egyptian mathematics. Thus this deity is also known as the "god of millions of years".
Origins and mythology
The primary meaning of the Egyptian word ḥeḥ was "million" or "millions"; a personification of this concept, Ḥeḥ, was adopted as the Egyptian god of infinity. With his female counterpart Ḥauḥet (or Ḥeḥut), Ḥeḥ represented one of the four god-goddess pairs comprising the Ogdoad, a pantheon of eight primeval deities whose worship was centred at Hermopolis Magna.
The mythology of the Ogdoad describes its eight members, Heh and Hauhet, Nu and Naunet, Amun and Amaunet, and Kuk and Kauket, coming together in the cataclysmic event that gives rise to the sun (and its deific personification, Atum).
Heh sometimes helps Shu, a god associated with air, in supporting the sky goddess Nut. In the Book of the Heavenly Cow, eight Heh gods are depicted together with Shu supporting Nut, who has taken the form of a cow.
Forms and iconography
The god Ḥeḥ was usually depicted anthropomorphically, as in the hieroglyphic character, as a male figure with divine beard and lappet wig. Normally kneeling (one knee raised), sometimes in a basket (the sign for "all"), the god typically holds in each hand a notched palm branch (palm rib). These were employed in the temples for ceremonial time-keeping, a use that explains why the palm branch became the hieroglyphic symbol for rnp.t, "year". Occasionally, an additional palm branch is worn on the god's head.
In ancient Egyptian numerals, gods such as Heh were used to represent numbers in a decimal system. In particular, the number 1,000,000 is depicted by the hieroglyph of Heh in his normal seated position.
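As an illustrative sketch (added here; the symbol names are conventional Egyptological labels, not taken from this article), Egyptian numerals formed numbers additively from hieroglyphs for each power of ten, with Heh serving as the sign for 1,000,000:

    # Decompose a positive integer (< 10,000,000) into Egyptian numeral signs.
    SYMBOLS = {
        1_000_000: "Heh (kneeling god with raised arms)",
        100_000: "tadpole",
        10_000: "finger",
        1_000: "water lily",
        100: "coil of rope",
        10: "cattle hobble",
        1: "single stroke",
    }

    def egyptian_numeral(n):
        parts = []
        for value in sorted(SYMBOLS, reverse=True):
            count, n = divmod(n, value)
            if count:
                parts.append((count, SYMBOLS[value]))
        return parts

    print(egyptian_numeral(2_300_012))
    # [(2, 'Heh (kneeling god with raised arms)'), (3, 'tadpole'),
    #  (1, 'cattle hobble'), (2, 'single stroke')]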
Cult and worship
The personified, somewhat abstract god of eternity Ḥeḥ possessed no known cult centre or sanctuary; rather, his veneration revolved around symbolism and personal belief. The god's image and its iconographic elements reflected the wish for millions of years of life or rule; as such, the figure of Ḥeḥ finds frequent representation in amulets, prestige items and royal iconography from the late Old Kingdom period onwards. Heh became associated with the king and his quest for longevity. For instance, he appears in two cartouches on the tomb of King Tutankhamen, where he is crowned with a winged scarab beetle, symbolizing existence, and a sun disk. The placement of Heh in relation to King Tutankhamen's corpse signifies that he will be granting the king these "millions of years" into the afterlife.
Gallery
See also
Renpet
Bibliography
References
Egyptian gods
Time and fate gods
Time and fate goddesses
Infinity
Piscine and amphibian humanoids
Snake gods
Sky supporters | Heh (god) | Mathematics | 903 |
444,763 | https://en.wikipedia.org/wiki/300%20%28number%29 | 300 (three hundred) is the natural number following 299 and preceding 301.
In Mathematics
300 is a composite number and the 24th triangular number.
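As a one-line check (the standard formula, added here, is not part of the source text): the nth triangular number is T_n = n(n + 1)/2, so

    T_{24} = \frac{24 \times 25}{2} = 300.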
Integers from 301 to 399
300s
301
302
303
304
305
306
307
308
309
310s
310
311
312
313
314
315
315 = 3² × 5 × 7, rencontres number, highly composite odd number, having 12 divisors.
316
316 = 2² × 79, a centered triangular number and a centered heptagonal number.
317
317 is a prime number, Eisenstein prime with no imaginary part, Chen prime, one of the rare primes to be both right and left-truncatable, and a strictly non-palindromic number.
317 is the exponent (and number of ones) in the fourth base-10 repunit prime.
318
319
319 = 11 × 29. 319 is the sum of three consecutive primes (103 + 107 + 109), Smith number, cannot be represented as the sum of fewer than 19 fourth powers, happy number in base 10
320s
320
320 = 2⁶ × 5 = (2⁵) × (2 × 5). 320 is a Leyland number, and maximum determinant of a 10 by 10 matrix of zeros and ones.
321
321 = 3 × 107, a Delannoy number
322
322 = 2 × 7 × 23. 322 is a sphenic, nontotient, untouchable, and a Lucas number. It is also the first unprimeable number to end in 2.
323
323 = 17 × 19. 323 is the sum of nine consecutive primes (19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), the sum of the 13 consecutive primes (5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47), Motzkin number. A Lucas and Fibonacci pseudoprime. See 323 (disambiguation)
324
324 = 2² × 3⁴ = 18². 324 is the sum of four consecutive primes (73 + 79 + 83 + 89), totient sum of the first 32 integers, a square number, and an untouchable number.
325
326
326 = 2 × 163. 326 is a nontotient, noncototient, and an untouchable number. 326 is the sum of the 14 consecutive primes (3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47), lazy caterer number
327
327 = 3 × 109. 327 is a perfect totient number, number of compositions of 10 whose run-lengths are either weakly increasing or weakly decreasing
328
328 = 2³ × 41. 328 is a refactorable number, and it is the sum of the first fifteen primes (2 + 3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47).
329
329 = 7 × 47. 329 is the sum of three consecutive primes (107 + 109 + 113), and a highly cototient number.
330s
330
330 = 2 × 3 × 5 × 11. 330 is sum of six consecutive primes (43 + 47 + 53 + 59 + 61 + 67), pentatope number (and hence equal to the binomial coefficient C(11, 4)), a pentagonal number, divisible by the number of primes below it, and a sparsely totient number.
331
331 is a prime number, super-prime, cuban prime, a lucky prime, sum of five consecutive primes (59 + 61 + 67 + 71 + 73), centered pentagonal number, centered hexagonal number, and Mertens function returns 0.
332
332 = 2² × 83, Mertens function returns 0.
333
333 = 3² × 37, Mertens function returns 0; repdigit; 2³³³ is the smallest power of two greater than a googol.
334
334 = 2 × 167, nontotient.
335
335 = 5 × 67. 335 is divisible by the number of primes below it, number of Lyndon words of length 12.
336
336 = 2⁴ × 3 × 7, untouchable number, number of partitions of 41 into prime parts, largely composite number.
337
337, prime number, emirp, permutable prime with 373 and 733, Chen prime, star number
338
338 = 2 × 13², nontotient, number of square (0,1)-matrices without zero rows and with exactly 4 entries equal to 1.
339
339 = 3 × 113, Ulam number
340s
340
340 = 2² × 5 × 17, sum of eight consecutive primes (29 + 31 + 37 + 41 + 43 + 47 + 53 + 59), sum of ten consecutive primes (17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), sum of the first four powers of 4 (4¹ + 4² + 4³ + 4⁴), divisible by the number of primes below it, nontotient, noncototient. Number of regions formed by drawing the line segments connecting any two of the 12 perimeter points of a 3 × 3 grid of squares.
341
342
342 = 2 × 3² × 19, pronic number, untouchable number.
343
343 = 7³, the first nice Friedman number that is composite since 343 = (3 + 4)³. It is the only known example of x² + x + 1 = y³, in this case, x = 18, y = 7. It is z³ in a triplet (x, y, z) such that x⁵ + y² = z³.
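Both claims are quick to verify by hand (arithmetic added here, not in the source):

    343 = (3 + 4)^3 = 7^3, \qquad 18^2 + 18 + 1 = 324 + 18 + 1 = 343.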
344
344 = 2³ × 43, octahedral number, noncototient, totient sum of the first 33 integers, refactorable number.
345
345 = 3 × 5 × 23, sphenic number, idoneal number
346
346 = 2 × 173, Smith number, noncototient.
347
347 is a prime number, emirp, safe prime, Eisenstein prime with no imaginary part, Chen prime, Friedman prime since 347 = 7³ + 4, twin prime with 349, and a strictly non-palindromic number.
348
348 = 2² × 3 × 29, sum of four consecutive primes (79 + 83 + 89 + 97), refactorable number.
349
349, prime number, twin prime, lucky prime, sum of three consecutive primes (109 + 113 + 127), 5³⁴⁹ − 4³⁴⁹ is a prime number.
350s
350
350 = 2 × 5² × 7, primitive semiperfect number, divisible by the number of primes below it, nontotient; a truncated icosahedron of frequency 6 has 350 hexagonal faces and 12 pentagonal faces.
351
351 = 3³ × 13, 26th triangular number, sum of five consecutive primes (61 + 67 + 71 + 73 + 79), member of Padovan sequence and number of compositions of 15 into distinct parts.
352
352 = 2⁵ × 11, the number of n-queens problem solutions for n = 9. It is the sum of two consecutive primes (173 + 179), lazy caterer number.
353
354
354 = 2 × 3 × 59 = 1⁴ + 2⁴ + 3⁴ + 4⁴, sphenic number, nontotient, also SMTP code meaning start of mail input. It is also the sum of the absolute values of the coefficients of Conway's polynomial.
355
355 = 5 × 71, Smith number, Mertens function returns 0, divisible by the number of primes below it. The cototient of 355 is 75, where 75 is the product of its digits (3 × 5 × 5 = 75).
The numerator of the best simplified rational approximation of pi having a denominator of four digits or fewer. This fraction (355/113) is known as Milü and provides an extremely accurate approximation for pi, being accurate to seven digits.
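Numerically (comparison added here; the values are standard):

    \frac{355}{113} = 3.14159292\ldots, \qquad \pi = 3.14159265\ldots,

so the approximation is in error by only about 2.7 × 10⁻⁷.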
356
356 = 2² × 89, Mertens function returns 0.
357
357 = 3 × 7 × 17, sphenic number.
358
358 = 2 × 179, sum of six consecutive primes (47 + 53 + 59 + 61 + 67 + 71), Mertens function returns 0, number of ways to partition {1,2,3,4,5} and then partition each cell (block) into subcells.
359
360s
360
361
361 = 19². 361 is a centered triangular number, centered octagonal number, centered decagonal number, member of the Mian–Chowla sequence; also the number of positions on a standard 19 × 19 Go board.
362
362 = 2 × 181 = σ₂(19): sum of squares of divisors of 19. Mertens function returns 0, nontotient, noncototient.
363
364
364 = 2² × 7 × 13, tetrahedral number, sum of twelve consecutive primes (11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), Mertens function returns 0, nontotient.
It is a repdigit in base 3 (111111), base 9 (444), base 25 (EE), base 27 (DD), base 51 (77) and base 90 (44), the sum of six consecutive powers of 3 (1 + 3 + 9 + 27 + 81 + 243), and the twelfth non-zero tetrahedral number.
365
366
366 = 2 × 3 × 61, sphenic number, Mertens function returns 0, noncototient, number of complete partitions of 20, a 26-gonal and 123-gonal number. Also the number of days in a leap year.
367
367 is a prime number, a lucky prime, Perrin number, happy number, prime index prime and a strictly non-palindromic number.
368
368 = 2⁴ × 23. It is also a Leyland number.
369
370s
370
370 = 2 × 5 × 37, sphenic number, sum of four consecutive primes (83 + 89 + 97 + 101), nontotient, with 369 part of a Ruth–Aaron pair with only distinct prime factors counted, base-10 Armstrong number since 3³ + 7³ + 0³ = 370.
371
371 = 7 × 53, sum of three consecutive primes (113 + 127 + 131), sum of seven consecutive primes (41 + 43 + 47 + 53 + 59 + 61 + 67), sum of the primes from its least to its greatest prime factor (the next such composite number is 2935561623745), Armstrong number since 3³ + 7³ + 1³ = 371.
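Both Armstrong-number claims above (for 370 and 371) can be checked by brute force; a minimal Python sketch, added here for illustration:

    # Three-digit Armstrong (narcissistic) numbers: equal to the sum of
    # the cubes of their digits.
    armstrong = [n for n in range(100, 1000)
                 if n == sum(int(d) ** 3 for d in str(n))]
    print(armstrong)  # [153, 370, 371, 407]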
372
372 = 2² × 3 × 31, sum of eight consecutive primes (31 + 37 + 41 + 43 + 47 + 53 + 59 + 61), noncototient, untouchable number, refactorable number.
373
373, prime number, balanced prime, one of the rare primes to be both right- and left-truncatable (two-sided prime), sum of five consecutive primes (67 + 71 + 73 + 79 + 83), sexy prime with 367 and 379, permutable prime with 337 and 733, palindromic prime in 3 consecutive bases: 565₈ = 454₉ = 373₁₀, and also in base 4: 11311₄.
374
374 = 2 × 11 × 17, sphenic number, nontotient, 374⁴ + 1 is prime.
375
375 = 3 × 5³, number of regions in a regular 11-gon with all diagonals drawn.
376
376 = 2³ × 47, pentagonal number, 1-automorphic number, nontotient, refactorable number. When 376 is squared, the result again ends in 376, since 376 × 376 = 141376; it is one of the two three-digit numbers whose squares end in the number itself.
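A two-line Python check of that claim (added here for illustration):

    # Three-digit automorphic numbers: the square ends in the number itself.
    print([n for n in range(100, 1000) if (n * n) % 1000 == n])  # [376, 625]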
377
377 = 13 × 29, Fibonacci number, a centered octahedral number, a Lucas and Fibonacci pseudoprime, the sum of the squares of the first six primes.
378
378 = 2 × 3³ × 7, 27th triangular number, cake number, hexagonal number, Smith number.
379
379 is a prime number, Chen prime, lazy caterer number and a happy number in base 10. It is the sum of the first 15 odd primes (3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53). 379! - 1 is prime.
380s
380
380 = 2² × 5 × 19, pronic number, number of regions into which a figure made up of a row of 6 adjacent congruent rectangles is divided upon drawing diagonals of all possible rectangles.
381
381 = 3 × 127, palindromic in base 2 and base 8.
381 is the sum of the first 16 prime numbers (2 + 3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53).
382
382 = 2 × 191, sum of ten consecutive primes (19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53 + 59), Smith number.
383
383, prime number, safe prime, Woodall prime, Thabit number, Eisenstein prime with no imaginary part, palindromic prime. It is also the first number where the sum of a prime and the reversal of the prime is also a prime. 4³⁸³ − 3³⁸³ is prime.
384
385
385 = 5 × 7 × 11, sphenic number, square pyramidal number, the number of integer partitions of 18.
385 = 10² + 9² + 8² + 7² + 6² + 5² + 4² + 3² + 2² + 1²
386
386 = 2 × 193, nontotient, noncototient, centered heptagonal number, number of surface points on a cube with edge-length 9.
387
387 = 3² × 43, number of graphical partitions of 22.
388
388 = 2² × 97 = solution to the postage stamp problem with 6 stamps and 6 denominations, number of uniform rooted trees with 10 nodes.
389
389, prime number, emirp, Eisenstein prime with no imaginary part, Chen prime, highly cototient number, strictly non-palindromic number. Smallest conductor of a rank-2 elliptic curve.
390s
390
390 = 2 × 3 × 5 × 13, sum of four consecutive primes (89 + 97 + 101 + 103), nontotient.
391
391 = 17 × 23, Smith number, centered pentagonal number.
392
392 = 2³ × 7², Achilles number.
393
393 = 3 × 131, Blum integer, Mertens function returns 0.
394
394 = 2 × 197 = S₅, a Schröder number, nontotient, noncototient.
395
395 = 5 × 79, sum of three consecutive primes (127 + 131 + 137), sum of five consecutive primes (71 + 73 + 79 + 83 + 89), number of (unordered, unlabeled) rooted trimmed trees with 11 nodes.
396
396 = 2² × 3² × 11, sum of twin primes (197 + 199), totient sum of the first 36 integers, refactorable number, Harshad number, digit-reassembly number.
397
397, prime number, cuban prime, centered hexagonal number.
398
398 = 2 × 199, nontotient.
399
399 = 3 × 7 × 19, sphenic number, smallest Lucas–Carmichael number, and a Leyland number of the second kind. 399! + 1 is prime.
References
Integers | 300 (number) | Mathematics | 3,406 |
1,228,836 | https://en.wikipedia.org/wiki/Cleanser | The term cleanser refers to a product that cleans or removes dirt or other substances. A cleanser could be a detergent, and many types of cleansers are produced with a specific objective or focus; examples include the degreasers and carburetor cleansers used in automotive mechanics to clean certain engine and car parts.
Other varieties include the ones used in cosmetology, dermatology or general skin care. In this case, a cleanser is a facial care product that is used to remove make-up, skin care product residue, microbes, dead skin cells, oils, sweat, dirt and other types of daily pollutants from the face. These washing aids help prevent the accumulation of filth, infections, clogged pores, irritation and cosmetic issues such as dullness from dead skin buildup and excessive skin shine from sebum buildup. This can also aid in preventing or treating certain skin conditions, such as acne. Cleansing is the first step in a skin care regimen; it may be preceded by makeup removal with a makeup remover and cotton pads, and is typically followed by a toner and moisturizer.
Sometimes "double cleansing" before moving on to any other skincare product is encouraged to ensure the full dissolution & removal of residues that might be more resistant to cleansing, such as; waterproof makeup, water-resistant sunscreen, the excess sebum of oily skin-type individuals and air pollution particles. Double cleansing usually involves applying a lipid-soluble cleanser (e.g. cleansing balm, cleansing oil, micellar cleansing water) to dry skin and massaging it around the face for a length of time, then the area may or may not be splashed with water. Any type of aqueous cleanser is then emulsified with water and used as the main cleanser that removes the first cleanser and further cleans the skin. Then the face is finally thoroughly rinsed with water until no filth or product residue remains.
Using a cleanser designated for the facial skin to remove dirt is considered to be a better alternative to bar soap or another form of skin cleanser not specifically formulated for the face for the following reasons:
Bar soap has an alkaline pH (in the area of 9 to 10), while the pH of a healthy skin surface is around 4.7 on average. This means that soap can change the balance present in the skin to favor the overgrowth of some types of bacteria, increasing acne. To maintain healthy skin, the skin surface must stay close to its naturally acidic pH; some individuals who use bar soap therefore apply pH-balancing toners after cleansing in an attempt to compensate for the alkalinity of their soaps.
Bar cleansers have thickeners that allow them to assume a bar shape; solid shampoos, face washes and body washes are often labeled "bar cleansers" for the same reason. These thickeners can clog pores, which may lead to pimples in susceptible individuals.
Using bar soap on the face can remove natural oils from the skin that form a barrier against water loss. This causes the sebaceous glands to subsequently overproduce oil, a condition known as reactive seborrhoea, which can lead to clogged pores. To avoid drying out the skin, many cleansers incorporate moisturizers.
Facial cleansers
Facial cleansers include the following:
Balm cleansers
Bar cleansers
Clay cleansers
Cold cream cleansers
Creamy cleansers
Exfoliant/Scrub cleansers
Foam/Foaming cleansers
Gel/Jelly cleansers
Lotion cleansers
Micellar cleansers
Milky cleansers
Oil cleansers
Powder cleansers
Treatment/Medicated cleansers (aloe vera, benzoyl peroxide, carboxylic acids, charcoal, colloidal oatmeal, honey, sulphur, vitamin C, lighteners)
Tool cleansers (cotton rounds, konjac sponges, microfiber cloths, mitts, silicone brushes, spinning brushes, sponges, towelettes/wipes)
Cleansers that have active ingredients are more suitable for oily skin, to prevent breakouts. However, they may overdry and irritate dry skin, which can make the skin appear and feel worse. Dehydrated skin may require a creamy lotion-type cleanser. These are normally too gentle to be effective on oily or even normal skin, but dry skin requires much less cleansing power. It may be a good idea to select a cleanser that is alcohol-free for use on dry, sensitive, or dehydrated skin.
Some cleansers may incorporate fragrance or essential oils. For some people, however, these cleansers may irritate the skin and provoke allergic responses. People with such sensitivity should find cleansers that are pH-balanced, contain fewer irritants, suit a variety of skin types, and do not make the skin feel dehydrated directly after cleansing. Tight, uncomfortable skin is often dehydrated and may appear shiny after cleansing, even when no sebum is present; this is due to the tightening and 'stripping' effect some cleansers can have on the skin. One should discontinue use of a cleanser that upsets the balance of the skin; cleansers should work with the skin, not against it. Finding the right cleanser can involve some trial and error.
References
Skin care
Cleaning products
Personal hygiene products | Cleanser | Chemistry | 1,196 |
16,096,622 | https://en.wikipedia.org/wiki/Aluminium-26 | Aluminium-26 (²⁶Al, Al-26) is a radioactive isotope of the chemical element aluminium, decaying by either positron emission or electron capture to stable magnesium-26. The half-life of ²⁶Al is 717,000 years. This is far too short for the isotope to survive as a primordial nuclide, but a small amount of it is produced by collisions of atoms with cosmic ray protons.
Decay of aluminium-26 also produces gamma rays and x-rays. The x-rays and Auger electrons are emitted by the excited atomic shell of the daughter ²⁶Mg after the electron capture, which typically leaves a hole in one of the lower sub-shells.
Because it is radioactive, it is typically stored behind lead shielding. Contact with ²⁶Al may result in radiological contamination. This necessitates special tools for transfer, use, and storage.
Dating
Aluminium-26 can be used to calculate the terrestrial age of meteorites and comets. It is produced in significant quantities in extraterrestrial objects via spallation of silicon, alongside beryllium-10. After an object falls to Earth, the atmosphere shields its silicon from cosmic rays, so ²⁶Al production ceases and its abundance relative to other cosmogenic nuclides decreases. The amount of ²⁶Al in a sample can therefore be used to calculate the date the meteorite fell to Earth.
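As an illustrative sketch (not from the article; the function name and sample fractions are invented for illustration), the terrestrial age follows from ordinary exponential decay with the 717,000-year half-life quoted above:

    import math

    HALF_LIFE_YEARS = 717_000  # half-life of aluminium-26

    def terrestrial_age(fraction_left):
        """Years elapsed for a given surviving fraction of 26Al."""
        return HALF_LIFE_YEARS / math.log(2) * math.log(1.0 / fraction_left)

    print(f"{terrestrial_age(0.5):,.0f} years")   # 717,000 (one half-life)
    print(f"{terrestrial_age(0.25):,.0f} years")  # 1,434,000 (two half-lives)

In practice the surviving fraction is typically inferred relative to other cosmogenic nuclides such as beryllium-10, as the article notes.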
Occurrence in the interstellar medium
The gamma ray emission from the decay of aluminium-26 at 1809 keV was the first observed gamma emission from the Galactic Center. The observation was made by the HEAO-3 satellite in 1984.
²⁶Al is mainly produced in supernovae ejecting many radioactive nuclides in the interstellar medium. The isotope is believed to be crucial for the evolution of planetary objects, providing enough heat to melt and differentiate accreting planetesimals. This is known to have happened during the early history of the asteroids 1 Ceres and 4 Vesta. ²⁶Al has been hypothesized to have played a role in the unusual shape of Saturn's moon Iapetus. Iapetus is noticeably flattened and oblate, indicating that it rotated significantly faster early in its history, with a rotation period possibly as short as 17 hours. Heating from ²⁶Al could have provided enough heat in Iapetus to allow it to conform to this rapid rotation period, before the moon cooled and became too rigid to relax back into hydrostatic equilibrium.
The presence of the aluminium monofluoride molecule as the ²⁶Al isotopologue in CK Vulpeculae, which is an unknown type of nova, constitutes the first solid evidence of an extrasolar radioactive molecule.
Aluminium-26 in the early Solar System
In considering the known melting of small planetary bodies in the early Solar System, H. C. Urey noted that the naturally occurring long-lived radioactive nuclei (⁴⁰K, ²³⁸U, ²³⁵U and ²³²Th) were insufficient heat sources. He proposed that heat from short-lived nuclei from newly formed stars might be the source and identified ²⁶Al as the most likely choice. This proposal was made well before the general problems of stellar nucleosynthesis of the nuclei were known or understood. This conjecture was based on the discovery of ²⁶Al in a Mg target by Simanton, Rightmire, Long & Kohman.
Their search was undertaken because hitherto there was no known radioactive isotope of Al that might be useful as a tracer. Theoretical considerations suggested that a state of ²⁶Al should exist. The lifetime of ²⁶Al was not then known; it was only estimated to lie between 10⁴ and 10⁶ years. The search for ²⁶Al took place over many years, long after the discovery of the extinct radionuclide ¹²⁹I, which showed that stellar sources formed ~10⁸ years before the Sun had contributed to the Solar System mix. The asteroidal materials that provide meteorite samples were long known to be from the early Solar System.
The Allende meteorite, which fell in 1969, contained abundant calcium–aluminium-rich inclusions (CAIs). These are very refractory materials and were interpreted as being condensates from a hot solar nebula. It was then discovered that the oxygen in these objects was enhanced in ¹⁶O by ~5% while the ¹⁷O/¹⁸O ratio was the same as terrestrial. This clearly showed a large effect in an abundant element that might be nuclear, possibly from a stellar source. These objects were then found to contain strontium with very low ⁸⁷Sr/⁸⁶Sr, indicating that they were a few million years older than previously analyzed meteoritic material and that this type of material would merit a search for ²⁶Al. ²⁶Al is only present today in Solar System materials as the result of cosmic-ray reactions on unshielded materials, at an extremely low level. Thus, any original ²⁶Al in the early Solar System is now extinct.
To establish the presence of ²⁶Al in very ancient materials requires demonstrating that samples contain clear excesses of ²⁶Mg/²⁴Mg which correlate with the ratio of ²⁷Al/²⁴Mg. The stable ²⁷Al is then a surrogate for extinct ²⁶Al. The different ²⁷Al/²⁴Mg ratios are coupled to different chemical phases in a sample and are the result of normal chemical separation processes associated with the growth of the crystals in the CAIs. Clear evidence of the presence of ²⁶Al at an abundance ratio of 5×10⁻⁵ was shown by Lee et al. The value (²⁶Al/²⁷Al ≈ 5×10⁻⁵) has now been generally established as the high value in early Solar System samples and has been generally used as a refined time-scale chronometer for the early Solar System. Lower values imply a more recent time of formation. If this ²⁶Al is the result of pre-solar stellar sources, then this implies a close connection in time between the formation of the Solar System and the production in some exploding star. Many materials which had been presumed to be very early (e.g. chondrules) appear to have formed a few million years later. Other extinct radioactive nuclei, which clearly had a stellar origin, were then being discovered.
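Used as a chronometer, the measured ratio converts into a formation-time difference through ordinary exponential decay (relation added here for clarity; it is not spelled out in the article):

    \Delta t = \frac{t_{1/2}}{\ln 2}\,\ln\!\left[\frac{(^{26}\mathrm{Al}/^{27}\mathrm{Al})_{0}}{(^{26}\mathrm{Al}/^{27}\mathrm{Al})_{t}}\right],

with (²⁶Al/²⁷Al)₀ ≈ 5×10⁻⁵, the canonical early-Solar-System value quoted above.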
That ²⁶Al was present in the interstellar medium as a major gamma-ray source was not explored until the development of the high-energy astronomical observatory program. The HEAO-3 spacecraft, with cooled Ge detectors, allowed the clear detection of 1.808 MeV gamma lines from the central part of the galaxy from a distributed ²⁶Al source. This represents a quasi-steady-state inventory corresponding to about two solar masses of distributed ²⁶Al. This discovery was greatly expanded on by observations from the Compton Gamma Ray Observatory using the COMPTEL telescope in the galaxy. Subsequently, the ⁶⁰Fe lines (1.173 MeV and 1.333 MeV) were also detected, showing the relative rates of decays from ⁶⁰Fe to ²⁶Al to be ⁶⁰Fe/²⁶Al ~ 0.11.
In pursuit of the carriers of ²²Ne in the sludge produced by chemical destruction of some meteorites, micron-sized, acid-resistant ultra-refractory carrier grains (e.g. C, SiC) were found by E. Anders and the Chicago group. The carrier grains were clearly shown to be circumstellar condensates from earlier stars and often contained very large enhancements in ²⁶Mg/²⁴Mg from the decay of ²⁶Al, with ²⁶Al/²⁷Al sometimes approaching 0.2. These studies on micron-scale grains were possible as a result of the development of surface ion mass spectrometry at high mass resolution with a focused beam, developed by G. Slodzian and R. Castaing with the CAMECA Co.
The production of ²⁶Al by cosmic ray interactions in unshielded materials is used as a monitor of the time of exposure to cosmic rays. The amounts are far below the initial inventory that is found in very early Solar System debris.
Metastable states
Before 1954, the half-life of aluminium-26m was measured to be 6.3 seconds. After it was theorized that this could be the half-life of a metastable state (isomer) of aluminium-26, the ground state was produced by bombardment of magnesium-26 and magnesium-25 with deuterons in the cyclotron of the University of Pittsburgh. The ground-state half-life was determined to be on the order of 10⁶ years.
The Fermi beta-decay half-life of the aluminium-26 metastable state is of interest in the experimental testing of two components of the Standard Model, namely the conserved-vector-current hypothesis and the required unitarity of the Cabibbo–Kobayashi–Maskawa matrix. The decay is superallowed. The 2011 measurement determined the half-life of ²⁶ᵐAl to millisecond precision.
See also
Isotopes of aluminium
Surface exposure dating
References
Isotopes of aluminium
Positron emitters
Radionuclides used in radiometric dating | Aluminium-26 | Chemistry | 1,848 |
70,333,497 | https://en.wikipedia.org/wiki/Brisavirus | Brisavirus (isolate LC KY052047) is a species of Redondoviridae in the genus Torbevirus. Brisa- is from the Spanish word for "breeze", referring to the virus's isolation from the human respiratory tract. It was discovered in a throat swab from a male traveler who presented with fever, enlarged adenoids, flushed skin and myalgia after testing negative for other viruses. Brisavirus, like other viruses in the family Redondoviridae, is present and putatively replicates in the oro-respiratory tract, and it is associated with critical illness and periodontitis in patients.
Genome
Brisavirus has a CRESS DNA genome with three inversely oriented open reading frames, which encode the capsid protein and the replication-associated protein, separated by a small and a larger intergenic region. It replicates using rolling-circle replication, like other CRESS viruses. Using metagenomic analysis, researchers found that the encoded proteins were most similar to those of porcine stool-associated circular virus 5 isolate CP33 (PoSCV5).
References
DNA viruses | Brisavirus | Biology | 223 |
26,874,357 | https://en.wikipedia.org/wiki/Nimrod%20Megiddo | Nimrod Megiddo is a mathematician and computer scientist. He is a research scientist at the IBM Almaden Research Center and Stanford University. His interests include combinatorial optimization, algorithm design and analysis, game theory, and machine learning. He was one of the first people to propose solutions to the bounding-sphere and smallest-circle problems.
Education
Megiddo received his PhD in mathematics from the Hebrew University of Jerusalem for research supervised by Michael Maschler.
Career and research
In computational geometry, Megiddo is known for his prune and search and parametric search techniques both suggested in 1983 and used for various computational geometric optimization problems, in particular to solve the smallest-circle problem in linear time. His former doctoral students include Edith Cohen.
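Megiddo's deterministic prune-and-search construction is intricate; as a rough illustration of the problem it solves, here is a sketch of the much simpler randomized incremental algorithm (Welzl's method, not Megiddo's), which achieves expected rather than worst-case linear time. All names are illustrative:

    import random

    def _circle_two(a, b):
        # Circle with segment ab as diameter.
        cx, cy = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0
        r = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 / 2.0
        return (cx, cy, r)

    def _circle_three(a, b, c):
        # Circumcircle via the perpendicular-bisector intersection.
        ax, ay = a; bx, by = b; cx, cy = c
        d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-12:  # (nearly) collinear: widest two-point circle
            return max((_circle_two(p, q) for p, q in ((a, b), (a, c), (b, c))),
                       key=lambda t: t[2])
        ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
              + (cx * cx + cy * cy) * (ay - by)) / d
        uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
              + (cx * cx + cy * cy) * (bx - ax)) / d
        return (ux, uy, ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5)

    def _inside(c, p, eps=1e-9):
        return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= (c[2] + eps) ** 2

    def smallest_circle(points):
        """Smallest enclosing circle (cx, cy, r), expected O(n)."""
        pts = list(points)
        random.shuffle(pts)
        c = None
        for i, p in enumerate(pts):
            if c is None or not _inside(c, p):
                c = (p[0], p[1], 0.0)          # p lies on the boundary
                for j, q in enumerate(pts[:i]):
                    if not _inside(c, q):
                        c = _circle_two(p, q)  # p and q on the boundary
                        for k in pts[:j]:
                            if not _inside(c, k):
                                c = _circle_three(p, q, k)
        return c

    print(smallest_circle([(0, 0), (1, 0), (0, 1)]))  # (0.5, 0.5, ~0.707)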
Awards and honours
Megiddo received the 2014 John von Neumann Theory Prize, the 1992 ICS Prize, and is a 1992 Frederick W. Lanchester Prize recipient. In 2009 he received the Institute for Operations Research and the Management Sciences (INFORMS) Fellows award for contributions to the theory and application of mathematical programming, including parametric searches, interior point methods, low dimension Linear Programming, probabilistic analysis of the simplex method and computational game theory.
References
Year of birth missing (living people)
Living people
Researchers in geometric algorithms
Hebrew University of Jerusalem alumni
American computer scientists
American operations researchers
Israeli operations researchers
John von Neumann Theory Prize winners
Game theorists
Numerical analysts
Fellows of the Institute for Operations Research and the Management Sciences
Jewish scientists
Israeli systems scientists | Nimrod Megiddo | Mathematics | 302 |
52,188,910 | https://en.wikipedia.org/wiki/Mannequin%20Challenge | The Mannequin Challenge is a viral Internet video trend which became popular in November 2016. In this challenge, participants have to stay still in action like a mannequin while a moving camera films them, often with the song "Black Beatles" by Rae Sremmurd playing in the background. The hashtag #MannequinChallenge was used for popular social media platforms such as Twitter and Instagram. It is believed that the phenomenon was started by students at a high school in Jacksonville, Florida. The initial posting has inspired works by other groups, especially professional athletes and sports teams, who have posted increasingly complex and elaborate videos.
News outlets have compared the videos to bullet time scenes from science fiction films such as The Matrix, X-Men: Days of Future Past, X-Men: Apocalypse, Lost in Space or Buffalo '66. Meanwhile, the participatory nature of the challenge on social media makes it similar to memes such as Makankosappo or the Harlem Shake. Others have noted similarities with the HBO TV series Westworld, which debuted around the same time, where robotic hosts can be stopped in their tracks.
Notable instances
Sports figures
A number of notable sports teams, including both collegiate and professional, as well as sports personalities, have engaged in the challenge. Notable instances of sport-figure participation include:
Pittsburgh Steelers, in their locker room on November 4.
Penn State Football, in their locker room on November 5.
Anaheim Ducks, at their 80s center ice party on November 13.
Dallas Cowboys, on their team plane on November 6.
Buffalo Bills, on their team plane on November 6.
New York Giants, in their home locker room on November 6.
Milwaukee Bucks, on their team plane on November 6.
University of Kentucky Wildcats basketball team, in a home exhibition game against Asbury University. Thousands of attendees in the basketball arena participated.
Golden State Warriors player Steph Curry, with his wife Ayesha, posted a version done at a restaurant full of patrons.
The crew of NXT made a two-minute video that was posted on November 8.
Brigham Young University women's gymnastics team, in their gym.
United States Military Academy at West Point men's gymnastics team.
Borussia Dortmund, of the German Bundesliga, in a weight room.
Portugal national football team with Cristiano Ronaldo in their locker room.
Manchester United players Marcus Rashford and Jesse Lingard.
Aston Villa squad
England international footballers Jamie Vardy, Raheem Sterling and Theo Walcott froze in place after scoring a second goal vs Spain in a friendly match on November 15.
Spain national football team inside the Wembley changing room after their friendly match against England.
Belgium national football team.
European Tour golfers at Jumeirah Golf Estates in Dubai
United States women's national soccer team posted a video doing the challenge on November 13.
United States gymnasts Simone Biles, Nastia Liukin and Danell Leyva took part in a video while practicing for the Kellogg's Tour of Champions on November 13.
World champion figure skaters joined the challenge on November 13 at the ISU Grand Prix of Figure Skating Trophée de France 2016/2017.
Indonesian footballer Lerby Eliandry and Thai footballer Teerasil Dangda with their respective teammates after scoring a goal in their Group Stage match of 2016 AFF Championship.
Television broadcasters who have participated include:
SEC Network sportscasters, crew and student crowd, on set November 5.
ESPN College Gameday sportscasters and student crowd, November 5.
Fox NFL Sunday sportscasters and crew, on set November 6.
CBS Evening News crew, on set November 16.
College football teams that posted videos include Old Dominion University, Temple University, University of Pittsburgh, Louisiana State University.
Artists and celebrities
Once many versions of the challenge began surfacing with Rae Sremmurd's "Black Beatles" as the background music, Rae Sremmurd paused a concert to do a Mannequin Challenge video live on stage.
Dancing With the Stars in the U.S. created a video with cast and crew on the dance floor that was posted to dancer Val Chmerkovskiy's Instagram account.
Musical artists who participated included:
The former members of Destiny's Child, Beyoncé, Michelle Williams and Kelly Rowland, created a video on November 7.
Singer Adele adopted a Western theme in a November 7 video.
Country singer Garth Brooks created a video while live on stage in a concert on November 12.
Singer Britney Spears celebrated winning, for the second year in a row, Best Resident Performer in the Best of Las Vegas Awards by posting a video on her Instagram page on November 13 of her and her dancers doing the challenge to the sound of her single "Slumber Party", taken from her album Glory.
Electronic dance music producer and DJ Marshmello recorded a video on stage in concert, with the crowd participating, at The Shrine Auditorium in Los Angeles.
Former Beatles member Paul McCartney did this challenge by standing by a piano frozen while the song is playing. He showed his support for the song and the artists in a Twitter message.
Simon Cowell, Nicole Scherzinger, Louis Walsh, Sharon Osbourne and Dermot O'Leary, along with The X Factor UK live audience and dancers participated live on 26 November 2016, during Honey G's performance of "Black Beatles".
Reaks Records, UK released the "Mannequin Challenge" EP produced by DJ AKS.
Taylor Swift participated in the challenge on Thanksgiving at a beach.
Ellen DeGeneres and Warren Beatty created a video of a table tennis game and posted it to Instagram.
The Late Late Show with James Corden created an elaborate video of more than 2 minutes and 30 seconds, that involved the entire crew, backstage area and studio audience.
Blac Chyna and Rob Kardashian created a video in the hospital delivery room.
The cast of Saturday Night Live created a video along with Kristen Wiig as a promotion for the season's November 19, 2016 episode.
First Lady Michelle Obama froze for the Mannequin Challenge together with LeBron James and the rest of the Cleveland Cavaliers during their visit to the White House to be honored for their NBA Championship victory.
On November 18, 2016, the erotic dance troupe Chippendales uploaded their challenge to their social media accounts.
On November 22, actress Tracee Ellis Ross coordinated a video before the awarding of the Presidential Medal of Freedom at the White House, which included celebrities such as Ellen DeGeneres, Robert De Niro, Tom Hanks, Bill Gates, Diana Ross, Rita Wilson, Michael Jordan, Kareem Abdul-Jabbar and Frank Gehry.
On December 6, celebrity attendees at the 2016 British Fashion Awards participated in the challenge as Gigi Hadid won the award for International Model of the Year. Anna Wintour, Donatella Versace, Christopher Bailey, Franca Sozzani, Kate Moss, Yolanda Foster and Naomi Campbell were also part of it.
On December 14, the Boston Pops Orchestra and Conductor Keith Lockhart participated in the challenge during a rehearsal for the orchestra's 43rd Annual Holiday Pops Season and posted a video on YouTube.
On December 21, the crew of Food Network's Beat Bobby Flay participated in the challenge and posted a video on Facebook
Politicians
On November 7, the night before election day, Democratic candidate Hillary Clinton participated in a video with Jon Bon Jovi, Bill Clinton, Huma Abedin and various staffers on her campaign plane.
First Lady Michelle Obama posed for a video with the NBA champions Cleveland Cavaliers at the White House.
Former Managing Director of the World Bank Group and current Indonesian Minister of Finance, Sri Mulyani Indrawati, did the challenge after delivering a lecture at Padjadjaran University, along with the rector and thousands of students who attended the lecture.
On December 13, Westmount, Quebec's city council became the first municipality to release a mannequin challenge filmed in the council chambers. The video features eight council members and the mayor frozen during a seemingly chaotic council meeting.
Activism
On November 10, film makers Simone Shepherd, Kevalena Everett and Todd Anthony made a series of videos that re-created scenes related to the Black Lives Matter movement, as a promotional teaser for their feature film Black in Blue.
In November 2016, the Revolutionaries of Syria Media Office, a Syrian media organisation, published a video showing two White Helmet volunteers performing a staged rescue operation for the meme. The organisation apologised for their volunteers' error of judgement and said it had not shared the recording on their official channels.
Legacy
Videos of the challenge uploaded to YouTube have been used to advance machine learning research in depth prediction.
Other examples
NASCAR
Alex Bowman racing team
Hendrick Motorsports
Team Lowe's Racing
Alabama Department of Corrections
German Army
Metropolitan Atlanta Rapid Transit Authority
International Conference of Chabad-Lubavitch Emissaries
Big data startup, ZoomData
Astronauts on the International Space Station
See also
Living statue
Tableau vivant
Pageant of the Masters
Bullet time
Planking (fad)
Statues (game)
Harlem Shake (meme)
References
2010s fads and trends
2016 introductions
Internet challenges
Performance art
Viral videos
Slow motion
Internet memes introduced in 2016 | Mannequin Challenge | Physics | 1,866 |
50,718,121 | https://en.wikipedia.org/wiki/Robert%20N.%20Clayton | Robert Norman Clayton (March 20, 1930 – December 30, 2017) was a Canadian-American chemist and academic. He was the Enrico Fermi Distinguished Service Professor Emeritus of Chemistry at the University of Chicago. Clayton studied cosmochemistry and held a joint appointment in the university's geophysical sciences department. He was a member of the National Academy of Sciences and was named a fellow of several academic societies, including the Royal Society.
Biography
Born in Hamilton, Ontario, Clayton grew up in a working-class family that supported (but could not pay for) his pursuit of higher education. None of Clayton's close family members had ever attended college. His high school teachers encouraged him to apply to Queen's University, and he received enough scholarship funding to attend the school. Clayton said that around half of his classmates were a decade older and had served in World War II. He said that this created a serious academic environment.
After graduating from Queen's University with undergraduate and master's degrees, Clayton completed a Ph.D. in 1955 at the California Institute of Technology, where he was mentored by geochemist Samuel Epstein. His first academic appointment was at Penn State University. In 1958, he joined the chemistry faculty at the University of Chicago, where he took over the laboratory of Nobel Prize winner Harold Urey. From 1961 to his retirement in 2001, he held joint appointments in the chemistry and geophysical sciences departments. He directed the Enrico Fermi Institute at the university from 1998 to 2001.
Research
Clayton worked in the field of cosmochemistry and is best known for the use of the stable isotopes of oxygen to classify meteorites. He was aided in his research by Toshiko Mayeda, who was a specialist technician familiar with the mass spectrometry equipment required. Their first joint research paper described the use of bromine pentafluoride to extract oxygen from rocks and minerals. They developed several tests that were used across the field of meteorite and lunar sample analysis.
Clayton and Mayeda studied variations in the ratio of oxygen-17 and oxygen-18 to the most abundant isotope oxygen-16, building on their surprising finding that this ratio for oxygen-17 in particular was different from that found in terrestrial rock samples. They deduced that this difference was caused by the formation temperature of the meteorite and could thus be used as an "oxygen thermometer". They also worked on the mass spectroscopy and chemistry of the Allende meteorite and studied the Bocaiuva meteorite, finding that the Eagle Station meteorite was formed due to impact heating.
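The article does not spell out the notation, but oxygen-isotope work of this kind conventionally reports ratios in per-mil delta notation relative to a standard (for oxygen, mean ocean water); a sketch of the standard definition:

    \delta^{18}\mathrm{O} = \left[\frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}} - 1\right] \times 1000.

Mass-dependent fractionation on Earth gives δ¹⁷O ≈ 0.52 δ¹⁸O, so a deviation Δ¹⁷O = δ¹⁷O − 0.52 δ¹⁸O flags the kind of oxygen-17 anomaly Clayton and Mayeda observed in meteoritic material.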
They also analysed approximately 300 lunar samples that had been collected during NASA's Apollo program. In 1992, a new type of meteorite, the brachinite, was identified. Clayton and Mayeda studied achondrite meteorites and showed that variations in the oxygen isotope ratios within a planet are due to inhomogeneities in the solar nebula. They analysed Shergotty meteorites, proposing that there could have been a water-rich atmosphere on Mars in the past.
Honours and awards
In 1981, he received the V. M. Goldschmidt Award from the Geochemical Society. The next year, the Meteoritical Society awarded him its Leonard Medal. Clayton won the Elliott Cresson Medal from the Franklin Institute in 1985. He was the 1987 recipient of the William Bowie Medal from the American Geophysical Union. Clayton became a member of the National Academy of Sciences in 1996 and won the academy's J. Lawrence Smith Medal in 2009. Clayton has been named a fellow of the Royal Society of London (1981) and the Royal Society of Canada. He won the National Medal of Science in 2004. In 2008, the book Oxygen in the Solar System was dedicated to Clayton.
On December 30, 2017, Clayton died in his sleep at his home in Indiana from complications of Parkinson's disease.
References
Further reading
1930 births
2017 deaths
Canadian chemists
Canadian fellows of the Royal Society
Fellows of the Royal Society of Canada
Members of the United States National Academy of Sciences
University of Chicago faculty
Pennsylvania State University faculty
Queen's University at Kingston alumni
California Institute of Technology alumni
National Medal of Science laureates
Scientists from Hamilton, Ontario
Recipients of the V. M. Goldschmidt Award | Robert N. Clayton | Chemistry | 865 |
8,054,686 | https://en.wikipedia.org/wiki/Printed%20electronics | Printed electronics is a set of printing methods used to create electrical devices on various substrates. Printing typically uses common printing equipment suitable for defining patterns on material, such as screen printing, flexography, gravure, offset lithography, and inkjet. By electronic-industry standards, these are low-cost processes. Electrically functional electronic or optical inks are deposited on the substrate, creating active or passive devices, such as thin film transistors, capacitors, coils, and resistors. Some researchers expect printed electronics to facilitate widespread, very low-cost, low-performance electronics for applications such as flexible displays, smart labels, decorative and animated posters, and active clothing that do not require high performance.
The term printed electronics is often related to organic electronics or plastic electronics, in which one or more inks are composed of carbon-based compounds. These other terms refer to the ink material, which can be deposited by solution-based, vacuum-based, or other processes. Printed electronics, in contrast, specifies the process, and, subject to the specific requirements of the printing process selected, can utilize any solution-based material. This includes organic semiconductors, inorganic semiconductors, metallic conductors, nanoparticles, and nanotubes. The solution usually consists of filler materials dispersed in a suitable solvent. The most commonly used solvents include ethanol, xylene, dimethylformamide (DMF), dimethyl sulfoxide (DMSO), toluene and water, whereas the most common conductive fillers include silver nanoparticles, silver flakes, carbon black, graphene, carbon nanotubes, conductive polymers (such as polyaniline and polypyrrole), and metal powders (such as copper or nickel). Considering the environmental impact of organic solvents, researchers are now focused on developing printable inks that use water.
For the preparation of printed electronics nearly all industrial printing methods are employed. Similar to conventional printing, printed electronics applies ink layers one atop another, so the coherent development of printing methods and ink materials is the field's essential task.
The most important benefit of printing is low-cost volume fabrication. The lower cost enables use in more applications. An example is RFID systems, which enable contactless identification in trade and transport. In some domains, such as light-emitting diodes, printing does not impact performance. Printing on flexible substrates allows electronics to be placed on curved surfaces, for example printing solar cells on vehicle roofs. More typically, conventional semiconductors justify their much higher costs by providing much higher performance.
Resolution, registration, thickness, holes, materials
The maximum required resolution of structures in conventional printing is determined by the human eye. Feature sizes smaller than approximately 20 μm cannot be distinguished by the human eye and consequently exceed the capabilities of conventional printing processes. In contrast, higher resolution and smaller structures are necessary in most electronics printing, because they directly affect circuit density and functionality (especially transistors). A similar requirement holds for the precision with which layers are printed on top of each other (layer to layer registration).
Control of thickness, holes, and material compatibility (wetting, adhesion, solubility) are essential, but matter in conventional printing only if the eye can detect them. Conversely, the visual impression is irrelevant for printed electronics.
Printing technologies
The attraction of printing technology for the fabrication of electronics mainly results from the possibility of preparing stacks of micro-structured layers (and thereby thin-film devices) in a much simpler and cost-effective way compared to conventional electronics. Also, the ability to implement new or improved functionalities (e.g. mechanical flexibility) plays a role. The selection of the printing method used is determined by requirements concerning printed layers, by the properties of printed materials as well as economic and technical considerations of the final printed products.
Printing technologies divide between sheet-based and roll-to-roll-based approaches. Sheet-based inkjet and screen printing are best for low-volume, high-precision work. Gravure, offset and flexographic printing are more common for high-volume production, such as solar cells, reaching 10,000 square meters per hour (m²/h). While offset and flexographic printing are mainly used for inorganic and organic conductors (the latter also for dielectrics), gravure printing is especially suitable for quality-sensitive layers like organic semiconductors and semiconductor/dielectric-interfaces in transistors, due to high layer quality. If high resolution is needed, gravure is also suitable for inorganic and organic conductors. Organic field-effect transistors and integrated circuits can be prepared completely by means of mass-printing methods.
Inkjet printing
Inkjets are flexible and versatile, and can be set up with relatively low effort. However, inkjets offer lower throughput of around 100 m²/h and lower resolution (ca. 50 μm). It is well suited for low-viscosity, soluble materials like organic semiconductors. With high-viscosity materials, like organic dielectrics, and dispersed particles, like inorganic metal inks, difficulties due to nozzle clogging occur. Because ink is deposited via droplets, thickness and dispersion homogeneity is reduced. Using many nozzles simultaneously and pre-structuring the substrate allows improvements in productivity and resolution, respectively. However, in the latter case non-printing methods must be employed for the actual patterning step. Inkjet printing is preferable for organic semiconductors in organic field-effect transistors (OFETs) and organic light-emitting diodes (OLEDs), but also OFETs completely prepared by this method have been demonstrated. Frontplanes and backplanes of OLED-displays, integrated circuits, organic photovoltaic cells (OPVCs) and other devices can be prepared with inkjets.
Screen printing
Screen printing is appropriate for fabricating electrics and electronics due to its ability to produce patterned, thick layers from paste-like materials. This method can produce conducting lines from inorganic materials (e.g. for circuit boards and antennas), but also insulating and passivating layers, whereby layer thickness is more important than high resolution. Its 50 m²/h throughput and 100 μm resolution are similar to inkjets. This versatile and comparatively simple method is used mainly for conductive and dielectric layers, but also organic semiconductors, e.g. for OPVCs, and even complete OFETs can be printed.
Aerosol jet printing
Aerosol Jet Printing (also known as Maskless Mesoscale Materials Deposition or M3D) is another material deposition technology for printed electronics. The Aerosol Jet process begins with atomization of an ink, via ultrasonic or pneumatic means, producing droplets on the order of one to two micrometers in diameter. The droplets then flow through a virtual impactor which deflects the droplets having lower momentum away from the stream. This step helps maintain a tight droplet size distribution. The droplets are entrained in a gas stream and delivered to the print head. Here, an annular flow of clean gas is introduced around the aerosol stream to focus the droplets into a tightly collimated beam of material. The combined gas streams exit the print head through a converging nozzle that compresses the aerosol stream to a diameter as small as 10 μm. The jet of droplets exits the print head at high velocity (~50 meters/second) and impinges upon the substrate.
Electrical interconnects, passive and active components are formed by moving the print head, equipped with a mechanical stop/start shutter, relative to the substrate. The resulting patterns can have features as small as 10 μm wide, with layer thicknesses from tens of nanometers to >10 μm. A wide nozzle print head enables efficient patterning of millimeter-size electronic features and surface coating applications. All printing occurs without the use of vacuum or pressure chambers. The high exit velocity of the jet enables a relatively large separation between the print head and the substrate, typically 2–5 mm. The droplets remain tightly focused over this distance, resulting in the ability to print conformal patterns over three-dimensional substrates.
Despite the high velocity, the printing process is gentle; substrate damage does not occur and there is generally minimal splatter or overspray from the droplets. Once patterning is complete, the printed ink typically requires post treatment to attain final electrical and mechanical properties. Post-treatment is driven more by the specific ink and substrate combination than by the printing process. A wide range of materials has been successfully deposited with the Aerosol Jet process, including diluted thick-film pastes, conducting polymer inks, thermosetting polymers such as UV-curable epoxies, solvent-based polymers like polyurethane and polyimide, and biological materials.
Recently, printing paper has been proposed as a substrate for printed electronics. Highly conductive (close to bulk copper), high-resolution traces can be printed on foldable, widely available office printing paper, with an 80 °C curing temperature and 40 minutes of curing time.
Evaporation printing
Evaporation printing uses a combination of high-precision screen printing with material vaporization to print features down to 5 μm. This method uses techniques such as thermal, e-beam, sputter and other traditional production technologies to deposit materials through a high-precision shadow mask (or stencil) that is registered to the substrate to better than 1 μm. By layering different mask designs and/or adjusting materials, reliable, cost-effective circuits can be built additively, without the use of photo-lithography.
Other methods
Other methods with similarities to printing, among them microcontact printing and nano-imprint lithography, are of interest. Here, μm- and nm-sized layers, respectively, are prepared by methods similar to stamping with soft and hard forms, respectively. Often the actual structures are prepared subtractively, e.g. by deposition of etch masks or by lift-off processes. For example, electrodes for OFETs can be prepared this way. Pad printing is sporadically used in a similar manner. Occasionally, so-called transfer methods, where solid layers are transferred from a carrier to the substrate, are considered printed electronics. Electrophotography is currently not used in printed electronics.
Materials
Both organic and inorganic materials are used for printed electronics. Ink materials must be available in liquid form, for solution, dispersion or suspension. They must function as conductors, semiconductors, dielectrics, or insulators. Material costs must be fit for the application.
Electronic functionality and printability can interfere with each other, mandating careful optimization. For example, a higher molecular weight in polymers enhances conductivity, but diminishes solubility. For printing, viscosity, surface tension and solid content must be tightly controlled. Cross-layer interactions such as wetting, adhesion, and solubility as well as post-deposition drying procedures affect the outcome. Additives often used in conventional printing inks are unavailable, because they often defeat electronic functionality.
Material properties largely determine the differences between printed and conventional electronics. Printable materials provide decisive advantages beside printability, such as mechanical flexibility and functional adjustment by chemical modification (e.g. light color in OLEDs).
Printed conductors offer lower conductivity and charge-carrier mobility than their conventionally processed counterparts.
With a few exceptions, inorganic ink materials are dispersions of metallic or semiconducting micro- and nano-particles. Semiconducting nanoparticles used include silicon and oxide semiconductors. Silicon is also printed as an organic precursor, which is then converted by pyrolysis and annealing into crystalline silicon.
PMOS but not CMOS is possible in printed electronics.
Organic materials
Organic printed electronics integrates knowledge and developments from printing, electronics, chemistry, and materials science, especially from organic and polymer chemistry. Organic materials in part differ from conventional electronics in terms of structure, operation and functionality, which influences device and circuit design and optimization as well as fabrication method.
The discovery of conjugated polymers and their development into soluble materials provided the first organic ink materials. Materials from this class of polymers variously possess conducting, semiconducting, electroluminescent, photovoltaic and other properties. Other polymers are used mostly as insulators and dielectrics.
In most organic materials, hole transport is favored over electron transport. Recent studies indicate that this is a specific feature of organic semiconductor/dielectric interfaces, which play a major role in OFETs. Therefore, p-type devices should dominate over n-type devices. Durability (resistance to dispersion) and lifetime are lower than for conventional materials.
Organic semiconductors include the conductive polymers poly(3,4-ethylenedioxythiophene) doped with poly(styrene sulfonate) (PEDOT:PSS) and poly(aniline) (PANI). Both polymers are commercially available in different formulations and have been printed using inkjet, screen and offset printing or screen, flexo and gravure printing, respectively.
Polymer semiconductors are processed using inkjet printing, such as poly(thiophene)s like poly(3-hexylthiophene) (P3HT) and poly(9,9-dioctylfluorene-co-bithiophene) (F8T2). The latter material has also been gravure printed. Different electroluminescent polymers are used with inkjet printing, as well as active materials for photovoltaics (e.g. blends of P3HT with fullerene derivatives), which in part can also be deposited using screen printing (e.g. blends of poly(phenylene vinylene) with fullerene derivatives).
Printable organic and inorganic insulators and dielectrics exist, which can be processed with different printing methods.
Inorganic materials
Inorganic electronics provides highly ordered layers and interfaces that organic and polymer materials cannot provide.
Silver nanoparticles are used with flexo, offset and inkjet. Gold particles are used with inkjet.
A.C. electroluminescent (EL) multi-color displays can cover many tens of square meters, or be incorporated in watch faces and instrument displays. They involve six to eight printed inorganic layers, including a copper doped phosphor, on a plastic film substrate.
CIGS cells can be printed directly onto molybdenum coated glass sheets.
A printed gallium arsenide germanium solar cell demonstrated 40.7% conversion efficiency, eight times that of the best organic cells, approaching the best performance of crystalline silicon.
Substrates
Printed electronics allows the use of flexible substrates, which lowers production costs and allows fabrication of mechanically flexible circuits. While inkjet and screen printing typically print on rigid substrates like glass and silicon, mass-printing methods nearly exclusively use flexible foil and paper. Poly(ethylene terephthalate) foil (PET) is a common choice, due to its low cost and moderately high temperature stability. Poly(ethylene naphthalate) foil (PEN) and poly(imide) foil (PI) are higher-performance, higher-cost alternatives. Paper's low cost and manifold applications make it an attractive substrate; however, its high roughness and high wettability have traditionally made it problematic for electronics. This is an active research area, however, and print-compatible metal deposition techniques have been demonstrated that adapt to the rough 3D surface geometry of paper.
Other important substrate criteria are low roughness and suitable wettability, which can be tuned by pre-treatment such as coating or corona discharge. In contrast to conventional printing, high absorbency is usually disadvantageous.
History
Albert Hanson, a German by birth, is credited with introducing the concept of printed electronics. In 1903 he filed a patent for "Printed Wires", and thus printed electronics were born. Hanson proposed forming a printed circuit board pattern on copper foil by cutting or stamping. The drawn elements were glued to a dielectric, in this case paraffined paper. The first printed circuit was produced in 1936 by Paul Eisler, and that process was used for large-scale production of radios by the USA during World War II. Printed circuit technology was released for commercial use in the US in 1948 (Printed Circuits Handbook, 1995). In the more than half a century since its inception, printed electronics has evolved from the production of printed circuit boards (PCBs), through the everyday use of membrane switches, to today's RFID, photovoltaic and electroluminescent technologies. Today it is nearly impossible to look around a modern American household and not see devices that either use printed electronic components or are the direct result of printed electronic technologies. Widespread production of printed electronics for household use began in the 1960s, when the printed circuit board became the foundation of all consumer electronics. Since then, printed electronics have become a cornerstone of many new commercial products.
The most significant recent trend in printed electronics is their widespread use in solar cells. In 2011, researchers at MIT created a flexible solar cell by inkjet printing on normal paper. In 2018, researchers at Rice University developed organic solar cells that can be painted or printed onto surfaces; these cells have been shown to reach fifteen percent efficiency. Konarka Technologies, a now-defunct US company, was a pioneer in producing inkjet-printed solar cells. Today more than fifty companies across a diverse number of countries produce printed solar cells.
While printed electronics have been around since the 1960s, they are predicted to see a major boom in total revenue. As of 2011, total printed electronics revenue was reported to be $12.385 billion, and a report by IDTechEx predicts the market will reach $330 billion in 2027. A major driver of this projected increase is the incorporation of printed electronics into cellphones. Nokia was one of the companies that pioneered the idea of a "Morph" phone using printed electronics. Since then, Apple has implemented this technology in its iPhone XS, XS Max, and XR devices. Printed electronics can be used to make all of the following components of a cellphone: 3D main antenna, GPS antenna, energy storage, 3D interconnections, multi-layer PCB, edge circuits, ITO jumpers, hermetic seals, LED packaging, and tactile feedback.
Given the discoveries and advantages that printed electronics offer, many large companies have made recent investments in the technology. In 2007, Soligie Inc. and Thinfilm Electronics entered into an agreement to combine IP for soluble memory materials and functional-materials printing to develop printed memory in commercial volumes. LG announced a significant investment, potentially $8.71 billion, in OLEDs on plastic. Sharp (Foxconn) announced a $570 million investment in a pilot line for OLED displays. BOE announced a potential $6.8 billion investment in a flexible AMOLED fab. Heliatek secured €80 million in additional funding for OPV manufacturing in Dresden. PragmatIC raised roughly €20 million from investors including Avery Dennison. Thinfilm invested in a new production site in Silicon Valley (formerly owned by Qualcomm), and Cambrios returned to business after its acquisition by TPK.
Applications
Applications of printed electronics in use or under consideration include wireless sensors in packaging, skin patches that communicate with the internet, and buildings that detect leaks to enable preventive maintenance. Most of these applications are still in the prototyping and development stages. There is particularly strong and growing interest in flexible smart electronic systems, including photovoltaic, sensing and processing devices, driven by the desire to extend and integrate the latest advances in (opto-)electronic technologies into a broad range of low-cost (even disposable) consumer products of everyday life, and as tools to bring together the digital and physical worlds.
Norwegian company ThinFilm demonstrated roll-to-roll printed organic memory in 2009.
Another company, Spain-based Rotimpres, has successfully introduced applications in different markets, for instance heaters for smart furniture and mist prevention, and capacitive switches for keyboards on white goods and industrial machines.
Standards development and activities
Technical standards and road-mapping initiatives are intended to facilitate value chain development (for sharing of product specifications, characterization standards, etc.). This strategy of standards development mirrors the approach used by silicon-based electronics over the past 50 years. Initiatives include:
The IEEE Standards Association has published IEEE 1620-2004 and IEEE 1620.1-2006.
Similar to the well-established International Technology Roadmap for Semiconductors (ITRS), the International Electronics Manufacturing Initiative (iNEMI) has published a roadmap for printed and other organic electronics.
IPC—Association Connecting Electronics Industries has published three standards for printed electronics. All three have been published in cooperation with the Japan Electronic Packaging and Circuits Association (JPCA):
IPC/JPCA-4921, Requirements for Printed Electronics Base Materials
IPC/JPCA-4591, Requirements for Printed Electronics Functional Conductive Materials
IPC/JPCA-2291, Design Guideline for Printed Electronics
These standards, and others in development, are part of IPC's Printed Electronics Initiative.
See also
Amorphous silicon
Anilox rolls
Chip tag
Coating and printing processes
Conductive ink
Electronic paper
Flexible battery
Flexible electronics
Laminar electronics
Nanoparticle silicon
Oligomer
Organic electronics
References
Further reading
Printed Organic and Molecular Electronics, edited by D. Gamota, P. Brazis, K. Kalyanasundaram, and J. Zhang (Kluwer Academic Publishers: New York, 2004).
External links
Cleaner Electronics Research Group - Brunel University
Printed Electronics conference/exhibition Asia USA
New Nano Silver Powder Enables Flexible Printed Circuits (Ferro Corporation)
Western Michigan University's Center for Advancement of Printed Electronics (CAPE) includes AccuPress gravure printer
Major Trends in Gravure Printed Electronics June 2010
Printed Electronics – avistando el futuro. Printed Electronics en Español
Organic Solar Cells - Theory and Practice (Coursera)
Electronics manufacturing
Flexible electronics | Printed electronics | Engineering | 4,600 |
2,739,214 | https://en.wikipedia.org/wiki/Balance%20spring | A balance spring, or hairspring, is a spring attached to the balance wheel in mechanical timepieces. It causes the balance wheel to oscillate with a resonant frequency when the timepiece is running, which controls the speed at which the wheels of the timepiece turn, thus the rate of movement of the hands. A regulator lever is often fitted, which can be used to alter the free length of the spring and thereby adjust the rate of the timepiece.
The balance spring is a fine spiral or helical torsion spring used in mechanical watches, alarm clocks, kitchen timers, marine chronometers, and other timekeeping mechanisms to control the rate of oscillation of the balance wheel. The balance spring is an essential adjunct to the balance wheel, causing it to oscillate back and forth. The balance spring and balance wheel together form a harmonic oscillator, which oscillates with a precise period or "beat" resisting external disturbances and is responsible for timekeeping accuracy.
The addition of the balance spring to the balance wheel around 1657 by Robert Hooke and Christiaan Huygens greatly increased the accuracy of portable timepieces, transforming early pocketwatches from expensive novelties to useful timekeepers. Improvements to the balance spring are responsible for further large increases in accuracy since that time. Modern balance springs are made of special low temperature coefficient alloys like nivarox to reduce the effects of temperature changes on the rate, and carefully shaped to minimize the effect of changes in drive force as the mainspring runs down. Before the 1980s, balance wheels and balance springs were used in virtually every portable timekeeping device, but in recent decades electronic quartz timekeeping technology has replaced mechanical clockwork, and the major remaining use of balance springs is in mechanical watches.
History
There is some dispute as to whether it was invented around 1660 by British physicist Robert Hooke or Dutch scientist Christiaan Huygens, with the likelihood being that Hooke first had the idea, but Huygens built the first functioning watch that used a balance spring. Before that time, balance wheels or foliots without springs were used in clocks and watches, but they were very sensitive to fluctuations in the driving force, causing the timepiece to slow down as the mainspring unwound. The introduction of the balance spring effected an enormous increase in the accuracy of pocketwatches, from perhaps several hours per day to 10 minutes per day, making them useful timekeepers for the first time. The first balance springs had only a few turns.
A few early watches had a Barrow regulator, which used a worm drive, but the first widely used regulator was invented by Thomas Tompion around 1680. In the Tompion regulator the curb pins were mounted on a semicircular toothed rack, which was adjusted by fitting a key to a cog and turning it. The modern regulator, a lever pivoted concentrically with the balance wheel, was patented by Joseph Bosley in 1755, but it didn't replace the Tompion regulator until the early 19th century.
Regulator
In order to adjust the rate, the balance spring usually has a regulator. The regulator is a moveable lever mounted on the balance cock or bridge, pivoted coaxially with the balance. A narrow slot is formed on one end of the regulator by two downward projecting pins, called curb pins, or by a curb pin and a pin with a heavier section called a boot. The end of the outer turn of the balance spring is fixed in a stud which is secured to the balance cock. The outer turn of the spring then passes through the regulator slot. The portion of the spring between the stud and the slot is held stationary, so the position of the slot controls the free length of the spring. Moving the regulator slides the slot along the outer turn of the spring, changing its effective length. Moving the slot away from the stud shortens the spring, making it stiffer, increasing the balance's oscillation rate, and making the timepiece gain time.
The regulator interferes slightly with the motion of the spring, causing inaccuracy, so precision timepieces like marine chronometers and some high-end watches are free sprung, meaning they don't have a regulator. Instead, their rate is adjusted by timing screws on the balance wheel.
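The leverage a regulator exerts can be estimated with a simple model. The sketch below is a minimal illustration, assuming the common thin-spring approximation that stiffness varies inversely with free length (so the rate varies as the inverse square root of length); the figures are illustrative, not taken from any real movement:

SECONDS_PER_DAY = 86_400

def daily_rate_change(fractional_shortening: float) -> float:
    """Seconds gained per day when the spring's free length L shrinks by the
    given fraction, under the assumed model: stiffness ~ 1/L, rate ~ sqrt(stiffness)."""
    ratio = (1.0 / (1.0 - fractional_shortening)) ** 0.5  # new rate / old rate
    return (ratio - 1.0) * SECONDS_PER_DAY

# Shortening the free length by just 0.1% gains about 43 seconds per day:
print(f"{daily_rate_change(0.001):+.1f} s/day")

Under this model even a 0.1% change in effective length shifts the rate by tens of seconds per day, which is why regulator levers are arranged to move the curb pins through only very small arcs.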
There are two principal types of balance spring regulator:
The Tompion regulator, in which the curb pins are mounted on a sector-rack, moved by a pinion. The pinion is usually fitted with a graduated silver or steel disc.
The Bosley regulator, as described above, in which the pins are mounted on a lever pivoted coaxially with the balance, the extremity of the lever being able to be moved over a graduated scale. There are several variants which improve the accuracy with which the lever can be moved, including the snail regulator, in which the lever is sprung against a cam of spiral profile which can be turned; the micrometer regulator, in which the lever is moved by a worm gear; and the swan's neck or reed regulator, in which the position of the lever is adjusted by a fine screw, the lever being held in contact with the screw by a spring in the shape of a curved swan's neck. This last was invented and patented by the American George P. Reed, US patent No. 61,867, dated February 5, 1867.
There is also a hog's hair or pig's bristle regulator, in which stiff fibres are positioned at the extremities of the balance's arc and bring it to a gentle halt before throwing it back. The watch is accelerated by shortening the arc. This is not a balance spring regulator, being used in the earliest watches before the balance spring was invented.
There is also a Barrow regulator, but this is really the earlier of the two principal methods of giving the mainspring "set-up tension": the tension required to keep the fusée chain taut but not enough to actually drive the watch. Verge watches can be regulated by adjusting the set-up tension, but if any of the previously described regulators is present then this is not usually done.
Material
A number of materials have been used for balance springs. Early on, steel was used, but without any hardening or tempering process applied; as a result, these springs would gradually weaken and the watch would start losing time. Some watchmakers, for example John Arnold, used gold, which avoids the problem of corrosion but retains the problem of gradual weakening. Hardened and tempered steel was first used by John Harrison and subsequently remained the material of choice until the 20th century.
In 1833, E. J. Dent (maker of the Great Clock of the Houses of Parliament) experimented with a glass balance spring. This was much less affected by heat than steel, reducing the compensation required, and also didn't rust. Other trials with glass springs revealed that they were difficult and expensive to make, and they suffered from a widespread perception of fragility, which persisted until the time of fibreglass and fibre-optic materials.
Hairsprings made of etched silicon were introduced in the late 20th century and are not susceptible to magnetisation.
Effect of temperature
The modulus of elasticity of materials is dependent on temperature. For most materials, this temperature coefficient is large enough that variations in temperature significantly affect the timekeeping of a balance wheel and balance spring. The earliest makers of watches with balance springs, such as Hooke and Huygens, observed this effect without finding a solution to it.
Harrison, in the course of his development of the marine chronometer, solved the problem by a "compensation curb" – essentially a bimetallic thermometer which adjusted the effective length of the balance spring as a function of temperature. While this scheme worked well enough to allow Harrison to meet the standards set by the Longitude Act, it was not widely adopted.
Around 1765, Pierre Le Roy (son of Julien Le Roy) invented the compensation balance, which became the standard approach for temperature compensation in watches and chronometers. In this approach, the shape of the balance is altered, or adjusting weights are moved on the spokes or rim of the balance, by a temperature-sensitive mechanism. This changes the moment of inertia of the balance wheel, and the change is adjusted such that it compensates for the change in modulus of elasticity of the balance spring. The compensating balance design of Thomas Earnshaw, which consists simply of a balance wheel with bimetallic rim, became the standard solution for temperature compensation.
Elinvar
While the compensating balance was effective as a way to compensate for the effect of temperature on the balance spring, it could not provide a complete solution. The basic design suffers from "middle temperature error": if the compensation is adjusted to be exact at extremes of temperature, then it will be slightly off at temperatures between those extremes. Various "auxiliary compensation" mechanisms were designed to avoid this, but they all suffer from being complex and hard to adjust.
Around 1900, a fundamentally different solution was created by Charles Édouard Guillaume, inventor of elinvar. This is a nickel-steel alloy with the property that the modulus of elasticity is essentially unaffected by temperature. A watch fitted with an elinvar balance spring requires either no temperature compensation at all, or very little. This simplifies the mechanism, and it also means that middle temperature error is eliminated as well, or at a minimum is drastically reduced.
Isochronism
A balance spring obeys Hooke's Law: the restoring torque is proportional to the angular displacement. When this property is exactly satisfied, the balance spring is said to be isochronous, and the period of oscillation is independent of the amplitude of oscillation. This is an essential property for accurate timekeeping, because no mechanical drive train can provide absolutely constant driving force. This is particularly true in watches and portable clocks which are powered by a mainspring, which provides a diminishing drive force as it unwinds. Another cause of varying driving force is friction, which varies as the lubricating oil ages.
Early watchmakers empirically found approaches to make their balance springs isochronous. For example, Arnold in 1776 patented a helical (cylindrical) form of the balance spring, in which the ends of the spring were coiled inwards. In 1861 M. Phillips published a theoretical treatment of the problem. He demonstrated that a balance spring whose center of gravity coincides with the axis of the balance wheel is isochronous.
In general practice, the most common method of achieving isochronism is through the use of the Breguet overcoil, which places part of the outermost turn of the hairspring in a different plane from the rest of the spring. This allows the hairspring coil to expand and contract more evenly and symmetrically as the balance wheel rotates. Two types of overcoils are found – the gradual overcoil and the Z-Bend. The gradual overcoil is obtained by imposing two gradual twists to the hairspring, forming the rise to the second plane over half the circumference. The Z-bend does this by imposing two kinks of complementary 45 degree angles, accomplishing a rise to the second plane in about three spring section heights. The second method is done for aesthetic reasons and is much more difficult to perform. Due to the difficulty with forming an overcoil, modern watches often use a slightly less effective "dogleg", which uses a series of sharp bends (in plane) to place part of the outermost coil out of the way of the rest of the spring.
Period of oscillation
The balance spring and the balance wheel (which is usually referred to as simply the balance) form a harmonic oscillator. The balance spring provides a restoring torque that limits and reverses the motion of the balance so it oscillates back and forth. Its resonant period makes it resistant to changes from perturbing forces, which is what makes it a good timekeeping device. The stiffness of the spring, its spring coefficient $\kappa$ in N·m/rad, along with the balance wheel's moment of inertia $I$ in kg·m², determines the wheel's oscillation period $T$. The equations of motion for the balance are derived from the angular form of Hooke's law and the angular form of Newton's second law:
$$\tau = -\kappa\theta, \qquad \tau = I\alpha,$$
where $\alpha$ is the angular acceleration, $\alpha = \frac{d^2\theta}{dt^2}$. The following differential equation for the motion of the wheel results from rearranging the above equations:
$$\frac{d^2\theta}{dt^2} + \frac{\kappa}{I}\,\theta = 0.$$
The solution to this equation of motion for the balance is simple harmonic motion; i.e., a sinusoidal motion of constant period:
$$\theta(t) = \Theta \cos\!\left(\sqrt{\frac{\kappa}{I}}\,t + \varphi\right).$$
Thus, the following equation for the period of oscillation can be extracted from the above results:
$$T = 2\pi\sqrt{\frac{I}{\kappa}}.$$
This period controls the rate of the timepiece.
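As a rough numerical check, the period formula can be evaluated directly. The sketch below is a minimal example with assumed order-of-magnitude values for a modern wristwatch balance; neither figure is taken from any particular movement:

import math

# Minimal sketch: evaluating T = 2*pi*sqrt(I/kappa) for a balance oscillator.
kappa = 7.6e-7  # spring coefficient, N*m/rad (assumed illustrative value)
I = 1.2e-9      # moment of inertia of the balance wheel, kg*m^2 (assumed)

T = 2 * math.pi * math.sqrt(I / kappa)  # period of oscillation, s
f = 1 / T                               # frequency, Hz

# Each full period contains two "beats" (ticks), giving vibrations per hour:
vph = 2 * f * 3600
print(f"T = {T:.4f} s, f = {f:.2f} Hz, about {vph:,.0f} vph")

With these assumed values the result is close to a 4 Hz (28,800 vibrations per hour) movement, a rate used by many modern mechanical watches.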
See also
History of timekeeping devices
References
Timekeeping components
Springs (mechanical) | Balance spring | Technology | 2,659 |
19,989,020 | https://en.wikipedia.org/wiki/Xanthine%20oxidase%20inhibitor | A xanthine oxidase inhibitor is any substance that inhibits the activity of xanthine oxidase, an enzyme involved in purine metabolism. In humans, inhibition of xanthine oxidase reduces the production of uric acid, and several medications that inhibit xanthine oxidase are indicated for treatment of hyperuricemia and related medical conditions including gout. Xanthine oxidase inhibitors are being investigated for management of reperfusion injury.
Xanthine oxidase inhibitors are of two kinds: purine analogues and others. Purine analogues include allopurinol, oxypurinol, and tisopurine. Others include febuxostat, topiroxostat, and inositols (phytic acid and myo-inositol).
In experiments, numerous natural products have been found to inhibit xanthine oxidase in vitro or in model animals (mice, rats). These include three flavonoids that occur in many different fruits and vegetables: kaempferol, myricetin, and quercetin. More generally, planar flavones and flavonols with a 7-hydroxyl group inhibit xanthine oxidase. An essential oil extracted from Cinnamomum osmophloeum inhibits xanthine oxidase in mice. The natural product propolis from selected sources inhibits xanthine oxidase in rats; the specific substance responsible for this inhibition has not been identified, and the generality of these findings is unknown. An extract of leaves of Pistacia integerrima also inhibits xanthine oxidase at a level that appears to merit further research.
In folk medicine the tree fern Cyathea spinulosa (formerly Alsophila spinulosa) has been used for gout, but its most active component, caffeic acid, is only a weak inhibitor of xanthine oxidase.
References
Uric acid | Xanthine oxidase inhibitor | Biology | 413 |
11,127,922 | https://en.wikipedia.org/wiki/Lasiodiplodia%20theobromae | Lasiodiplodia theobromae is a plant pathogen with a very wide host range. It causes rotting and dieback in most species it infects. It is a common post harvest fungus disease of citrus known as stem-end rot. It is a cause of bot canker of grapevine. It also infects Biancaea sappan, a species of flowering tree also known as Sappanwood.
On rare occasions it has been found to cause fungal keratitis, lesions on nail and subcutaneous tissue.
It has been implicated in the widespread mortality of baobab (Adansonia digitata) trees in Southern Africa. A preliminary study found the deaths to have a complex set of causes requiring detailed research.
Host and symptoms
L. theobromae causes diseases such as dieback, blights, and root rot in a variety of different hosts in tropical and subtropical regions. These include guava, coconut, papaya, and grapevine. Botryosphaeria dieback, formerly known as bot canker, is characterized by a range of symptoms that affect grapevine in particular. These symptoms appear on different areas of the plant and can be used, along with other factors, to diagnose the disease. In the trunk and cordon of the plant, symptoms include cankers emerging from wounds, wedge-shaped lesions visible in cross section, and dieback. Dieback is characterized by a 'dead arm' and a loss of spur positions. Further symptoms include stunted shoots in the spring, delayed or absent bud burst at spur positions, bleached canes and necrotic buds. Bud necrosis, bud failure, and the dieback of arms are all a result of the necrosis of the host's vascular system.
It can also affect the fruit of durians such as Durio graveolens.
Disease cycle
The fungus overwinters as pycnidia on the outside of diseased wood. The pycnidia produce and release two-celled, dark brown, striated conidia. The conidia are then dispersed by wind and rain splash, spreading the fungus to other vines, and from one part of the vine to another. Disease develops when conidia land on freshly cut or damaged wood. The conidia germinate in the wood tissue and begin damaging the vascular system. Cankers form around the initial infection point, and eventually complete destruction of the vascular system causes necrosis and dieback of the wood. In some instances, pseudothecia form on the outside of cankers and produce ascospores, which are then dispersed like conidia and infect surrounding wounds.
Management
There are many different procedures that can be implemented to manage dieback in a vineyard. These can either be done to prevent further infection by breaking the disease cycle, or to help plants recover after initial infection. Good hygiene must be practiced when removing infection sources, in order to prevent further infection of the rest of the vineyard as well as to avoid cross-contamination.
References
External links
USDA ARS Fungal Database
Botryosphaeriaceae
Fungal tree pathogens and diseases
Cacao diseases
Fungal citrus diseases
Grapevine trunk diseases
Fungus species | Lasiodiplodia theobromae | Biology | 666 |
28,732,614 | https://en.wikipedia.org/wiki/SpaceUp | SpaceUp is an open-attendance space exploration unconference, where participants decide the topics, schedule, and structure of the event. SpaceUps have been held on both West and East coasts, and in Houston. Common features of SpaceUps are an unconference/barcamp style schedule, Ignite talks, and a moonpie eating contest.
Topics from SpaceUp conferences include: discussion of the NewSpace industry, how to get more people excited about space, and the future direction of NASA, ESA, CSA, and JAXA.
History
The first SpaceUp was held at the San Diego Air & Space Museum on February 27–28, 2010. It was sponsored and organized by the San Diego Space Society. The 2-day event was covered by Spacevidcast and attended by representatives from NASA, Google Lunar X Prize, Masten Space Systems, and Quicklaunch. This event was organized by Chris Radcliff.
The second SpaceUp event was held August 27–28, 2010 in Washington, D.C. at George Washington University. This event was organized by Evadot founder Michael Doornbos.
After several more successful events in the US, the SpaceUp concept crossed the Atlantic in September 2012, with the first SpaceUp unconference in Europe. It was organized by six dedicated space ambassadors at Europlanetarium Genk on 22 and 23 September 2012. SpaceUp Europe triggered several space enthusiasts to organize more European space unconferences, making the concept truly global. SpaceUp Stuttgart in Germany was held on 27 October, followed by SpaceUp Poland in Warsaw on 24 and 25 November. SpaceUp London and Paris were held in 2013. In December 2012 the concept rolled further east with the first event in Asia, SpaceUp India.
Events
References
External links
http://www.spaceup.org – Original SpaceUp website (for San Diego)
http://wiki.spaceup.org
SPACEUP UNCONFERENCES: A 21ST CENTURY GLOBAL APPROACH TO SPACE OUTREACH, presentation by Andreas Hornig at the 64th International Astronautical Congress 2013 in Beijing (YouTube)
http://www.spaceupdc.org – SpaceUp DC site
http://wiki.spaceupdc.org
http://www.spaceuphouston.org – SpaceUp Houston
http://wiki.spaceuphouston.org
http://spaceup.org/near-you/europe/ – SpaceUp Europe
http://spaceupdenver.org – SpaceUp Denver
http://www.spaceup.fr – SpaceUp France: Paris 2013, Toulouse 2014
Unconferences
Space advocacy organizations | SpaceUp | Astronomy | 576 |
2,881,125 | https://en.wikipedia.org/wiki/Chronosophy | Chronosophy is the neologistic designation given by scholar Julius Thomas (J.T.) Fraser to "the interdisciplinary and normative study of time sui generis."
Overview
Etymology
Fraser derived the term from the Ancient Greek: χρόνος, chronos, "time" and σοφία, sophia, "wisdom". Chronosophia is thus defined as the "specific human skill or knowledge . . . pertaining to time . . . [which] all men seem to possess to some degree . . .".
Purpose
Fraser outlined the purpose of the discipline of chronosophy in five intentions, as follows:
to encourage the search for new knowledge related to time;
to set up and apply criteria regarding which fields of knowledge contribute to an understanding of time, and what they may contribute;
to assist in epistemological studies, especially in those related to the structure of knowledge;
to provoke communication between the humanities and the sciences using time as the common theme; and
to help us learn more about the nature of time by providing channels for the direct confrontation of a multitude of views.
Assumptions
According to Fraser, any pursuit of chronosophical knowledge necessarily makes two assumptions:
When specialists speak of 'time', they speak of various aspects of the same entity.
Said entity is amenable to study by the methods of the sciences, can be made a meaningful subject of contemplation by the reflective mind, and can be used as proper material for intuitive interpretation by the creative artist.
Fraser labeled these two assumptions the unity of time. Together they amount to the proposition that all of us, working separately, are nevertheless headed toward the same central idea (i.e., chronosophia).
In contradistinction to the aforementioned, Fraser posits the diversity of time: the existence of time's myriad manifestations, which "hardly needs proof; it is all too apparent."
The continued qualitative and quantitative mediation of the unresolvable conflict between the unity and diversity of time would thus be the sole methodological criterion for measuring chronosophical progress. This conflict manifests itself not so much as between the humanities and the sciences (although this interpretation is cogent and apt), but rather between knowledge felt (i.e., passion) and knowledge understood (i.e., knowledge proper). Fraser envisions the total creativity of a society as being dependent on the effectiveness of "a harmonious dialogue between the two great branches of knowledge." He observes (paraphrasing Giordano Bruno): "The creative activity of the mind consists of the search for the one in the many, for simplicity in variety. There is no better and more fundamental problem than the problem of time in respect to which such [a] search may be conducted. It is always present and always tantalizing, it is the basic material of man's rational and emotive inquiries." Just as a mature individual can reconcile within themselves the unity and diversity of day-to-day noetic existence, so too could a mature social conception of time mediate the difference between—and perhaps ultimately reconcile—the unity and diversity of time.
Organization
Chronosophy defies systematic organization, for—like philosophy—it is a kind of ur-discipline, subsuming all other disciplines through a proposed unifying characteristic: temporality. (Hence, the possibility of producing a branch of knowledge lacking temporal import, e.g. [arguably] ontology and/or metaphysics, remains; however, "atemporality" is still, by definition, a temporal category: a regress ensues.)
Fraser wrote that a successful study of time would "encourage communication across the traditional boundaries of systems of knowledge and seek a framework which . . . may permit interaction of experience and theorizing related to time without regard to the sources of experience and theory." Thus, the only methodological commitment that a chronosopher need make is to interdisciplinarity.
While Fraser neglects to develop a systematic chronosophical methodology in The Voices of Time, he does proffer a selection of idiomatically interdisciplinary categories to spur the research of future scholars:
surveys of historical and current ideas of time in the sciences and in the humanities;
studies of the relation of time to ideas of conceptual extremities such as a) to motion and rest, b) to atomicity and continuity, c) to the spatially very large and very small, and d) to the quantities of singular and many;
comparative analysis of those properties of time that various fields of learning and intuitive expressions designate unproblematically as "the nature of time";
inquiries into the processes and methods whereby man learns to perceive, proceeds to measure, and proposes to reason about time;
exploration of the role of time in the communication of thought and emotion;
search for an understanding of the relation of time to personal identity and death;
research concerning time and organic evolution, time and the psychological development of man, and the role of time in the growth of civilizations; and
determination of the status of chronosophy vis-à-vis the traditional systems of knowledge.
The nature of the above categories would require that chronosophy be regarded as an independent system of experiential, experimental, and theoretical knowledge about time.
General characteristics
In general, chronosophical pursuits are characterized by
expansion beyond or abandonment of [traditional areas of] specialization, and
the espousal of interdisciplinary or pan-disciplinary methodologies;
(1) is a weak criterion, while (2) is a strong one.
Neither of the above criteria make reference to time or temporality; for while the ontological possibility of timeless knowledge must always remain, admission of this possibility begs the question (petitio principii): e.g., what form does timeless knowledge take? how would it come to us? how could we ever be separated from such knowledge to begin with? et cetera ad nauseam. The admission is therefore a paradox (akin to Wittgenstein's seventh proposition in Tractatus Logico-Philosophicus: "Whereof one cannot speak, thereof one must be silent."). We must conclude that the proposal of the possibility of timeless knowledge is both necessary and senseless, a conceptual counterpart to the tautologous nature of the concept of time. Should we come to possess knowledge of that which is "beneath" or "behind" time (or, alternatively, conclude we could never have lost possession of it), there would be no discernible need for further chronosophical inquiry; in the face of such eternal truth, it would instead be chronosophy as currently conceived that would appear both necessary and senseless.
Hence: all disciplines are necessarily chronosophical (until proven otherwise).
Caveat: for the sake of logicality, future manifestations of chronosophy may resemble more closely some methods of knowing than others; however, due to the character of the "problem" of time no chronosophical endeavor could ever be thoroughly purged of its interdisciplinary perspectives: a satisfactory theory of time must necessarily satisfy a wide variety of specifications (i.e., by definition a satisfactory or sufficient chronosophy would accommodate every office of human knowledge as pertains to the subject, time).
Envoi
Why should we afford time this privileged status among our speculative and empirical undertakings?
A Fraserian chronosopher would argue that mediation of the problem(s) of time is essential to the creation and retention of individual and social identity. Hence, as long as we—as individuals and as social groups—continue to partake in the process of clarifying and defining our individual and collective identities over and against those of the world (in whole or in part) around us, the necessarily contemporaneous clarification and definition of the problem(s) of time must, by extension (mutatis mutandis), be universal and continuous.
See also
Time
Temporality
Julius Thomas Fraser
Natural Philosophy
Philosophy of Space and Time
Horology
Cosmology
Eschatology
Ontology
Metaphysics
References
External links
International Society for the Study of Time Homepage
Time
Philosophy of time | Chronosophy | Physics,Mathematics | 1,706 |
41,672,569 | https://en.wikipedia.org/wiki/Alentemol | Alentemol (INN) (developmental code name U-66444B), or alentamol, is a selective dopamine autoreceptor agonist described as an antipsychotic, which was never marketed.
References
Antipsychotics
Dopamine agonists
Abandoned drugs | Alentemol | Chemistry | 66 |
8,676,520 | https://en.wikipedia.org/wiki/Polymethine | Polymethines are compounds made up from an odd number of methine groups (CH) bound together by alternating single and double bonds. Compounds made up from an even number of methine groups are known as polyenes.
Polymethine dyes
Cyanines are synthetic dyes belonging to polymethine group. Anthocyanidins are natural plant pigments belonging to the group of the polymethine dyes.
Polymethines are fluorescent dyes that may be attached to nucleic acid probes for different uses, e.g., to accurately count reticulocytes.
References
Alkenes | Polymethine | Chemistry | 128 |
2,680,508 | https://en.wikipedia.org/wiki/Carminati%E2%80%93McLenaghan%20invariants | In general relativity, the Carminati–McLenaghan invariants or CM scalars are a set of 16 scalar curvature invariants for the Riemann tensor. This set is usually supplemented with at least two additional invariants.
Mathematical definition
The CM invariants consist of 6 real scalars plus 5 complex scalars, making a total of 16 invariants. They are defined in terms of the Weyl tensor $C_{abcd}$ and its right (or left) dual ${}^{\star}C_{abcd}$, the Ricci tensor $R_{ab}$, and the trace-free Ricci tensor
$$S_{ab} = R_{ab} - \frac{1}{4}\,R\,g_{ab}.$$
In the following, it may be helpful to note that if we regard $S^{a}{}_{b}$ as a matrix, then $S^{a}{}_{b}\,S^{b}{}_{c}$ is the square of this matrix, so the trace of the square is $S^{a}{}_{b}\,S^{b}{}_{a}$, and so forth.
The real CM scalars are:
$R = R^{a}{}_{a}$ (the trace of the Ricci tensor),
$R_1 = \frac{1}{4}\,S^{a}{}_{b}\,S^{b}{}_{a}$,
$R_2 = -\frac{1}{8}\,S^{a}{}_{b}\,S^{b}{}_{c}\,S^{c}{}_{a}$,
$R_3 = \frac{1}{16}\,S^{a}{}_{b}\,S^{b}{}_{c}\,S^{c}{}_{d}\,S^{d}{}_{a}$,
together with two further real scalars, $M_3$ and $M_4$, formed from mixed contractions of $S_{ab}$ with the Weyl tensor and its dual.
The complex CM scalars are:
$W_1 = \frac{1}{8}\left(C_{abcd} + i\,{}^{\star}C_{abcd}\right)C^{abcd}$,
$W_2 = -\frac{1}{16}\left(C_{ab}{}^{cd} + i\,{}^{\star}C_{ab}{}^{cd}\right)C_{cd}{}^{ef}\,C_{ef}{}^{ab}$,
$M_1 = \frac{1}{8}\,S^{ab}\,S^{cd}\left(C_{acdb} + i\,{}^{\star}C_{acdb}\right)$,
together with the mixed scalars $M_2$ and $M_5$, formed from higher-order contractions of $S_{ab}$ with the Weyl tensor and its dual.
The CM scalars have the following degrees:
$R$ is linear,
$R_1$ and $W_1$ are quadratic,
$R_2$, $W_2$ and $M_1$ are cubic,
$R_3$, $M_2$, $M_3$ and $M_4$ are quartic,
$M_5$ is quintic.
They can all be expressed directly in terms of the Ricci spinors and Weyl spinors, using Newman–Penrose formalism; see the link below.
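As a small illustration of the matrix viewpoint noted above, the purely Ricci invariants reduce to traces of powers of the mixed-index matrix $S^{a}{}_{b}$. The following minimal sketch uses an arbitrary trace-free 4×4 matrix standing in for $S^{a}{}_{b}$; it illustrates only the trace algebra and ignores the index-raising conventions of a Lorentzian metric:

import numpy as np

# Treat the trace-free Ricci tensor in mixed form, S^a_b, as a 4x4 matrix;
# repeated index contraction is then matrix multiplication, and R1, R2, R3
# are scaled traces of powers of S.  This S is an arbitrary trace-free
# example, not one derived from a physical metric.
S = np.array([[ 0.3,  0.1,  0.0,  0.0],
              [ 0.1, -0.2,  0.0,  0.0],
              [ 0.0,  0.0,  0.4,  0.0],
              [ 0.0,  0.0,  0.0, -0.5]])

assert abs(np.trace(S)) < 1e-12  # trace-free, as S must be

R1 = np.trace(np.linalg.matrix_power(S, 2)) / 4    # quadratic invariant
R2 = -np.trace(np.linalg.matrix_power(S, 3)) / 8   # cubic invariant
R3 = np.trace(np.linalg.matrix_power(S, 4)) / 16   # quartic invariant
print(R1, R2, R3)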
Complete sets of invariants
In the case of spherically symmetric spacetimes or planar symmetric spacetimes, a subset of the CM scalars is known to comprise a complete set of invariants for the Riemann tensor. In the case of vacuum solutions, electrovacuum solutions and perfect fluid solutions, the CM scalars comprise a complete set. Additional invariants may be required for more general spacetimes; determining the exact number (and possible syzygies among the various invariants) is an open problem.
See also
Curvature invariant, for more about curvature invariants in (semi)-Riemannian geometry in general
Curvature invariant (general relativity), for other curvature invariants which are useful in general relativity
References
External links
The GRTensor II website includes a manual with definitions and discussions of the CM scalars.
Implementation in the Maxima computer algebra system
Tensors in general relativity | Carminati–McLenaghan invariants | Physics,Engineering | 405 |
11,512,357 | https://en.wikipedia.org/wiki/Phyllactinia%20angulata | Phyllactinia angulata is a plant pathogen infecting chestnut, beech, oak and elm trees in North America with powdery mildew. Collections on elm were segregated into a new variety, Phyllactinia angulata var. ulmi.
Reports of this species infecting pistachio are probably instead referable to Phyllactinia pistaciae, described in 2003.
References
Fungal tree pathogens and diseases
Fruit tree diseases
Fungi described in 1933
Erysiphales
Fungus species | Phyllactinia angulata | Biology | 106 |
77,859,236 | https://en.wikipedia.org/wiki/French%20Zoosemiotics%20Society | French Zoosemiotics Society () is an academic society, uniting ethologists, zoologists, semioticians (including biosemioticians and ecosemioticians), linguists, veterinarians and philosophers, and promoting a semiotic approach in zoosemiotics and animal studies.
The focus of the society is to promote and facilitate research in animal communication, their intraspecific and interspecific sign systems, as well as human-animal communication studies.
The Society was established in 2018 by scholars of Sorbonne University, National Museum of Natural History, and other universities and institutions of France.
It appears to be the first zoosemiotics society in the world.
The founding president of the Society is Astrid Guillaume.
See also
International Society for Biosemiotic Studies
References
External links
The Society’s website
Jane Goodall Institute France.
SfZ YouTube channel.
Semiotics
Zoology
Biology organizations
Semiotics organizations | French Zoosemiotics Society | Biology | 195 |
3,684,625 | https://en.wikipedia.org/wiki/History%20of%20architecture | The history of architecture traces the changes in architecture through various traditions, regions, overarching stylistic trends, and dates. The beginnings of all these traditions is thought to be humans satisfying the very basic need of shelter and protection. The term "architecture" generally refers to buildings, but in its essence is much broader, including fields we now consider specialized forms of practice, such as urbanism, civil engineering, naval, military, and landscape architecture.
Trends in architecture were influenced, among other factors, by technological innovations, particularly in the 19th, 20th and 21st centuries. The improvement and/or use of steel, cast iron, tile, reinforced concrete, and glass helped for example Art Nouveau appear and made Beaux Arts more grandiose.
Paleolithic
Humans and their ancestors have been creating various types of shelters for at least hundreds of thousands of years, and shelter-building may have been present early in hominin evolution. All great apes will construct "nests" for sleeping, albeit at different frequencies and degrees of complexity. Chimpanzees regularly make nests out of bundles of branches woven together; these vary depending on the weather (nests have thicker bedding when cool and are built with larger, stronger supports in windy or wet weather). Orangutans currently make the most complex nests out of all non-human great apes, complete with roofs, blankets, pillows, and "bunks".
It has been argued that nest-building practices were crucial to the evolution of human creativity and construction skill moreso than tool use, as hominins became required to build nests not just in uniquely adapted circumstances but as forms of signalling. Retaining arboreal features like highly prehensile hands for the expert construction of nests and shelters would have also benefitted early hominins in unpredictable environments and changing climates. Many hominins, especially the earliest ones such as Ardipithecus and Australopithecus retained such features and may have chosen to build nests in trees where available. The development of a "home base" 2 million years ago may have also fostered the evolution of constructing shelters or protected caches. Regardless of the complexity of nest-building, early hominins may still have still slept in more or less 'open' conditions, unless the opportunity of a rock shelter was afforded. These rock shelters could be used as-is with little more amendments than nests and hearths, or in the case of established bases —especially among later hominins— they could be personalized with rock art (in the case of Lascaux) or other types of aesthetic structures (in the case of the Bruniquel Cave among the Neanderthals) In cases of sleeping in open ground, Dutch ethologist Adriaan Kortlandt once proposed that hominins could have built temporary enclosures of thorny bushes to deter predators, which he supported using tests that showed lions becoming averse to food if near thorny branches.
In 2000, archaeologists at Meiji University in Tokyo claimed to have found two pentagonal alignments of post holes on a hillside near the village of Chichibu, interpreting them as two huts around 500,000 years old built by Homo erectus. Currently, the earliest confirmed purpose-built structures are in France at the site of Terra Amata, along with the earliest evidence of artificial fire, c. 400,000 years ago. Due to the perishable nature of shelters of this time, it is difficult to find evidence for dwellings beyond hearths and the stones that may make up a dwelling's foundation. Near Wadi Halfa, Sudan, the Arkin 8 site contains 100,000-year-old circles of sandstone that were likely the anchor stones for tents. In eastern Jordan, post hole markings in the soil give evidence of houses made of poles and thatched brush around 20,000 years ago. In areas where bone, especially mammoth bone, was a viable material, evidence of structures is preserved much more easily, such as the mammoth-bone dwellings of the Mal'ta-Buret' culture 24–15,000 years ago and at Mezhirich 15,000 years ago. The Upper Paleolithic in general is characterized by the expansion and cultural growth of anatomically modern humans (as well as the cultural growth of Neanderthals, despite their steady extinction at this time), and although we currently lack data for dwellings built before this time, the dwellings of this era begin to show more common signs of aesthetic modification, such as at Mezhirich, where engraved mammoth tusks may have formed the "facade" of a dwelling.
10,000–2000 BC
Architectural advances are an important part of the Neolithic period (10,000-2000 BC), during which some of the major innovations of human history occurred. The domestication of plants and animals, for example, led to both new economics and a new relationship between people and the world, an increase in community size and permanence, a massive development of material culture and new social and ritual solutions to enable people to live together in these communities. New styles of individual structures and their combination into settlements provided the buildings required for the new lifestyle and economy, and were also an essential element of change.
Although many dwellings belonging to all prehistoric periods and also some clay models of dwellings have been uncovered enabling the creation of faithful reconstructions, they seldom included elements that may relate them to art. Some exceptions are provided by wall decorations and by finds that equally apply to Neolithic and Chalcolithic rites and art.
In South and Southwest Asia, Neolithic cultures appear soon after 10,000 BC, initially in the Levant (Pre-Pottery Neolithic A and Pre-Pottery Neolithic B) and from there spread eastwards and westwards. There are early Neolithic cultures in Southeast Anatolia, Syria and Iraq by 8000 BC, and food-producing societies first appear in southeast Europe by 7000 BC, and Central Europe by c. 5500 BC (of which the earliest cultural complexes include the Starčevo-Körös (Criş), Linearbandkeramik, and Vinča).
Neolithic settlements and "cities" include:
Göbekli Tepe in Turkey, ca. 9,000 BC
Jericho in Palestine, Neolithic from around 8,350 BC, arising from the earlier Epipaleolithic Natufian culture
Nevali Cori in Turkey, ca. 8,000 BC
Çatalhöyük in Turkey, 7,500 BC
Mehrgarh in Pakistan, 7,000 BC
Herxheim (archaeological site) in Germany, 5,300 BC
Knap of Howar and Skara Brae, the Orkney Islands, Scotland, from 3,500 BC
over 3,000 settlements of the Cucuteni-Trypillian culture, some with populations up to 15,000 residents, flourished in present-day Romania, Moldova and Ukraine from 5,400 to 2,800 BC.
Antiquity
Mesopotamian
Mesopotamia is most noted for its construction of mud-brick buildings and ziggurats, which occupied a prominent place in each city and consisted of an artificial mound, often rising in huge steps, surmounted by a temple. The mound no doubt served to elevate the temple to a commanding position in what was otherwise a flat river valley. The great city of Uruk had a number of religious precincts, containing many temples larger and more ambitious than any buildings previously known.
The word ziggurat is an anglicized form of the Akkadian word ziqqurratum, the name given to the solid stepped towers of mud brick. It derives from the verb zaqaru, ("to be high"). The buildings are described as being like mountains linking Earth and heaven. The Ziggurat of Ur, excavated by Leonard Woolley, is 64 by 46 meters at base and originally some 12 meters in height with three stories. It was built under Ur-Nammu (circa 2100 B.C.) and rebuilt under Nabonidus (555–539 B.C.), when it was increased in height to probably seven stories.
Ancient Egyptian
Modern imaginings of ancient Egypt are heavily influenced by the surviving traces of monumental architecture. Many formal styles and motifs were established at the dawn of the pharaonic state, around 3100 BC. The most iconic Ancient Egyptian buildings are the pyramids, built during the Old and Middle Kingdoms (2600–1800 BC) as tombs for the pharaoh. However, there are also impressive temples, like the Karnak Temple Complex.
The Ancient Egyptians believed in the afterlife. They also believed that in order for the soul (known as ka) to live eternally in the afterlife, the body would have to remain intact for eternity. So they had to create a way to protect the deceased from damage and grave robbers, and thus the mastaba was born. These were adobe structures with flat roofs, which had underground chambers for the coffin, about 30 m down. Imhotep, an ancient Egyptian priest and architect, had to design a tomb for the Pharaoh Djoser. For this, he placed five mastabas one above the next, thereby creating the first Egyptian pyramid, the Pyramid of Djoser at Saqqara (2667–2648 BC), which is a step pyramid. The first smooth-sided pyramid was built by Pharaoh Sneferu, who ruled between 2613 and 2589 BC. The most imposing is the Great Pyramid of Giza, made for Sneferu's son Khufu (2589–2566 BC); it is the last surviving wonder of the ancient world and the largest pyramid in Egypt. The stone blocks used for pyramids were held together by mortar, and the entire structure was covered with highly polished white limestone, with the top capped in gold. What we see today is actually the core structure of the pyramid. Inside, narrow passages led to the royal burial chambers. Despite being strongly associated with Ancient Egypt, pyramids were built by other civilisations too, such as the Maya.
Due to the lack of resources and a shift in power towards the priesthood, ancient Egyptians stepped away from pyramids, and temples became the focal point of cult construction. Just like the pyramids, Ancient Egyptian temples were spectacular and monumental. They evolved from small shrines made of perishable materials to large complexes, and by the New Kingdom (circa 1550–1070 BC) they had become massive stone structures consisting of halls and courtyards. The temple represented a sort of 'cosmos' in stone, a copy of the original mound of creation on which the god could rejuvenate himself and the world. The entrance consisted of a twin gateway (pylon), symbolizing the hills of the horizon. Inside there were columned halls symbolizing a primeval papyrus thicket. These were followed by a series of hallways of decreasing size, until the sanctuary was reached, where the god's cult statue was placed. In ancient times, temples were painted in bright colours, mainly red, blue, yellow, green, orange, and white. Because of the desert climate of Egypt, some of these painted surfaces were preserved well, especially in interiors.
An architectural element specific to ancient Egyptian architecture is the cavetto cornice (a concave moulding), introduced by the end of the Old Kingdom. It was widely used to accentuate the top of almost every formal pharaonic building, and because of its ubiquity it would later decorate many Egyptian Revival buildings and objects.
Harappan
The first urban civilization in the Indian subcontinent is traceable to the Indus Valley civilisation, mainly at Mohenjo-daro and Harappa, now in modern-day Pakistan as well as the western states of the Republic of India. The earliest settlements date to the Neolithic period at Mehrgarh, Balochistan. The civilization's cities were noted for their urban planning, with baked brick buildings, elaborate drainage and water systems, and handicrafts (carnelian products, seal carving). The civilisation transitioned from the Neolithic period into the Chalcolithic period and beyond with its expertise in metallurgy (copper, bronze, lead, and tin). Its urban centres possibly grew to contain between 30,000 and 60,000 individuals, and the civilisation itself may have contained between one and five million individuals.
Greek
Since the advent of the Classical Age in Athens, in the 5th century BC, the Classical way of building has been deeply woven into Western understanding of architecture and, indeed, of civilization itself. From circa 850 BC to circa 300 AD, ancient Greek culture flourished on the Greek mainland, on the Peloponnese, and on the Aegean islands. Ancient Greek architecture is best known for its temples, many of which are found throughout the region; the most well-known are the Parthenon and the Erechtheion, both on the Acropolis of Athens. These temples would later serve as inspiration for Neoclassical architects during the late 18th and 19th centuries. Another important type of Ancient Greek building was the theatre. Both temples and theatres used a complex mix of optical illusions and balanced ratios.
Ancient Greek temples usually consist of a base with continuous stairs of a few steps on each side (known as a crepidoma), a cella (or naos) with a cult statue in it, columns, an entablature, and two pediments, one on the front and another at the back. By the 4th century BC, Greek architects and stonemasons had developed a system of rules for all buildings known as the orders: the Doric, the Ionic, and the Corinthian. They are most easily recognised by their columns (especially by the capitals). The Doric column is stout and basic, the Ionic one is slimmer and has four scrolls (called volutes) at the corners of the capital, and the Corinthian column resembles the Ionic one but with a completely different capital, decorated with acanthus leaves and four scrolls. The frieze also differed by order: while the Doric frieze has metopes and triglyphs with guttae, Ionic and Corinthian friezes consist of one big continuous band with reliefs.
Besides the columns, the temples were highly decorated with sculptures, in the pediments and on the friezes, metopes and triglyphs. Ornaments used by Ancient Greek architects and artists include palmettes, vegetal or wave-like scrolls, lion mascarons (mostly on lateral cornices), dentils, acanthus leaves, bucrania, festoons, egg-and-dart, rais-de-cœur, beads, meanders, and acroteria at the corners of the pediments. Quite often, ancient Greek ornaments are used continuously, as bands. They would later be used in Etruscan and Roman architecture, and in the post-medieval styles that tried to revive Greco-Roman art and architecture, such as the Renaissance, Baroque, and Neoclassical.
Looking at the archaeological remains of ancient and medieval buildings, it is easy to perceive them as limestone and concrete in a grey taupe tone and assume that ancient buildings were monochromatic. However, architecture was polychromed in much of the ancient and medieval world. One of the most iconic ancient buildings, the Parthenon (447–432 BC) in Athens, had details painted with vibrant reds, blues and greens. Like ancient temples, medieval cathedrals were never completely white: most had colored highlights on capitals and columns. This practice of coloring buildings and artworks was abandoned during the early Renaissance, because Leonardo da Vinci and other Renaissance artists, including Michelangelo, promoted a color palette inspired by the ancient Greco-Roman ruins, which, through neglect and constant decay during the Middle Ages, had become white despite being initially colorful. The pigments used in the ancient world were delicate and especially susceptible to weathering. Without the necessary care, colors exposed to rain, snow, dirt, and other factors vanished over time, and in this way ancient buildings and artworks became white, as they were during the Renaissance and are today.
Celtic
Celtic architecture, in its broadest sense, refers to the styles and structures associated with the Celtic peoples who once inhabited a large part of Europe, including parts of modern-day France, Germany, the British Isles, and beyond. This architecture is difficult to define strictly because the Celts did not have a unified, standardized architectural style across the different regions they inhabited. However, general characteristics of Celtic architecture are shared, for example, in structures of central Europe, such as those in Germany and France, which provide insights into the material culture and architectural forms of the Celts in these regions.
Many Celtic structures, particularly in the earlier periods, were made of wood, which has not survived as well as stone or other materials.
The Celts often built round houses and settlements, with circular huts (or roundhouses) being the most common residential structures.
The Celts were skilled in creating defensive structures, such as hillforts, which included ditches, ramparts, and palisades.
In later periods, especially during the Iron Age, some Celtic groups began constructing stone buildings, such as temples, shrines, and more permanent dwellings.
One of the most famous Celtic sites is the Heuneburg, located on the Swabian Jura in Germany. Heuneburg was a large Celtic settlement and a key center of power in the late Hallstatt and early La Tène periods. The site is famous for its fortifications, including large earthworks and timber palisades, indicative of the Celtic emphasis on defensive architecture.
Mont Lassois is another important Celtic archaeological site, located in the Burgundy region of eastern France, near the town of Montbard in the Côte-d'Or department. The site is notable for being one of the largest and most significant Celtic oppida (fortified settlements) of the La Tène period (approximately 450 BC to 1 BC). Mont Lassois offers crucial insights into Celtic urban planning, architecture, and the socio-political organization of the Celtic tribes in Gaul just before the Roman conquest.
The Glauberg Celtic hillfort or oppidum in Hesse, Germany, consists of a fortified settlement and several burial mounds, "a princely seat of the late Hallstatt and early La Tène periods." Archaeological discoveries in the 1990s place the site among the most important early Celtic centres in Europe. It provides unprecedented evidence on Celtic burial, sculpture and monumental architecture.
Roman
The architecture of ancient Rome has been one of the most influential in the world. Its legacy is evident throughout the medieval and early modern periods, and Roman buildings continue to be reused in the modern era in both New Classical and Postmodern architecture. It was particularly influenced by Greek and Etruscan styles. A range of temple types was developed during the republican years (509–27 BC), modified from Greek and Etruscan prototypes.
Wherever the Roman army conquered, it established towns and cities, spreading the empire and advancing its architectural and engineering achievements. While the most important works are to be found in Italy, Roman builders also found creative outlets in the western and eastern provinces, of which the best preserved examples are in modern-day North Africa, Turkey, Syria and Jordan. Extravagant projects appeared, like the Arch of Septimius Severus in Leptis Magna (present-day Libya, built in 216 AD), with broken pediments on all sides, or the Arch of Caracalla in Theveste (present-day Algeria, built in 214 AD), with paired columns on all sides, projecting entablatures and medallions with divine busts. Because the empire was formed from multiple nations and cultures, some buildings were the product of combining the Roman style with local tradition. An example is the Palmyra Arch (present-day Syria, built in 212–220), some of its arches being embellished with a repeated band design, of Eastern origin, consisting of four ovals within a circle around a rosette.
Among the many Roman architectural achievements were domes (which were created for temples), baths, villas, palaces and tombs. The best-known example is the Pantheon in Rome, the largest surviving Roman dome, with a large oculus at its centre. Another important innovation was the rounded stone arch, used in arcades, aqueducts and other structures. Besides the Greek orders (Doric, Ionic and Corinthian), the Romans invented two more. The Tuscan order was influenced by the Doric, but with un-fluted columns and a simpler entablature with no triglyphs or guttae, while the Composite was a mixed order, combining the volutes of the Ionic capital with the acanthus leaves of the Corinthian.
Between 30 and 15 BC, the architect and engineer Marcus Vitruvius Pollio published a major treatise, De architectura, which influenced architects around the world for centuries. As the only treatise on architecture to survive from antiquity, it has been regarded since the Renaissance as the first book on architectural theory, as well as a major source on the canon of classical architecture.
Like the Greeks with their theatres, the Romans built large venues for public spectacle: the amphitheatres. The largest amphitheatre ever built, the Colosseum in Rome, could hold around 50,000 spectators. Another iconic Roman structure that demonstrates their precision and technological advancement is the Pont du Gard in southern France, the highest surviving Roman aqueduct.
Americas (Pre-Columbian)
For over 3,000 years before Europeans 'discovered' America, complex societies had been established across North, Central and South America. The most complex were in Mesoamerica, notably the Mayans, the Olmecs and the Aztecs, as well as the Incas in South America. Structures and buildings were often aligned with astronomical features or with the cardinal directions.
Mesoamerica
Much of Mesoamerican architecture developed through cultural exchange – for example, the Aztecs learnt much from earlier Mayan architecture. Many cultures built entire cities, with monolithic temples and pyramids decoratively carved with animals, gods and kings. Most of these cities had a central plaza with governmental buildings and temples, plus public ball courts, or tlachtli, on raised platforms. As in ancient Egypt, pyramids were built here too, generally stepped ones. They were probably not used as burial chambers, but had important religious sites at the top. They had few rooms, as interiors mattered less than the ritual presence of these imposing structures and the public ceremonies they hosted; so platforms, altars, processional stairs, statuary, and carving were all important.
Andes
Inca architecture originated from the Tiwanaku styles, founded in the 2nd century BC. The Incas used topography and local materials in their designs, with the capital city of Cuzco still containing many examples. The famous Machu Picchu royal estate is a surviving example, along with Sacsayhuamán and Ollantaytambo. The Incas also developed a road system along the western side of the continent, placing their distinctive architecture along the way and visually asserting their imperial rule along the frontier. Other groups, such as the Muisca, did not construct grand architecture of stone, but rather built with materials like wood and clay.
South Asia
After the fall of the Indus Valley civilisation, South Asian architecture entered the Dharmic period, which saw the development of Ancient Indian architectural styles; these further developed into various unique forms in the Middle Ages, later combining with Islamic styles and, eventually, other global traditions.
Ancient Buddhist
Buddhist architecture developed in the Indian subcontinent between the 4th and 2nd centuries BC, and spread first to China and then further across Asia. Three types of structures are associated with the religious architecture of early Buddhism: monasteries (viharas), places to venerate relics (stupas), and shrines or prayer halls (chaityas, also called chaitya grihas), which later came to be called temples in some places. The most iconic Buddhist type of building is the stupa, a domed structure containing relics, used as a place of meditation to commemorate the Buddha. The dome symbolised the infinite space of the sky.
Buddhism had a significant influence on Sri Lankan architecture after its introduction, and ancient Sri Lankan architecture was mainly religious, with over 25 styles of Buddhist monasteries. Monasteries were designed using the Manjusri Vasthu Vidya Sastra, which outlines the layout of the structure.
After the fall of the Gupta empire, Buddhism mainly survived in Bengal under the Palas, and had a significant impact on the pre-Islamic Bengali architecture of that period.
Ancient Hindu
Across the Indian subcontinent, Hindu architecture evolved from simple rock-cut cave shrines to monumental temples. From the 4th to 5th centuries AD, Hindu temples were adapted to the worship of different deities and regional beliefs, and by the 6th or 7th centuries larger examples had evolved into towering brick or stone-built structures that symbolise the sacred five-peaked Mount Meru. Influenced by early Buddhist stupas, the architecture was not designed for collective worship, but had areas for worshippers to leave offerings and perform rituals.
Many Indian architectural styles for structures such as temples, statues, homes, markets, gardens and planning are as described in Hindu texts. The architectural guidelines survive in Sanskrit manuscripts and in some cases also in other regional languages. These include the Vastu shastras, Shilpa Shastras, the Brihat Samhita, architectural portions of the Puranas and the Agamas, and regional texts such as the Manasara among others.
Since this architectural style emerged in the classical period, it has had a considerable influence on various medieval architectural styles like that of the Gurjaras, Dravidians, Deccan, Odias, Bengalis, and the Assamese.
Maru Gurjara
This style of North Indian architecture has been observed in Hindu as well as Jain places of worship and congregation. It emerged in the 11th to 13th centuries under the Chaulukya (Solanki) dynasty. It eventually became more popular among the Jain communities, who spread it in the greater region and across the world. These structures have unique features such as a large number of projections on external walls with sharply carved statues, and several urushringa spirelets on the main shikhara.
Himalayan
The Himalayas are inhabited by various peoples, including the Paharis, Sino-Tibetans, Kashmiris, and many more groups. Because these groups come from different religious and ethnic backgrounds, the architecture has also had multiple influences. Given the logistical difficulties and slower pace of life in the Himalayas, artisans had the time to make intricate wood carvings and paintings, accompanied by ornamental metalwork and stone sculptures, which are reflected in religious as well as civic and military buildings. These styles exist in different forms from Tibet and Kashmir to Assam and Nagaland. A common feature is the slanted, layered roofs on temples, mosques, and civic buildings.
Dravidian
This is an architectural style that emerged in the southern part of the Indian subcontinent and in Sri Lanka. It includes Hindu temples with a unique style featuring a shorter pyramidal tower over the garbhagriha or sanctuary, called a vimana, whereas in the north temples have taller towers, usually bending inwards as they rise, called shikharas. It also includes secular buildings that may or may not have slanted roofs, depending on the geographical region. In the Tamil country, this style is influenced by the Sangam period as well as the styles of the great dynasties that ruled it. The style varies in the region to its west, Kerala, where geographic factors such as western trade and the monsoons resulted in sloped roofs. Further north, the Karnata Dravida style varies based on the diversity of influences, often revealing much about the artistic trends of the rulers of twelve different dynasties.
Kalinga
The ancient Kalinga region corresponds to the present-day eastern Indian areas of Odisha, West Bengal and northern Andhra Pradesh. Its architecture reached a peak between the 9th and 12th centuries under the patronage of the Somavamsi dynasty of Odisha. Lavishly sculpted with hundreds of figures, Kalinga temples usually feature repeating forms such as horseshoes. Within the protective walls of the temple complex are three main buildings with distinctive curved towers called deul or deula and prayer halls called jagmohan.
East and Southeast Asia
Chinese and Confucian culture has had a significant influence on the art and architecture in the Sinosphere (mainly Vietnam, Korea, Japan).
China and Vietnam
What is recognised today as Chinese culture has its roots in the Neolithic period (10,000–2000 BC), covering the cultural sites of Yangshao, Longshan, and Liangzhu in central China. Sections of present-day north-east China also contain sites of the Neolithic Hongshan culture that manifested aspects of proto-Chinese culture. Native Chinese belief systems included naturalistic, animistic and hero worship. In general, open-air platforms (tan, or altar) were used for worshipping naturalistic deities, such as the gods of wind and earth, whereas formal buildings (miao, or temple) were for heroes and deceased ancestors.
Most early buildings in China were timber structures. Columns with sets of brackets on the face of the buildings, mostly in even numbers, made the central intercolumnar space the largest interior opening. Heavily tiled roofs sat squarely on the timber building, with walls constructed in brick or pounded earth.
The transmission of Buddhism into China around the 1st century AD led to a new era of religious practices, and so to new building types. Places of worship in the form of cave temples appeared in China, based on Indian rock-cut ones. Another new building type introduced by Buddhism was the Chinese form of the stupa (ta), or pagoda. In India, stupas were erected to commemorate well-known people or teachers; consequently, the Buddhist tradition adapted the structure to remember the great teacher, the Buddha. The Chinese pagoda shared a similar symbolism with the Indian stupa and was built with sponsorship mainly from imperial patrons who hoped to gain earthly merits for the next life. Buddhism reached its peak from the 6th to the 8th centuries, when there was an unprecedented number of monasteries throughout China. More than 4,600 official and 40,000 unofficial monasteries were built. They varied in size by the number of cloisters they contained, ranging from 6 to 120. Each cloister consisted of a main stand-alone building – a hall, pagoda or pavilion – surrounded by a covered corridor in a rectangular compound served by a gate building.
Japan and Korea
Korean architecture, especially that of the Joseon period and later, showcases Ming and Qing influences.
Traditionally, Japanese architecture was made of wood, with fusuma (sliding doors) in place of walls, allowing internal space to be altered to suit different purposes. The introduction of Buddhism in the mid-6th century, via the neighbouring Korean kingdom of Paekche, initiated large-scale wooden temple building with an emphasis on simplicity, and much of the architecture was imported from China and other Asian cultures. By the end of that century, Japan was constructing Continental-style monasteries, notably the temple known as Horyu-ji in Ikaruga. In contrast with Western architecture, Japanese structures rarely use stone, except for specific elements such as foundations. Walls are light, thin, never load-bearing and often movable.
Khmer
From the start of the 9th century to the early 15th century, Khmer kings ruled over a vast Hindu-Buddhist empire in Southeast Asia. Angkor, in present-day Cambodia, was its capital city, and most of its surviving buildings are east-facing stone temples, many of them constructed in pyramidal, tiered form consisting of five square structures with towers, or prasats, that represent the sacred five-peaked Mount Meru of Hindu, Jain and Buddhist doctrine. As the residences of gods, temples were made of durable materials such as sandstone, brick or laterite, a clay-like substance that dries hard.
Cham architecture in Vietnam also follows a similar style.
Sub-Saharan Africa
Traditional Sub-Saharan African architecture is diverse, varying significantly across regions. Traditional house types include huts, sometimes consisting of one or two rooms, as well as various larger and more complex structures.
West African and Bantu styles
In much of West Africa, rectangular houses with peaked roofs, sometimes consisting of several rooms and courtyards, are traditionally found, sometimes decorated with adobe reliefs, as among the Ashanti of Ghana, or with carved pillars, as among the Yoruba people of Nigeria, especially in palaces and the dwellings of the wealthy. Besides the regular rectangular dwelling with a sharp roof, widespread in West Africa and Madagascar, there are also other types of houses: the beehive house, made from a circle of stones topped with a domed roof, and the round house with a cone-shaped roof. The first type, which also existed in the Americas, is characteristic especially of Southern Africa, where it was used by Bantu-speaking groups in southern and parts of east Africa and made with mud, poles, thatch, and cow dung (rectangular houses were more common among the Bantu-speaking peoples of the greater Congo region and central Africa). The round hut with a cone-shaped roof is widespread especially in Sudan and Eastern Africa, but is also present in Colombia and New Caledonia, as well as in the Western Sudan and Sahel regions of west Africa, where such huts are sometimes arranged into compounds. A distinct style of traditional wooden architecture exists among the Grassland peoples of Cameroon, such as the Bamileke.
In several West African societies, including the kingdom of Benin (and those of other Edo peoples) and the kingdoms of the Yoruba and Hausa, at sites like Jenne-Jeno (a pre-Islamic city in Mali), and elsewhere, towns and cities were surrounded by large walls of mud brick or adobe, and sometimes by monumental moats and earthworks, such as Sungbo's Eredo (in the Nigerian Yoruba kingdom of Ijebu) and the Walls of Benin (of the Nigerian Kingdom of Benin). In medieval southern Africa, a tradition existed of fortified stone settlements such as Great Zimbabwe and Khami.
The famed Benin City of southwest Nigeria (capital of the Kingdom of Benin), destroyed by the British Punitive Expedition of 1897, was a large complex of homes in coursed clay, with hipped roofs of shingles or palm leaves. The palace had a sequence of ceremonial rooms and was decorated with brass plaques. It was surrounded by a monumental complex of earthworks and walls whose construction is thought to have begun by the early Middle Ages.
Sahelian
In the Western Sahel region, Islamic influence was a major contributing factor in architectural development from the later ages of the Kingdom of Ghana. At Kumbi Saleh, locals lived in dome-shaped dwellings in the king's section of the city, surrounded by a great enclosure. Traders lived in stone houses in a section which possessed 12 beautiful mosques, as described by al-Bakri, one of them centered on Friday prayer. The king is said to have owned several mansions, one of which was sixty-six feet long, forty-two feet wide, contained seven rooms, was two stories high, and had a staircase, with the walls and chambers filled with sculpture and painting.
Sahelian architecture initially grew from the two cities of Djenné and Timbuktu. The Sankore Mosque in Timbuktu, constructed from mud on timber, was similar in style to the Great Mosque of Djenné. The rise of kingdoms in the West African coastal region produced architecture which drew on indigenous traditions, utilizing wood, mud-brick and adobe. Though later acquiring Islamic influences, the style also had roots in local pre-Islamic building styles, such as those found in ancient settlements like Jenne-Jeno, Dia, Mali, and Dhar Tichitt, some of which employed a traditional sahelian style of cylindrical mud brick.
Ethiopian
Ethiopian architecture (including that of modern-day Eritrea) expanded from the Aksumite style and incorporated new traditions with the expansion of the Ethiopian state. Styles incorporated more wood and rounder structures in domestic architecture in the centre of the country and the south, and these stylistic influences were manifested in the construction of churches and monasteries. Throughout the medieval period, Aksumite architecture and its monolithic tradition persisted, with its influence strongest in the early medieval (Late Aksumite) and Zagwe periods (when the rock-cut monolithic churches of Lalibela were carved). Throughout the medieval period, and especially from the 10th to 12th centuries, churches were hewn out of rock throughout Ethiopia, especially in the northernmost region of Tigray, which was the heart of the Aksumite Empire. The most famous examples of Ethiopian rock-hewn architecture are the eleven monolithic churches of Lalibela, carved out of the red volcanic tuff found around the town. During the early modern period in Ethiopia, the absorption of new, diverse influences such as Baroque, Arab, Turkish and Gujarati styles began with the arrival of Portuguese Jesuit missionaries in the 16th and 17th centuries.
Oceania
Most Oceanic buildings consist of huts, made of wood and other vegetal materials. Art and architecture have often been closely connected – for example, storehouses and meetinghouses are often decorated with elaborate carvings. The architecture of the Pacific Islands was varied and sometimes large in scale. Buildings reflected the structure and preoccupations of the societies that constructed them, with considerable symbolic detail. Technically, most buildings in Oceania were no more than simple assemblages of poles held together with cane lashings; only in the Caroline Islands were complex methods of joining and pegging known.
An important Oceanic archaeological site is Nan Madol in the Federated States of Micronesia. Nan Madol was the ceremonial and political seat of the Saudeleur Dynasty, which united Pohnpei's estimated 25,000 people until about 1628. Set on islets between the main island of Pohnpei and Temwen Island, it was a scene of human activity as early as the first or second century AD. By the 8th or 9th century, islet construction had started, with construction of the distinctive megalithic architecture beginning in 1180–1200 AD.
Islamic
Due to the extent of the Islamic conquests, Islamic architecture encompasses a wide range of architectural styles from the foundation of Islam (7th century) to the present day. Early Islamic architecture was influenced by Roman, Byzantine, Persian and Mesopotamian architecture, and by that of all the other lands which the early Muslim conquests reached in the 7th and 8th centuries. Further east, it was also influenced by Chinese and Indian architecture as Islam spread to Southeast Asia. This wide and long history has given rise to many local architectural styles, including but not limited to: Umayyad, Abbasid, Persian, Moorish, Fatimid, Mamluk, Ottoman, Indo-Islamic (particularly Mughal), Sino-Islamic and Sahelian architecture.
Some distinctive structures in Islamic architecture are mosques, madrasas, tombs, palaces, baths, and forts. Notable types of Islamic religious architecture include hypostyle mosques, domed mosques and mausoleums, structures with vaulted iwans, and madrasas built around central courtyards. In secular architecture, major examples of preserved historic palaces include the Alhambra and the Topkapi Palace. Islam does not encourage the worship of idols; therefore the architecture tends to be decorated with Arabic calligraphy (including Qur'anic verses or other poetry) and with more abstract motifs such as geometric patterns, muqarnas, and arabesques, as opposed to illustrations of scenes and stories.
European
Medieval
Surviving examples of medieval secular architecture mainly served for defense across various parts of Europe. Castles and fortified walls provide the most notable remaining non-religious examples of medieval architecture. New types of civic, military and religious buildings in new styles began to appear in this region during this period.
Byzantine
Byzantine architects built city walls, palaces, hippodromes, bridges, aqueducts, and churches. They built many types of churches, including the basilica (the most widespread type, and the one that reached the greatest development). After the early period, the most common layout was the cross-in-square with five domes, also found in Moscow, Novgorod and Kiev, as well as in Romania, Bulgaria, Serbia, North Macedonia and Albania. Through modifications and adaptations of local inspiration, the Byzantine style would serve as the main source of inspiration for architectural styles in all Eastern Orthodox countries. For example, in Romania, the Brâncovenesc style is based heavily on Byzantine architecture, but also has individual Romanian characteristics.
Just as the Parthenon is the most famous building of Ancient Greek architecture, Hagia Sophia remains the iconic church of Orthodox Christianity. In Greek and Roman temples, the exterior was the most important part of the temple, where sacrifices were made; the interior, where the cult statue of the deity to whom the temple was built was kept, often had limited access by the general public. But because Christian liturgies are held in the interior of churches, Byzantine exteriors usually have little if any ornamentation.
Byzantine architecture often featured marble columns, coffered ceilings and sumptuous decoration, including the extensive use of mosaics with golden backgrounds. The building material used by Byzantine architects was no longer the marble so prized by the Ancient Greeks; they used mostly stone and brick, as well as thin alabaster sheets for windows. Mosaics were used to cover brick walls and any other surface where fresco would not last. Good examples of mosaics from the proto-Byzantine era are in Hagios Demetrios in Thessaloniki (Greece), the Basilica of Sant'Apollinare Nuovo and the Basilica of San Vitale, both in Ravenna (Italy), and Hagia Sophia in Istanbul.
Armenia
From the very beginning of the formation of feudal relations, the architecture and urban planning of Armenia entered a new stage. The ancient Armenian cities experienced economic decline; only Artashat and Tigranakert retained their importance, while the cities of Dvin and Karin (Erzurum) grew in importance. The construction of the city of Arshakavan by Arshak II, king of Great Armenia, was never completed. Christianity brought to life a new architecture of religious buildings, which was initially nourished by the traditions of the old, ancient architecture.
Churches of the 4th–5th centuries are mainly basilicas (Kasakh, 4th–5th centuries; Ashtarak, 5th century; Akhts, 4th century; Yeghvard, 5th century). Some basilicas of Armenian architecture belong to the so-called "Western type" of basilica churches. Of these, the most famous are the churches of Tekor (5th century), Yererouk (4th–5th centuries), Dvin (470) and Tsitsernavank (4th–5th centuries). The three-nave Yereruyk basilica stands on a six-step stylobate, presumably built on the site of an earlier pre-Christian temple. The basilicas of Karnut (5th century), Yeghvard (5th century), Garni (4th century), Zovuni (5th century), Tsaghkavank (6th century), Dvin (553–557), Talin (5th century), Tanaat (491), Jarjaris (4th–5th centuries) and Lernakert (4th–5th centuries), among others, have also been preserved.
Carolingian and Ottonian
Carolingian architecture refers to the style of the Carolingian Empire, particularly under Charlemagne (r. 768–814) and his successors. It is considered a revival of Roman architectural forms, blending the classical heritage of the Roman Empire with new Christian ideals.
Churches followed the Roman basilica plan, with a long, rectangular nave, aisles, and an apse. Charlemagne’s Palatine Chapel at Aachen is a prime example, with its octagonal shape influenced by early Christian and Byzantine architecture.
Carolingian architects used barrel vaults and groin vaults, inspired by Roman engineering, to create large, stable roofs. The Palatine Chapel in Aachen (792–797) is known for its ribbed vaulting.
Columns, arches, and entablatures were borrowed from Roman architecture. Churches were designed to express the divine order, reflecting the Carolingian Empire's Christian imperial ideals.
Ottonian architecture evolved during the reign of the Ottonian dynasty (c. 919–1024 AD). This style was marked by both the continuation of Carolingian forms and the integration of new Byzantine and Romanesque elements.
Ottonian churches often retained a basilica plan but expanded it with double aisles or additional chapels.
The westwork—a monumental, fortress-like façade—became a characteristic feature of Ottonian churches. The Church of Saint Cyriakus at Gernrode features an iconic westwork with towers and a large entrance. The first church towers developed out of westworks.
The Ottonians advanced vaulting techniques and used crypts more extensively. Magdeburg Cathedral, first founded under Otto I in the 10th century, was one of the key buildings of this period, symbolizing imperial power and Christian devotion.
Ottonian architecture was known for its elaborate mosaics, frescoes, and sculptures that incorporated both Byzantine and local traditions. Manuscripts from the period also show the richness of Ottonian visual culture.
Ottonian rulers built grand palaces, continuing the Carolingian legacy of the Aachen Palace, but with added sophistication. The Imperial Palace of Goslar and other imperial buildings reinforced the emperor’s authority.
Romanesque
The term 'Romanesque' is rooted in the 19th century, when it was coined to describe medieval churches built from the 10th to the 12th centuries, before the rise of steeply pointed arches, flying buttresses and other Gothic elements. This style of architecture emerged nearly simultaneously in multiple countries (France, Germany, Italy, Spain). For 19th-century critics, the Romanesque reflected the architecture of stonemasons who evidently admired the heavy barrel vaults and intricate carved capitals of the ancient Romans, but whose own architecture was considered derivative and degenerate, lacking the sophistication of its classical models.
Scholars in the 21st century are less inclined to understand the architecture of this period as a 'failure' to reproduce the achievements of the past, and are far more likely to recognise its profusion of experimental forms as a series of creative new inventions. At the same time, however, research has questioned the value of Romanesque as a stylistic term. On the surface, it provides a convenient designation for buildings that share a common vocabulary of rounded arches and thick stone masonry, and that appear between the Carolingian revival of classical antiquity in the 9th century and the swift evolution of Gothic architecture in the second half of the 12th century. One problem, however, is that the term encompasses a broad array of regional variations, some with closer links to Rome than others. Nor is the distinction between Romanesque architecture and its immediate predecessors and followers at all clear. There is little evidence that medieval viewers were concerned with the stylistic distinctions that we observe today, making the slow evolution of medieval architecture difficult to separate into neat chronological categories. Nevertheless, Romanesque remains a useful word despite its limitations, because it reflects a period of intensive building activity that maintained some continuity with the classical past, but freely reinterpreted ancient forms in a distinctive new manner.
Romanesque cathedrals can be easily differentiated from Gothic and Byzantine ones, since they are characterized by the wide use of thick piers and columns, round arches and severity. Here, the possibilities of the round-arch arcade in both a structural and a spatial sense were once again exploited to the full. Unlike the sharp pointed arch of the later Gothic, the Romanesque round arch required the support of massive piers and columns. In comparison to Byzantine churches, Romanesque ones tend to lack complex ornamentation both on the exterior and interior. An example of this is the Périgueux Cathedral (Périgueux, France), built in the early 12th century and designed on the model of St. Mark's Basilica in Venice, but lacking mosaics, leaving its interior very austere and minimalistic.
Gothic
Gothic architecture began with a series of experiments, which were conducted to fulfil specific requests by patrons and to accommodate the ever-growing number of pilgrims visiting sites that housed precious relics. Pilgrims in the high Middle Ages (circa 1000 to 1250 AD) increasingly travelled to well-known pilgrimage sites, but also to local sites where local and national saints were reputed to have performed miracles. The churches and monasteries housing important relics therefore wanted to heighten the popularity of their respective saints and build appropriate shrines for them. These shrines were not merely gem-encrusted reliquaries, but more importantly took the form of powerful architectural settings characterised by coloured light emitting from the large areas of stained glass. The use of stained glass, however, is not the only defining element of Gothic architecture and neither are the pointed arch, the ribbed vault, the rose window or the flying buttress, as many of these elements were used in one way or another in preceding architectural traditions. It was rather the combination and constant refinement of these elements, along with the quick response to the rapidly changing building techniques of the time, that fuelled the Gothic movement in architecture.
Consequently, it is difficult to point to one element or the exact place where Gothic first emerged; however, it is traditional to begin a discussion of Gothic architecture with the Basilica of St Denis (circa 1135–1344) and its patron, Abbot Suger, who began to rebuild the west front and the choir of the church. As he wrote in his De Administratione, the old building could no longer accommodate the large volumes of pilgrims who were coming to venerate the relics of St Denis, and the solution for this was twofold: a west façade with three large portals, and an innovative new choir, which combined an ambulatory with radiating chapels that were unique in not being separated by walls. Instead, a row of slim columns was inserted between the chapels and the choir arcade to support the rib vaults. The result enabled visitors to circulate around the altar and come within reach of the relics without actually disrupting the altar space, while also experiencing the large stained-glass windows within the chapels. As confirmed by Suger, the desire for more stained glass was not necessarily to bring more daylight into the building, but rather to fill the space with a continuous ray of colorful light, rather like mosaics or precious stones, which would make the wall vanish. The demand for ever more stained-glass windows, and the search for techniques that would support them, are constant throughout the development of Gothic architecture, as is evident in the writings of Suger, who was fascinated by the mystical quality of such lighting.
Brick Gothic was a specific style of Gothic architecture common in Northeast and Central Europe, especially in the regions in and around the Baltic Sea, which lack natural deposits of building stone. The buildings are essentially built of brick.
Renaissance
During the Renaissance, Italy consisted of many states, and intense rivalry between them generated an increase in technical and artistic developments. The Medici Family, an Italian banking family and political dynasty, is famous for its financial support of Renaissance art and architecture.
The period began in around 1452, when the architect and humanist Leon Battista Alberti (1404–1472) completed his treatise De Re Aedificatoria (On the Art of Building) after studying the ancient ruins of Rome and Vitruvius's De Architectura. His writings covered numerous subjects, including history, town planning, engineering, sacred geometry, humanism and philosophies of beauty, and set out the key elements of architecture and its ideal proportions. In the last decades of the 15th century, artists and architects began to visit Rome to study the ruins, especially the Colosseum and the Pantheon. They left behind precious records of their studies in the form of drawings. While humanist interest in Rome had been building up over more than a century (dating back at least to Petrarch in the 14th century), antiquarian considerations of monuments had focused on literary, epigraphic and historical information rather than on the physical remains. Although some artists and architects, such as Filippo Brunelleschi (1377–1446), Donatello (circa 1386–1466) and Leon Battista Alberti, are reported to have made studies of Roman sculpture and ruins, almost no direct evidence of this work survives. By the 1480s, prominent architects, such as Francesco di Giorgio (1439–1502) and Giuliano da Sangallo (circa 1445–1516), were making numerous studies of ancient monuments, undertaken in ways that demonstrated that the process of transforming the model into a new design had already begun. In many cases, drawing ruins in their fragmentary state necessitated a leap of imagination, as Francesco himself readily admitted in his annotation to his reconstruction of the Campidoglio, noting that it was 'largely imagined by me, since very little can be understood from the ruins'.
Soon, grand buildings were constructed in Florence using the new style, like the Pazzi Chapel (1441–1478) or the Palazzo Pitti (1458–1464). The Renaissance began in Italy, and slowly spread to other parts of Europe, with varying interpretations.
Since Renaissance art was an attempt to revive the culture of Ancient Rome, it uses largely the same ornaments as Ancient Greek and Roman art. However, because most if not all of the sources available to Renaissance artists were Roman, Renaissance architecture and applied arts make wide use of certain motifs and ornaments specific to Ancient Rome. The most iconic is the margent, a vertical arrangement of flowers, leaves or hanging vines, used on pilasters. Another ornament associated with the Renaissance is the round medallion containing a profile of a person, similar to ancient cameos. The Renaissance, Baroque, Rococo, and other post-medieval styles use putti (chubby baby angels) much more often than Greco-Roman art and architecture did. An ornament of Ancient Roman descent reintroduced during the Renaissance, and used again in later styles, is the cartouche, an oval or oblong design with a slightly convex surface, typically edged with ornamental scrollwork.
Worldwide
Baroque
The Baroque emerged from the Counter-Reformation as an attempt by the Catholic Church in Rome to convey its power and to emphasize the magnificence of God. The Baroque and its late variant the Rococo were the first truly global styles in the arts, dominating more than two centuries of art and architecture in Europe, Latin America and beyond, from circa 1580 to circa 1800. Born in the painting studios of Bologna and Rome in the 1580s and 1590s, and in Roman sculptural and architectural ateliers in the second and third decades of the 17th century, the Baroque spread swiftly throughout Italy, Spain and Portugal, Flanders, France, the Netherlands, England, Scandinavia, and Russia, as well as to central and eastern European centres from Munich (Germany) to Vilnius (Lithuania). The Portuguese, Spanish and French empires and the Dutch trading network had a leading role in spreading the two styles into the Americas and colonial Africa and Asia, to places such as Lima, Mozambique, Goa and the Philippines. Because the style spread to regions with different architectural traditions, multiple kinds of Baroque appeared depending on location, different in some aspects but similar overall. For example, French Baroque appeared severe and detached by comparison, preempting Neoclassicism and the architecture of the Age of Enlightenment. Hybrid Native American/European Baroque architecture first appeared in South America (as opposed to Mexico) in the late 17th century, after the indigenous symbols and styles that characterize this unusual variant of Baroque had been kept alive over the preceding century in other media; a very good example of this is the Jesuit Church in Arequipa (Peru).
The first Baroque buildings were cathedrals, churches and monasteries, soon joined by civic buildings, mansions, and palaces. Characterized by dynamism, the style saw walls, façades and interiors curve for the first time, a good example being San Carlo alle Quattro Fontane in Rome. Baroque architects took the basic elements of Renaissance architecture, including domes and colonnades, and made them higher, grander, more decorated, and more dramatic. The interior effects were often achieved with the use of quadratura, or trompe-l'œil painting combined with sculpture: the eye is drawn upward, giving the illusion that one is looking into the heavens. Clusters of sculpted angels and painted figures crowd the ceiling. Light was also used for dramatic effect; it streamed down from cupolas and was reflected from an abundance of gilding. Solomonic columns were often used to give an illusion of upward motion, and other decorative elements occupied every available space. In Baroque palaces, grand stairways became a central element. Besides architecture, Baroque painting and sculpture are characterized by dynamism too, in contrast with the static and peaceful character of Renaissance art.
Besides the building itself, the space in which it was placed had a role too. Both Baroque and Rococo buildings try to seize viewers' attention and to dominate their surroundings, whether on a small scale, such as San Carlo alle Quattro Fontane in Rome, or on a massive one, like the new façade of the Santiago de Compostela Cathedral, designed to tower over the city. A manifestation of power and authority on the grandest scale, Baroque urban planning and renewal was promoted by the church and the state alike. It was the first era since antiquity to experience mass migration into cities, and urban planners took idealistic measures to regulate them. The most notable early example was Domenico Fontana's restructuring of Rome's street plan for Pope Sixtus V. Architects had experimented with idealized city schemes since the early Renaissance, examples being Leon Battista Alberti (1404–1472), who planned a centralized model city with streets leading to a central piazza, or Filarete (Antonio di Pietro Aver(u)lino), who designed a round city named Sforzinda (1451–1456) based on parts of the human body, in the idea that a healthy city should reflect the physiognomy of its inhabitants. However, none of these idealistic cities was ever built. In fact, few such projects were put into practice in Europe, as new cities were prohibitively costly and existing urban areas, with their churches and palaces, could not be demolished. Only in the Americas, where architects often had a clean space to work with, were such cities possible, as in Lima (Peru) or Buenos Aires (Argentina). The earliest Baroque ideal city is Zamość, built north-east of Kraków (Poland) by the Italian architect Bernardo Morando (died 1600), a centralized town focusing on a square with radiating streets. Where entire cities could not be rebuilt, patrons and architects compensated by creating spacious and symmetrical squares, often with avenues radiating out at perpendicular angles and focusing on a fountain, statue or obelisk. A good example of this is the Place des Vosges (formerly Place Royale), commissioned by Henry IV, probably after plans by Baptiste du Cerceau (1545–1590). The most famous Baroque space in the world is Gianlorenzo Bernini's St. Peter's Square in Rome. As with ideal urban planning, Baroque gardens are characterized by straight and radiating avenues and geometric spaces.
Rococo
The name Rococo derives from the French word rocaille, which describes shell-covered rock-work, and coquille, meaning seashell. Rococo architecture is ornate and fluid, accentuating asymmetry, with an abundant use of curves, scrolls, gilding and ornaments. The style enjoyed great popularity with the ruling elite of Europe during the first half of the 18th century. It developed in France out of a new fashion in interior decoration and spread across Europe. Domestic Rococo abandoned Baroque's high moral tone, its weighty allegories and its obsession with legitimacy: its abstract forms and carefree, pastoral subjects related more to notions of refuge and joy, creating a more forgiving atmosphere for polite conversation. Rococo rooms are typically smaller than their Baroque counterparts, reflecting a movement towards domestic intimacy. Even the grander salons used for entertaining were more modest in scale, as social events involved smaller numbers of guests.
Characteristic of the style were rocaille motifs derived from shells, icicles and rock-work or grotto decoration. Rocaille arabesques were mostly abstract forms, laid out symmetrically over and around architectural frames. A favourite motif was the scallop shell, whose top scrolls echoed the basic S and C framework scrolls of the arabesques and whose sinuous ridges echoed the general curvilinearity of the room decoration. While few Rococo exteriors were built in France, a number of Rococo churches are found in southern Germany. Other widely used motifs in the decorative arts and interior architecture include acanthus and other leaves, birds, bouquets of flowers, fruits, elements associated with love (putti, quivers with arrows and arrowed hearts), trophies of arms, medallions with faces, an abundance of flowers, and Far Eastern elements (pagodas, dragons, monkeys, bizarre flowers, bamboo, and Chinese figures). Pastel colours were widely used, like light blue, mint green or pink. Rococo designers also loved mirrors (the more the better), an example being the Hall of Mirrors of the Amalienburg (Munich, Germany), by Johann Baptist Zimmermann. Mirrors were also commonly placed above fireplaces.
Exoticism
The interactions between East and West brought on by colonialist exploration had an impact on aesthetics. Because they were rare and new to Westerners, some non-European styles were greatly appreciated during the 17th, 18th and 19th centuries. Some nobles and kings built little structures inspired by these styles in the gardens of their palaces, or decorated a handful of palace rooms in this fashion. Because they did not fully understand the origins and principles governing these exotic aesthetics, Europeans sometimes created hybrids of the style they tried to replicate and the styles fashionable at the time. A good example of this is chinoiserie, a Western decorative style popular during the 18th century that was heavily inspired by Chinese arts, but also by Rococo. Because travel to China and other Far Eastern countries was difficult at the time, Asia remained mysterious to most Westerners, and European imaginations were fuelled by perceptions of it as a place of wealth and luxury; consequently, patrons from emperors to merchants vied with each other in adorning their living quarters with Asian goods and decorating them in Asian styles. Where Asian objects were hard to obtain, European craftsmen and painters stepped up to fill the demand, creating a blend of Rococo forms and Asian figures, motifs and techniques.
Chinese art was not the only foreign style with which Europeans experimented. Another was the Islamic style. Examples include the Garden Mosque of the Schwetzingen Palace in Germany (the only surviving example of an 18th-century European garden mosque), the Royal Pavilion in Brighton, and the Moorish Revival buildings of the 19th and early 20th centuries, with horseshoe arches and brick patterns. When it came to the Orient, Europeans also had an interest in the culture of Ancient Egypt. Compared to other cases of exoticism, the fascination with the land of the pharaohs is the oldest, since Ancient Greeks and Romans had this interest during Antiquity. The main periods when Egyptian Revival monuments were erected were the early 19th century, with Napoleon's military campaigns in Egypt, and the 1920s, when the discovery of the Tomb of Tutankhamun in 1922 caused an Egyptomania that led Art Deco to sometimes use motifs inspired by Ancient Egypt. During the late 18th and early 19th century, Neoclassicism sometimes mixed Greco-Roman elements with Egyptian ones. Because of its association with pharaohs, death and eternity, multiple Egyptian Revival tombs and cemetery entry gates were built in this style. Besides mortuary structures, other buildings in this style include certain synagogues, like the Karlsruhe Synagogue, and some Empire monuments built during the reign of Napoleon, such as the Egyptian portico of the Hôtel Beauharnais or the Fontaine du Fellah. During the 1920s and 1930s, Pre-Columbian Mesoamerican architecture was of great interest to some American architects, particularly what the Mayans built. Several of Frank Lloyd Wright's California houses were erected in a Mayan Revival style, while other architects combined Mayan motifs with Art Deco ones.
Neoclassicism
Neoclassical architecture focused on Ancient Greek and Roman details, plain, white walls and grandeur of scale. Compared to the previous styles, Baroque and Rococo, Neoclassical exteriors tended to be more minimalist, featuring straight and angular lines, but being still ornamented. The style's clean lines and sense of balance and proportion worked well for grand buildings (such as the Panthéon in Paris) and for smaller structures alike (such as the Petit Trianon).
Excavations during the 18th century at Pompeii and Herculaneum, which had both been buried under volcanic ash during the 79 AD eruption of Mount Vesuvius, inspired a return to order and rationality, largely thanks to the writings of Johann Joachim Winckelmann. In the mid-18th century, antiquity was upheld as a standard for architecture as never before. Neoclassicism was a fundamental investigation of the very bases of architectural form and meaning. In the 1750s, an alliance between archaeological exploration and architectural theory began, which would continue into the 19th century. Marc-Antoine Laugier wrote in 1753 that 'Architecture owes all that is perfect to the Greeks'.
The style was adopted by progressive circles in other countries such as Sweden and Russia. Federal-style architecture is the name for the classicizing architecture built in North America between c. 1780 and 1830, and particularly from 1785 to 1815. This style shares its name with its era, the Federal Period. The term is also used in association with furniture design in the United States of the same time period. The style broadly corresponds to the middle-class classicism of the Biedermeier style in the German-speaking lands, the Regency style in Britain and the French Empire style. In Central and Eastern Europe, the style is usually referred to as Classicism, while the newer Revival styles of the 19th century until today are called neoclassical.
Étienne-Louis Boullée (1728–1799) was a visionary architect of the period. His utopian projects included a monument to Isaac Newton (1784) in the form of an immense dome, with an oculus allowing light to enter and giving the impression of a sky full of stars. His project for an enlargement of the Royal Library (1785) was even more dramatic, with a gigantic arch sheltering the collection of books. While none of his projects were ever built, the images were widely published and inspired architects of the period to look beyond traditional forms.
As in the Renaissance and Baroque periods, urban theories about what a good city should be also appeared during the Neoclassical era. Enlightenment writers of the 18th century decried the problems of Paris at that time, the greatest being its mass of narrow medieval streets crowded with modest houses. Voltaire openly criticized the failure of the French Royal administration to initiate public works, improve the quality of life in towns, and stimulate the economy. 'It is time for those who rule the most opulent capital in Europe to make it the most comfortable and the most magnificent of cities. There must be public markets, fountains which actually provide water and regular pavements. The narrow and infected streets must be widened, monuments that cannot be seen must be revealed and new ones built for all to see', Voltaire insisted in a polemical essay on 'The Embellishments of Paris' in 1749. In the same year, Étienne La Font de Saint-Yenne criticized how Louis XIV's great east façade of the Louvre was all but hidden from view by a dense quarter of modest houses. Voltaire also said that in order to transform Paris into a city that could rival ancient Rome, it was necessary to demolish more than it was to build. 'Our towns are still what they were, a mass of houses crowded together haphazardly without system, planning or design', Marc-Antoine Laugier complained in 1753. Writing a decade later, Pierre Patte promoted urban reform in quest of health, social order, and security, launching at the same time a medical and organic metaphor which compared the operations of urban design to those of the surgeon. With its bad air and lack of fresh water, Patte asserted, the city's current state was pathological, and he called for fountains to be placed at principal intersections and markets. He recommended squares to promote the circulation of air, and for the same reason urged that the houses on the city's bridges be demolished. He also criticized the location of hospitals next to markets and protested continued burials in overcrowded city churchyards. Besides cities, new ideas about what a garden should be appeared in 18th-century England, giving rise to the English landscape garden (also known as the jardin à l'anglaise), characterized by an idealized view of nature and the use of Greco-Roman or Gothic ruins, bridges, and other picturesque architecture, designed to recreate an idyllic pastoral landscape. It was the opposite of the symmetrical and geometrically planned Baroque garden (the jardin à la française).
Revivalism and Eclecticism
The 19th century was dominated by a wide variety of stylistic revivals, variations, and interpretations. Revivalism in architecture is the use of visual styles that consciously echo the style of a previous architectural era. Modern-day Revival styles can be summarized within New Classical architecture, and sometimes under the umbrella term traditional architecture.
The idea that architecture might represent the glory of kingdoms can be traced to the dawn of civilisation, but the notion that architecture can bear the stamp of national character is a modern idea, one that appeared in 18th-century historical thinking and was given political currency in the wake of the French Revolution. As the map of Europe was repeatedly redrawn, architecture was used to grant the aura of a glorious past to even the most recent nations. In addition to the credo of universal Classicism, two new, and often contradictory, attitudes towards historical styles existed in the early 19th century. Pluralism promoted the simultaneous use of an expanded range of styles, while Revivalism held that a single historical model was appropriate for modern architecture. Associations between styles and building types appeared, for example: Egyptian for prisons, Gothic for churches, or Renaissance Revival for banks and exchanges. These choices were the result of other associations: the pharaohs with death and eternity, the Middle Ages with Christianity, or the Medici family with the rise of banking and modern commerce.
Whether their choice was Classical, medieval, or Renaissance, all revivalists shared the strategy of advocating a particular style based on national history, one of the great enterprises of historians in the early 19th century. A single historic period was claimed to be the only one capable of providing models grounded in national traditions, institutions, or values. Issues of style became matters of state.
The most well-known Revivalist style is Gothic Revival, which appeared in the mid-18th century in the houses of a number of wealthy antiquarians in England, a notable example being Strawberry Hill House. German Romantic writers and architects were the first to promote Gothic as a powerful expression of national character, and in turn used it as a symbol of national identity in territories still divided. Johann Gottfried Herder posed the question 'Why should we always imitate foreigners, as if we were Greeks or Romans?'.
In art and architecture history, the term Orientalism refers to the works of Western artists who specialized in Oriental subjects, produced from their travels in Western Asia during the 19th century. At that time, such artists and scholars were described as Orientalists, especially in France.
In India, during the British Raj, a new style, Indo-Saracenic (also known as Indo-Gothic, Mughal-Gothic, Neo-Mughal, or Hindoo style), was developed, incorporating varying degrees of Indian elements into the Western European style. The Churches and convents of Goa are another example of the blending of traditional Indian styles with western European architectural styles. Most Indo-Saracenic public buildings were constructed between 1858 and 1947, with construction peaking around 1880. The style has been described as part of a 19th-century movement by which the British sought to project themselves as the natural successors of the Mughals. Indo-Saracenic buildings were often built for modern functions such as transport stations, government offices, and law courts, and the style is most evident in British power centres in the subcontinent like Mumbai, Chennai, and Kolkata.
Beaux-Arts
The Beaux-Arts style takes its name from the École des Beaux-Arts in Paris, where it developed and where many of the main exponents of the style studied. Because international students trained there, buildings of this type from the second half of the 19th century and the early 20th century can be found all over the world, designed by architects like Charles Girault, Thomas Hastings, Ion D. Berindey or Petre Antonescu. Today, from Bucharest to Buenos Aires and from San Francisco to Brussels, the Beaux-Arts style survives in opera houses, civic structures, university campuses, commemorative monuments, luxury hotels and townhouses. The style was heavily influenced by the Paris Opéra House (1860–1875), designed by Charles Garnier, the masterpiece of the 19th-century renovation of Paris, dominating its entire neighbourhood and continuing to astonish visitors with its majestic staircase and reception halls. The Opéra was an aesthetic and societal turning point in French architecture. Here, Garnier showed what he called a style actuel, one influenced by the spirit of the time, or Zeitgeist, and reflecting the designer's personal taste.
Beaux-Arts façades were usually imbricated, or layered with overlapping classical elements or sculpture. Façades often consisted of a high rusticated basement level, above which rose a main level of several storeys, usually decorated with pilasters or columns, topped by an attic level and/or the roof. Beaux-Arts architects were often commissioned to design monumental civic buildings symbolic of the self-confidence of the town or city. The style aimed for a Baroque opulence through lavishly decorated monumental structures that evoked Louis XIV's Versailles. However, it was not just a revival of the Baroque, being more of a synthesis of Classicist styles, such as Renaissance, Baroque, Rococo and Neoclassicism.
Industry and new technologies
Because of the Industrial Revolution and the new technologies it brought, new types of buildings appeared. By 1850, iron was present in daily life at every scale, from mass-produced decorative architectural details and objects in apartment and commercial buildings to train sheds. A well-known 19th-century glass and iron building is the Crystal Palace in Hyde Park, London, built in 1851 to house the Great Exhibition; its appearance was similar to a greenhouse, and its scale was daunting.
The marketplace pioneered novel uses of iron and glass to create an architecture of display and consumption that made the temporary spectacle of the world fairs a permanent feature of modern urban life. Just a year after the Crystal Palace was dismantled, Aristide Boucicaut opened what historians of mass consumption have labelled the first department store, Le Bon Marché in Paris. As the store expanded, its exterior took on the form of a public monument, highly decorated with French Renaissance Revival motifs. The entrances advanced subtly onto the pavement, hoping to captivate the attention of potential customers. Between 1872 and 1874, the interior was remodelled by Louis-Charles Boileau, in collaboration with the young engineering firm of Gustave Eiffel. In place of the open courtyard once required to admit daylight into the interior, the new building was organized around three skylit atria.
Art Nouveau
Popular in many countries from the early 1890s until the outbreak of World War I in 1914, Art Nouveau was an influential although relatively brief art and design movement and philosophy. Despite being a short-lived fashion, it paved the way for the modern architecture of the 20th century. Between 1870 and 1900, a crisis of historicism occurred, during which the historicist culture was critiqued, one of the voices being Friedrich Nietzsche in 1874, who diagnosed 'a malignant historical fervour' as one of the crippling symptoms of a modern culture burdened by archaeological study and faith in the laws of historical progression.
Focusing on natural forms, asymmetry, sinuous lines and whiplash curves, architects and designers aimed to escape the excessively ornamental styles and historical replications popular during the 19th century. However, the style was not completely new, since Art Nouveau artists drew on a huge range of influences, particularly Beaux-Arts architecture, the Arts and Crafts movement, aestheticism and Japanese art. Buildings used materials associated in the 19th century with modernity, such as cast iron and glass. A good example of this is the Paris Métro entrance at Porte Dauphine by Hector Guimard (1900), whose cast-iron and glass canopy is as much sculpture as it is architecture. In Paris, Art Nouveau was even called Le Style Métro by some. The interest in stylized organic forms of ornamentation originated in the mid-19th century, when it was promoted in The Grammar of Ornament (1856), a pattern book by the British architect Owen Jones (1809–1874).
Whiplash curves and sinuous organic lines are its most familiar hallmarks, but the style cannot be reduced to them alone, since its forms are much more varied and complex. The movement displayed many national interpretations. Depending on where it manifested, it was inspired by Celtic art, Gothic Revival, Rococo Revival, and Baroque Revival. In Hungary, Romania and Poland, for example, Art Nouveau incorporated folkloric elements. This is especially true in Romania, where it facilitated the appearance of the Romanian Revival style, which draws inspiration from Brâncovenesc architecture and from traditional peasant houses and objects. The style also had different names depending on the country: in Britain it was known as the Modern Style, in the Netherlands as Nieuwe Kunst, in Germany and Austria as Jugendstil, in Italy as the Liberty style, in Romania as Arta 1900, and in Japan as Shiro-Uma. It would be wrong to credit any particular place as the only one where the movement appeared, since it seems to have arisen in multiple locations.
Modern
Rejecting ornament and embracing minimalism and modern materials, Modernist architecture appeared across the world in the early 20th century. Art Nouveau paved the way for it, promoting the idea of non-historicist styles. It developed initially in Europe, focusing on functionalism and the avoidance of decoration. Modernism reached its peak during the 1930s and 1940s with the Bauhaus and the International Style, both characterised by asymmetry, flat roofs, large ribbon windows, metal, glass, white rendering and open-plan interiors.
Art Deco
Art Deco, named retrospectively after an exhibition held in Paris in 1925, originated in France as a luxurious, highly decorated style. It then spread quickly throughout the world, most dramatically in the United States, becoming more streamlined and modernistic through the 1930s. The style was pervasive and popular, finding its way into the design of everything from jewellery to film sets, from the interiors of ordinary homes to cinemas, luxury streamliners and hotels. Its exuberance and fantasy captured the spirit of the 'roaring 20s' and provided an escape from the realities of the Great Depression during the 1930s.
Although it ended with the start of World War II, its appeal has endured. Despite being an example of modern architecture, the style drew on ancient Egyptian, Greek, Roman, African, Aztec and Japanese influences, but also on Futurism, Cubism and the Bauhaus. Bold colours were often applied to low-reliefs. Predominant materials include chrome plating, brass, polished steel and aluminium, inlaid wood, stone and stained glass.
International Style
The International Style emerged in Europe after World War I, influenced by recent movements including De Stijl and Streamline Moderne, and had a close relationship to the Bauhaus. The antithesis of nearly every architectural movement that preceded it, the International Style eliminated extraneous ornament and used modern industrial materials such as steel, glass, reinforced concrete and chrome plating. Rectilinear, flat-roofed, asymmetrical and white, it became a symbol of modernity across the world. It seemed to offer a crisp, clean, rational future after the horrors of war. Named by the architect Philip Johnson and the historian Henry-Russell Hitchcock (1903–1987) in 1932, the movement was epitomized by Charles-Édouard Jeanneret, known as Le Corbusier, and was clearly expressed in his statement that 'a house is a machine for living in'.
Brutalist
Based on ideals of social equality, Brutalism was inspired by Le Corbusier's 1947–1952 Unité d'habitation in Marseilles. The term seems to have been originally coined by the Swedish architect Hans Asplund (1921–1994), but Le Corbusier's use of the description béton brut, meaning raw concrete, for his choice of material for the Unité d'habitation was particularly influential. The style flourished from the 1950s to the mid-1970s, mainly using concrete, which, although not new in itself, was unconventional when left exposed on façades. Before Brutalism, concrete was usually hidden beneath other materials.
Postmodern
Not one definable style, Postmodernism is an eclectic mix of approaches that appeared in the late 20th century in reaction against Modernism, which was increasingly perceived as monotonous and conservative. As with many movements, a complete antithesis to Modernism developed. In 1966, the architect Robert Venturi (1925–2018) published his book Complexity and Contradiction in Architecture, which praised the originality and creativity of the Mannerist and Baroque architecture of Rome, and encouraged more ambiguity and complexity in contemporary design. Complaining about the austerity and tedium of so many smooth steel and glass Modernist buildings, and in deliberate denunciation of the famous Modernist dictum 'Less is more', Venturi stated 'Less is a bore'. His theories became a major influence on the development of Postmodernism.
Deconstructivist
Deconstructivism in architecture is a development of postmodern architecture that began in the late 1980s. It is characterized by ideas of fragmentation, non-linear processes of design, an interest in manipulating ideas of a structure's surface or skin, and apparent non-Euclidean geometry (i.e., non-rectilinear shapes), which serve to distort and dislocate some of the elements of architecture, such as structure and envelope. The finished visual appearance of buildings that exhibit the many deconstructivist "styles" is characterised by a stimulating unpredictability and a controlled chaos.
Important events in the history of the Deconstructivist movement include the 1982 Parc de la Villette architectural design competition (especially the entry by the French philosopher Jacques Derrida and the American architect Peter Eisenman, as well as Bernard Tschumi's winning entry), the Museum of Modern Art's 1988 Deconstructivist Architecture exhibition in New York, organized by Philip Johnson and Mark Wigley, and the 1989 opening of the Wexner Center for the Arts in Columbus, designed by Peter Eisenman. The New York exhibition featured works by Frank Gehry, Daniel Libeskind, Rem Koolhaas, Peter Eisenman, Zaha Hadid, Coop Himmelblau, and Bernard Tschumi. Since the exhibition, many of the architects who were associated with Deconstructivism have distanced themselves from the term. Nonetheless, the term has stuck and has now, in fact, come to embrace a general trend within contemporary architecture.
Contemporary architecture
See also
History of art
Outline of architecture
Timeline of architecture
Timeline of architectural styles
History of architectural engineering
Notes
References
Further reading
Fletcher, Banister; Cruickshank, Dan. Sir Banister Fletcher's A History of Architecture. Architectural Press, 20th edition, 1996.
External links
The Society of Architectural Historians web site
The Society of Architectural Historians of Great Britain web site
The Society of Architectural Historians, Australia and New Zealand web site
European Architectural History Network web site
Western Architecture Timeline
Extensive collection of source documents in the history, theory and criticism of 20th-century architecture