id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
56,633,182 | https://en.wikipedia.org/wiki/C18orf63 | Chromosome 18 open reading frame 63 is a protein which in humans is encoded by the C18orf63 gene. This protein is not yet well understood by the scientific community. Research has been conducted suggesting that C18orf63 could be a potential biomarker for early stage pancreatic cancer and breast cancer.
Gene
This gene is located at band 22, sub-band 3, on the long arm of chromosome 18. Its mRNA is 5,065 base pairs long, and the gene spans from 74,315,875 to 74,359,187 bp on chromosome 18. The gene has a total of 14 exons. C18orf63 is also known by the alias DKFZP78G0119. No isoforms exist for this gene.
Expression
C18orf63 has high expression in the testis. The gene shows low expression in the kidneys, liver, lung, and pelvis. There is no phenotype associated with this gene.
Promoter
The promoter region for C18orf63 is 1163 bp long, starting at 74,314,813 bp and ending at 74,315,975 bp. The promoter ID is GXP_4417391. The presence of multiple Y-box binding transcription factor and SRY transcription factor binding sites suggests that C18orf63 is involved in male sex determination.
Protein
The C18orf63 protein is composed of 685 amino acids and has a molecular weight of 77,230.50 Da, with a predicted isoelectric point of 9.83. No isoforms exist for this protein. Compared with the average protein, it is rich in glutamine, isoleucine, lysine, and serine, but poor in aspartic acid and glycine.
Structure
The predicted secondary structure for this protein contains a number of beta turns, beta strands, and alpha helices: 48.6% of the protein is expected to form alpha helices, and 28.6% of the structure is expected to be composed of beta strands.
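Figures like the molecular weight, isoelectric point, amino-acid composition, and secondary-structure fractions above are typically computed directly from the primary sequence. Below is a minimal sketch using Biopython's ProtParam module, assuming Biopython is installed; the short sequence is a hypothetical placeholder, not the real C18orf63 sequence.

```python
# Sketch: sequence-derived protein properties with Biopython.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MQIKSLLKQSERIKQNDSIQKV"          # hypothetical placeholder fragment
pa = ProteinAnalysis(seq)

print(pa.molecular_weight())            # mass in daltons
print(pa.isoelectric_point())           # predicted isoelectric point
print(pa.get_amino_acids_percent())     # composition (e.g. Gln, Ile, Lys, Ser)
helix, turn, sheet = pa.secondary_structure_fraction()
print(helix, turn, sheet)               # crude secondary-structure fractions
```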
Domains and Motifs
The protein contains one domain of unknown function, DUF4709, spanning from the 7th to the 280th amino acid. Predicted motifs include an N-terminal motif, an RxxL motif, and a KEN box motif, all of which signal for protein degradation. Also predicted are a Wxxx motif, which facilitates entrance of PTS1 cargo proteins into the organellar lumen, and an RVxPx motif, which allows protein transport from the trans-Golgi network to the plasma membrane of the cilia. There is also a bipartite nuclear localization signal at the end of the protein sequence. No transmembrane domain is present, indicating that C18orf63 is not a transmembrane protein.
Post-Translational Modifications
Post-translational modifications the protein is predicted to undergo include SUMOylation, PKC and CK2 phosphorylation, N-glycosylation, amidation, and cleavage. There are six PKC phosphorylation sites, two CK2 phosphorylation sites, two SUMOylation sites, and two N-glycosylation sites. No signal peptide is present in the sequence.
Subcellular Location
Due to the nuclear localization signal at the end of the protein sequence, C18orf63 is predicted to be nuclear. C18orf63 has also been predicted to be targeted to the mitochondria in addition to the nucleus.
Homology
Orthologs
Orthologs have been found in most eukaryotes, with the exception of the class Amphibia. No human paralogs of C18orf63 exist. The most distant detectable homolog is in Mizuhopecten yessoensis, which shares 37% identity with the human protein sequence. The domain of unknown function was the only homologous domain present in the protein sequence, and it was found to be highly conserved in all orthologs.
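Percent-identity figures such as the 37% above come from pairwise sequence alignments. As a minimal sketch, the function below computes identity for a pair of sequences that have already been aligned (gaps written as "-"); real ortholog searches use dedicated tools such as BLAST, and the fragments shown are hypothetical.

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Identity over alignment length for two pre-aligned sequences."""
    assert len(aligned_a) == len(aligned_b), "sequences must be aligned"
    matches = sum(1 for x, y in zip(aligned_a, aligned_b)
                  if x == y and x != "-")
    return 100.0 * matches / len(aligned_a)

# Toy aligned fragments (hypothetical, not real C18orf63 data):
print(percent_identity("MKQ-LSERI", "MKQALTER-"))  # ~66.7
```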
Rate of Evolution
C18orf63 is a moderately slowly evolving protein: it evolves faster than cytochrome c but more slowly than beta globin.
Interacting proteins
Transcription factors of interest predicted to bind to the regulatory sequence include p53 tumor suppressors, SRY testis determining factors, Y-box binding transcription factors, and glucocorticoid responsive elements. The JUN protein was found to interact with C18orf63 through anti-bait co-immunoprecipitation. The JUN protein binds to the USP28 promoter in colorectal cancer cells and is involved in the activation of these cancer cells.
Clinical significance
Mutations
A variety of missense mutations in this gene occur in the human population. In the regulatory sequence, missense mutations occur at two transcription factor binding sites; the transcription factors affected are glucocorticoid responsive elements and E2F-myc cell cycle regulators. There are eleven common mutations that affect the protein sequence itself. None of these mutations affects predicted post-translational modifications of the protein sequence.
Disease association
C18orf63 has been associated with personality disorders, obesity, and type two diabetes through a genome-wide association study. Research has not yet shown whether C18orf63 plays a direct role in any of these diseases.
References
Chromosomes
Proteins | C18orf63 | Chemistry | 1,133 |
1,659,660 | https://en.wikipedia.org/wiki/And%20yet%20it%20moves | "And yet it moves" or "Although it does move" ( or ) is a phrase attributed to the Italian mathematician, physicist, and philosopher Galileo Galilei (1564–1642) in 1633 after being forced to recant his claims that the Earth moves around the Sun, rather than the converse. In this context, the implication of the phrase is: despite his recantation, the Inquisition's proclamations to the contrary, or any other conviction or doctrine of men, the Earth does, in fact, move (around the Sun, and not vice versa).
History
According to Stephen Hawking, some historians believe this episode might have happened upon Galileo's transfer from house arrest under the watch of Archbishop Ascanio Piccolomini to "another home, in the hills above Florence". This other home was also his own, the Villa Il Gioiello, in Arcetri.
The earliest biography of Galileo, written by his disciple Vincenzo Viviani in 1655–1656, does not mention this phrase, and records of his trial do not cite it. Some authors say it would have been imprudent for Galileo to have said such a thing before the Inquisition.
The event was first reported in English print in 1757 by Giuseppe Baretti in his book The Italian Library.
The account was given wider circulation when it appeared in Querelles Littéraires in 1761.
In 1911, the words E pur si muove were found on a painting which had just been acquired by an art collector, Jules van Belle, of Roeselare, Belgium. This painting is dated 1643 or 1645 (the last digit is partially obscured), within a year or two of Galileo's death. The signature is unclear, but van Belle attributed it to the seventeenth-century Spanish painter Bartolomé Esteban Murillo. The painting would seem to show that some variant of the Eppur si muove anecdote was in circulation immediately after Galileo's death, when many who had known him were still alive to attest to it, and that it had been circulating for over a century before it was published. However, this painting, whose whereabouts are currently unknown, was discovered to be nearly identical to one painted in 1837 by Eugène van Maldeghem, and, judging by the style, many art experts doubt that the van Belle painting was painted by Murillo, or even that it was painted before the nineteenth century.
United States Supreme Court Justice Antonin Scalia was said to give an "E pur si muove" award to district court judges whose opinions were overturned by appellate courts but later vindicated by the Supreme Court.
References
Italian words and phrases
Galileo Galilei
Copernican Revolution
Dissent
Galileo affair
Politics of science
Astronomical controversies
Quotations from science
17th-century neologisms
17th-century quotations | And yet it moves | Astronomy | 569 |
20,726,543 | https://en.wikipedia.org/wiki/KCNT2 | Potassium channel subfamily T, member 2, also known as KCNT2 is a human gene that encodes the KNa protein. KCNT2, also known as the Slick channel (sequence like an intermediate calcium channel) is an outwardly rectifying potassium channel activated by internal raises in sodium or chloride ions.
See also
SK channel
Voltage-gated potassium channel
References
Further reading
Ion channels | KCNT2 | Chemistry | 80 |
299,847 | https://en.wikipedia.org/wiki/Structural%20information%20theory | Structural information theory (SIT) is a theory about human perception and in particular about visual perceptual organization, which is a neuro-cognitive process. It has been applied to a wide range of research topics, mostly in visual form perception but also in, for instance, visual ergonomics, data visualization, and music perception.
SIT began as a quantitative model of visual pattern classification. Nowadays, it includes quantitative models of symmetry perception and amodal completion, and is theoretically sustained by a perceptually adequate formalization of visual regularity, a quantitative account of viewpoint dependencies, and a powerful form of neurocomputation. SIT has been argued to be the best defined and most successful extension of Gestalt ideas. It is the only Gestalt approach providing a formal calculus that generates plausible perceptual interpretations.
The simplicity principle
A simplest code is a code with minimum information load, that is, a code that enables a reconstruction of the stimulus using a minimum number of descriptive parameters. Such a code is obtained by capturing a maximum amount of visual regularity and yields a hierarchical organization of the stimulus in terms of wholes and parts.
The assumption that the visual system prefers simplest interpretations is called the simplicity principle. Historically, the simplicity principle is an information-theoretical translation of the Gestalt law of Prägnanz, which was inspired by the natural tendency of physical systems to settle into relatively stable states defined by a minimum of free-energy. Furthermore, just as the later-proposed minimum description length principle in algorithmic information theory (AIT), a.k.a. the theory of Kolmogorov complexity, it can be seen as a formalization of Occam's Razor, according to which the simplest interpretation of data is the best one.
Simplicity versus likelihood
Crucial to the comparison of the simplicity principle with the likelihood principle is the distinction between, and integration of, viewpoint-independent and viewpoint-dependent factors in vision, as proposed in SIT's empirically successful model of amodal completion. In the Bayesian framework, these factors correspond to prior probabilities and conditional probabilities, respectively. In SIT's model, however, both factors are quantified in terms of complexities, that is, complexities of objects and of their spatial relationships, respectively.
Modeling principles
In SIT's formal coding model, candidate interpretations of a stimulus are represented by symbol strings, in which identical symbols refer to identical perceptual primitives (e.g., blobs or edges). Every substring of such a string represents a spatially contiguous part of an interpretation, so that the entire string can be read as a reconstruction recipe for the interpretation and, thereby, for the stimulus. These strings then are encoded (i.e., they are searched for visual regularities) to find the interpretation with the simplest code.
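As a toy illustration of this idea (a sketch, not SIT's actual coding rules or load measure), the function below compares a literal code against simple iteration and symmetry codes for a symbol string, taking the number of primitive symbols remaining in a code as its information load.

```python
def simplest_code(s: str):
    """Toy SIT-style encoder: return the candidate code with the lowest load.
    Load here is simply the number of primitive symbols left in the code."""
    candidates = [(s, len(s))]                      # literal code
    for n in range(1, len(s) // 2 + 1):             # iteration: k*(unit)
        k, r = divmod(len(s), n)
        if r == 0 and s[:n] * k == s:
            candidates.append((f"{k}*({s[:n]})", n))
            break                                   # shortest unit wins
    if len(s) > 1 and s == s[::-1]:                 # symmetry: S[(half)]
        half = s[:(len(s) + 1) // 2]
        candidates.append((f"S[({half})]", len(half)))
    return min(candidates, key=lambda c: c[1])

print(simplest_code("ababab"))   # ('3*(ab)', 2) -- iteration captured
print(simplest_code("abcba"))    # ('S[(abc)]', 3) -- symmetry captured
```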
This encoding is performed by way of symbol manipulation, which, in psychology, has led to criticism of the sort "SIT assumes that the brain performs symbol manipulation". Such statements, however, fall in the same category as statements such as "physics assumes that nature applies formulas such as Einstein's E = mc² or Newton's F = ma" and "DST models assume that dynamic systems apply differential equations".
Visual regularity
To obtain simplest codes, SIT applies coding rules that capture the kinds of regularity called iteration, symmetry, and alternation. These have been shown to be the only regularities that satisfy the formal criteria of (a) being holographic regularities that (b) allow for hierarchically transparent codes.
A crucial difference with respect to the traditionally considered transformational formalization of visual regularity is that, holographically, mirror symmetry is composed of many relationships between symmetry pairs rather than one relationship between symmetry halves. Whereas the transformational characterization may be suited better for object recognition, the holographic characterization seems more consistent with the buildup of mental representations in object perception.
The perceptual relevance of the criteria of holography and transparency has been verified in the holographic approach to visual regularity. This approach also explains that the detectability of mirror symmetries and Glass patterns in the presence of noise follows a psychophysical law that improves on Weber's law.
See also
Neural processing for individual categories of objects
Principles of grouping
Theory of indispensable attributes
Simplicity theory
References
Perception
Information theory
Cognitive neuroscience | Structural information theory | Mathematics,Technology,Engineering | 900 |
33,650,766 | https://en.wikipedia.org/wiki/George%20Tchobanoglous | George Tchobanoglous (born May 24, 1935) is an American civil and environmental engineer, writer and professor.
Biography
George Tchobanoglous was born in the United States to Greek immigrant parents. He received a BS in civil engineering from the University of the Pacific, an MS in sanitary engineering from the University of California, Berkeley, and a PhD in environmental engineering from Stanford University. Rolf Eliassen was his PhD adviser at Stanford.
He joined the faculty of the University of California, Davis, in 1970 and remained there for the rest of his professional career, teaching courses on water and wastewater treatment and solid waste management. He has said "the University has basically been my life." He is now a professor emeritus in the university's Department of Civil and Environmental Engineering.
He is a former president of the Association of Environmental Engineering and Science Professors.
Tchobanoglous and his wife, Rosemary Ash Tchobanoglous, are the parents of three daughters.
Research
Tchobanoglous' principal research interests are in the general areas of wastewater treatment, water reuse, and solid waste management.
In the 1970s he studied the use of constructed wetlands for wastewater treatment. One application of his findings was in his assistance to the city of San Diego in establishing an aquaculture facility that remained in use for more than 20 years. For the California Department of Transportation (Caltrans) he directed the development of a series of four regional guidance manuals on the restoration, rehabilitation, and creation of salt marshes.
Tchobanoglous' investigations of filtration technologies for wastewater treatment led to California state approval of five alternative technologies in addition to the two conventional technologies that had been approved as of the early 1970s.
His work in the use of ultraviolet radiation for wastewater disinfection began in the early 1990s, when he investigated the potential to use UV to disinfect wastewater in order to reclaim it for reuse. Guidelines for UV disinfection that he helped to draft in 1993 became "the standard" U.S. resource on this topic and helped foster the acceptance of UV disinfection as a technology for water reuse.
In the 2000s, Tchobanoglous focused his attention on decentralized wastewater management, delivering numerous speeches on the challenge of providing effective systems to collect, treat, and reuse or disperse wastewater produced in locations where it is not practicable to provide sanitary sewers and centralized wastewater treatment.
Students
Tchobanoglous has advised many students who have gone on to become thought leaders in their own right, including:
Harold Leverenz, (PhD 2008) - currently a researcher at UC Davis
David Austin, (MS 1996) - currently with Jacobs Engineering Group
Dave Maciolek, (MS 1995) - currently with Aqua Nova Engineering
Publications
Tchobanoglous is the author or coauthor of over 600 publications, including 27 textbooks and 8 engineering reference books. His textbooks are in use in more than 225 U.S. educational institutions.
He also provided editorial consulting for the book series Water Resources and Environmental Engineering, from McGraw-Hill and provides national and international consulting for governments and private companies.
Presentations
He has given more than 625 technical presentations, more than 450 of them as an invited or keynote speaker, in the United States and abroad, including in Africa, Asia, Europe, the Middle East, and South America. During the last 10 years, most of these presentations have been invited lectures or keynote addresses.
Awards
Tchobanoglous received the Athalie Richardson Irvine Clarke Prize from National Water Research Institute in 2003. In 2004, he was awarded the Distinguished Service Award for Research and Education in Integrated Waste Management from the Waste-to-energy Research and Technology Council and was inducted into the National Academy of Engineering. In 2005, he was awarded an honorary doctorate by the Colorado School of Mines. In 2006, he was the Distinguished Lecturer for the Department of Civil, Architectural and Environmental Engineering, University of Texas, Austin, Texas. In 2007, he received the Frederick George Pohland Medal awarded by American Academy of Environmental Engineers and the Association of Environmental Engineering and Science Professors. In 2017, he received Honorary Doctor of Engineering Degrees from the Technical University of Crete, Greece and Aristotle University of Thessaloniki, Greece. In 2019, he was elected Corresponding Member of the Academy of Athens.
George and Rosemary Tchobanoglous Fellowship
In 1999, the University of California, Davis, established an endowed fellowship for graduate students in environmental engineering, named for George and Rosemary Tchobanoglous. The fellowship is awarded to students working toward master's degrees without an expectation of candidacy for a doctorate. Tchobanoglous has supported the fellowship through donations. The focus on master's degree students is based on Tchobanoglous' concern that these students often lack access to the financial resources that are available to PhD students who can obtain funding by writing their own research proposals.
Selected books
Sole author
Wastewater management. Gale Research Co., 1976
Solid wastes. McGraw-Hill, 1977
Solutions manual to accompany Metcalf & Eddy, Inc. McGraw-Hill, 1979
Wastewater engineering, treatment, disposal, reuse Solutions manual to accompany Metcalf & Eddy, Inc. McGraw-Hill, 1979
Wastewater engineering, treatment, disposal, reuse. McGraw-Hill, 1979
Water quality. Addison-Wesley, 1985
Wastewater Engineering. Mcgraw-Hill College, November 1990
Integrated solid waste management. McGraw-Hill, 1993
With others
Wastewater Engineering Treatment Disposal Reuse by George Tchobanoglous and Metcalf & Eddy. McGraw-Hill Companies, 1991
Wastewater Engineering by George Tchobanoglous and H. David Stensel. McGraw-Hill Science/Engineering/Math, 2002
Handbook of Solid Waste Management by George Tchobanoglous and Frank Kreith. McGraw-Hill Professional, 2002
Water Reuse by George Tchobanoglous, Metcalf & Eddy, Inc. an AECOM Company, Takashi Asano, Franklin L. Burton, Harold L. Leverenz and Ryujiro Tsuchihashi. McGraw-Hill Professional, 2007
References
American civil engineers
Engineering academics
Environmental scientists
Stanford University alumni
University of California, Berkeley alumni
University of California, Davis faculty
University of the Pacific (United States) alumni
1935 births
Living people
American writers of Greek descent
Members of the United States National Academy of Engineering
American male writers | George Tchobanoglous | Environmental_science | 1,287 |
16,484,868 | https://en.wikipedia.org/wiki/Monospecific%20antibody | Monospecific antibodies are antibodies whose specificity to antigens is singular (mono- + specific) in any of several ways: antibodies that all have affinity for the same antigen; antibodies that are specific to one antigen or one epitope; or antibodies specific to one type of cell or tissue. Monoclonal antibodies are monospecific, but monospecific antibodies may also be produced by other means than producing them from a common germ cell. Regarding antibodies, monospecific and monovalent overlap in meaning; both can indicate specificity to one antigen, one epitope, or one cell type (including one microorganism species). However, antibodies that are monospecific to a certain tissue, or all monospecific to the same tissue because clones, can be polyvalent in their epitope binding.
Production
Hybridoma cell
Monoclonal antibodies are typically made by fusing the spleen cells from a mouse that has been immunized with the desired antigen with myeloma cells. However, recent advances have allowed the use of rabbit B-cells.
PrEST
Another way of producing monospecific antibodies is by PrESTs. A PrEST (protein epitope signature tag) is a type of recombinantly produced human protein fragment. PrESTs are injected into an animal, e.g. a rabbit, which produces antibodies against the fragment. These antibodies are monospecific against the human protein.
Cautions
Recent research has led to the discovery that unstable hinged monospecific antibodies may engage in a process leading to a decrease in their apparent avidity/affinity. This process, termed Fab-arm exchange, has led to theories about the dissemination of viral infections in patients given monospecific IgG4 therapeutic antibodies. Evidence suggests that this process is linked to the dissemination of PML in patients given Tysabri for MS. Dosing outcomes remain unpredictable, and hinge mutations that may prevent Fab-arm exchange in vivo should be considered when designing therapeutic antibodies.
References
See also
Monoclonal antibodies
Antibodies
Immunology | Monospecific antibody | Biology | 438 |
8,335,967 | https://en.wikipedia.org/wiki/Urban%20wild | An urban wild is a remnant of a natural ecosystem found in the midst of an otherwise highly developed urban area.
One of the most expansive efforts to protect and foster urban wilds is the aptly titled "Urban Wilds program" conducted in Boston, which began in 1977 following a 1976 report by the Boston Planning & Development Agency (BPDA), formerly the Boston Redevelopment Authority (BRA).
Utility
Urban wilds, particularly those of several acres or more, are often intact ecological systems that can provide essential ecosystem functions such as filtering urban run-off, storing and slowing the flow of stormwater, ameliorating the warming effect of urban development, and generally benefiting local air quality.
Typically, urban wilds are home to native vegetation and animal life as well as some introduced species. Urban wilds are vital to species of migratory birds that have nested in a given area since before its urbanization.
Preservation
Without formal protection, urban wilds are vulnerable to development. However, achieving formal protection of a large urban wild can be difficult. Land tenure of a single ecological area can be complex, with multiple public and private entities owning adjacent properties.
Key strategies used in the preservation of urban wilds have included conservation restrictions that keep complex land tenure systems in place while protecting the entire landscape. Public/private partnerships have also been successful in protecting urban wilds.
The urban wilds prioritized by municipalities tend to be partial wetlands that perform a range of ecological services while contributing to the biological diversity of the region.
Passive parks
There is some discussion about whether natural areas that are not at an appropriate scale to perform significant ecosystem services should instead be categorized as passive parks as opposed to urban wilds. Smaller urban wilds are used for passive recreation and have less value to the city in terms of enhancing ecosystem function.
Notes
References
Urban planning
Parks
Ecology | Urban wild | Engineering,Biology | 380 |
564,661 | https://en.wikipedia.org/wiki/Video%20game%20producer | A video game producer is the top person in charge of overseeing development of a video game.
History
The earliest documented use of the term producer in games was by Trip Hawkins, who established the position when he founded Electronic Arts in 1982.
Sierra On-Line's 1982 computer game Time Zone may be the first to list credits for "Producer" and "Executive Producer". As of late 1983, Electronic Arts had five producers: a product marketer and two others from Hawkins' former employer Apple ("good at working with engineering people"), one former IBM salesman and executive recruiter, and one product marketer from Automated Simulations; the company popularized the use of the title in the industry. Hawkins' vision—influenced by his relationship with Jerry Moss—was that producers would manage artists and repertoire in the same way as in the music business, and Hawkins brought in record producers from A&M Records to help train those first producers. Activision made Brad Fregger their first producer in April 1983.
Although the term is an industry standard today, it was dismissed as "imitation Hollywood" by many game executives and press members at the time. Over its entire history, the role of the video game producer has been defined in a wide range of ways by different companies and different teams, and there are a variety of positions within the industry referred to as producer.
There are relatively few superstars of game production that parallel those in film, in part because top producers are usually employed by publishers who choose to play down publicizing their contributions. Unlike many of their counterparts in film or music, these producers do not run their own independent companies.
Types of producers
Most video and computer games are developed by third-party developers. In these cases, there may be external and internal producers. External producers may act as "executive producers" and are employed by the game's publisher. Internal producers work for the developer itself and have more of a hands-on role. Some game developers may have no internal producers, however, and may rely solely on the publisher's producer.
For an internal producer, associate producers tend to specialize in an area of expertise depending on the team they are producing for and their own skill background. These specializations include but are not limited to: programming, design, art, sound, and quality assurance. A normal producer is usually the project manager and is in charge of delivering the product to the publisher on time and on budget. An executive producer will be managing all of the products in the company and making sure that the games are on track to meet their goals and stay within the company's goals and direction.
For an external producer, their job responsibilities may focus mainly on overseeing several projects being worked on by a number of developers. While keeping updated on the progress of the games being developed externally, they inform the upper management of the publisher of the status of the pending projects and any problems they may be experiencing. If a publisher's producer is overseeing a game being developed internally, their role is more akin to that of an internal producer and will generally only work on one game or a few small games.
As games have grown larger and more expensive, line producers have become part of some teams. Based on filmmaking traditions, line producers focus on project scheduling and costing to ensure titles are completed on time and on budget.
Responsibilities
An internal producer is heavily involved in the development of, usually, a single game. Responsibilities for this position vary from company to company, but in general, the person in this position has the following duties:
Negotiating contracts, including licensing deals
Acting as a liaison between the development staff and the upper stakeholders (publisher or executive staff)
Developing and maintaining schedules and budgets
Overseeing creative (art and design) and technical development (game programming) of the game
Ensuring timely delivery of deliverables (such as milestones)
Scheduling timely quality assurance (testing)
Arranging for beta testing and focus groups, if applicable
Arranging for localization
Pitching game ideas to publishers
In short, the internal producer is ultimately responsible for timely delivery and final quality of the game.
For small games, the producer may interact directly with the programming and creative staff. For larger games, the producer will seek the assistance of the lead programmer, art lead, game designer and testing lead. While it is customary for the producer to meet with the entire development staff from time to time, for larger games, they will only meet with the leads on a regular basis to keep updated on the development status. In smaller studios, a producer may fill any slack in the production team by doing the odd job of writing the game manual or producing game assets.
For most games, the producer does not have a large role in game design but does have some influence on it. While not a game designer, the producer has to weave the wishes of the publisher or upper management into the design. They usually seek the assistance of the game designer in this effort, so the final game design is a result of the designer's effort combined with some influence from the producer.
Compensation
In general, video game producers earn the third most out of game development positions, behind business (management) and programmers.
According to an annual survey of salaries in the industry, producers earn an average of US$75,000 annually. A video game producer with less than 3 years of experience makes, on average, around $55,000 annually. A video game producer with more than 6 years of experience makes, on average, over $125,000 annually. The salaries of a video game producer will vary depending on the region and the studio.
Education
Most video game producers complete a bachelor's degree program in game design, computer science, digital media or business. Popular computer programming languages for video game development include C, C++, Assembly, C# and Java. Some common courses are communications, mathematics, accounting, art, digital modeling and animation.
Employers typically require three or more years of experience, since a producer has to have gone through the development cycle several times to really understand how unpredictable the business is. The most common path to becoming a video game producer begins by first working as a game tester, then moving up the quality assurance ladder, and eventually on to production. This is easier to accomplish if one stays with the same studio, reaping the benefits of having built relationships with the production department.
See also
List of video game producers
References
External links
Producer at Eurocom
Justyn McLean - Game Boy at Mirror
Video game producer
Video game industry occupations | Video game producer | Technology | 1,307 |
12,279,653 | https://en.wikipedia.org/wiki/Educator%20Astronaut%20Project | The Educator Astronaut Project is a NASA program to educate students and spur excitement in science, technology, engineering, math, and space exploration. It is a successor to the Teacher in Space Project of the 1980s, which NASA cancelled after the death of teacher-astronaut Christa McAuliffe in the Space Shuttle Challenger disaster (STS-51-L) amid concerns about the risk of sending civilians into space.
History
In the 1990s, NASA created the Educator Astronaut Project, which carries on the objectives of the Teacher in Space Program—seeking to elevate teaching as a profession and inspire students. Unlike the Teacher in Space Program, educator astronauts are fully trained astronauts who do the same jobs and duties that any other astronaut does. They fly as crew members with critical mission responsibilities, as well as education-related goals. In addition to their technical assignments, they assist other astronauts in connecting to students and teachers through space exploration.
Joseph M. Acaba, Richard R. Arnold and Dorothy Metcalf-Lindenburger were selected as the first educator mission specialists in the 2004 class. Both Acaba and Arnold were part of the crew of STS-119, a Space Shuttle mission to the International Space Station (ISS) which was flown by Space Shuttle Discovery in March 2009. Metcalf-Lindenburger flew on STS-131 in April 2010, also visiting the ISS aboard Space Shuttle Discovery.
Barbara Morgan
Barbara Morgan, the backup to Christa McAuliffe in the Teacher in Space Project, remained involved with NASA after the Challenger disaster and continued to work with NASA's Education Division until her selection as a mission specialist in 1998. Morgan completed two years of astronaut training and evaluation, and began official duties in 2000. Morgan became the first former teacher to travel to space on STS-118. While NASA press releases and media briefings often referred to her as a "mission specialist educator" or "educator astronaut", Morgan did not train in the Educator Astronaut Project. NASA Administrator Michael D. Griffin clarified at a press conference after STS-118 that Morgan was not considered a mission specialist educator, but rather was a standard mission specialist, who had once been a teacher. Morgan's duties as a mission specialist were no different from other Shuttle mission specialists.
References
NASA programs
Space Shuttle program
Science education in the United States | Educator Astronaut Project | Astronomy | 463 |
34,160,079 | https://en.wikipedia.org/wiki/Hiroshi%20Tada%20%28engineer%29 | Dr. Hiroshi Tada was a mechanical engineer with highly notable works in the field of fracture mechanics. He was also well known as a performer of a Japanese style of top spinning known as koma-mawashi.
Koma-mawashi performances
Although koma-mawashi is traditionally a children's play activity in Japan, Dr. Tada performed this art at an expert level and included in his act elements of juggling, yo-yo, and magic, with some comedy thrown in. He was a regular performer at many festivals in the St. Louis, Missouri area, such as the Missouri Botanical Garden's Japanese Festival, the Missouri History Museum's International FunFest, Queeny Park's International Folk Fest, and Tower Grove Park's Festival of Nations.
Personal and professional life
Hiroshi Tada was born in Kyushu, Japan. He graduated from the University of Tokyo and moved to the United States to obtain his PhD. He has spent most of his life in St. Louis, Missouri. Dr. Tada was an affiliate professor of mechanical engineering at the McKelvey School of Engineering at Washington University in St. Louis and is a co-author of the Stress Analysis of Cracks Handbook.
References
Living people
Mechanical engineers
Jugglers
1939 births
University of Tokyo alumni
Washington University in St. Louis faculty | Hiroshi Tada (engineer) | Engineering | 256 |
3,192,875 | https://en.wikipedia.org/wiki/Pearson%E2%80%93Anson%20effect | The Pearson–Anson effect, discovered in 1922 by Stephen Oswald Pearson and Horatio Saint George Anson, is the phenomenon of an oscillating electric voltage produced by a neon bulb connected across a capacitor, when a direct current is applied through a resistor. This circuit, now called the Pearson-Anson oscillator, neon lamp oscillator, or sawtooth oscillator, is one of the simplest types of relaxation oscillator. It generates a sawtooth output waveform. It has been used in low frequency applications such as blinking warning lights, stroboscopes, tone generators in electronic organs and other electronic music circuits, and in time base generators and deflection circuits of early cathode-ray tube oscilloscopes. Since the development of microelectronics, these simple negative resistance oscillators have been superseded in many applications by more flexible semiconductor relaxation oscillators such as the 555 timer IC.
Neon bulb as a switching device
A neon bulb, often used as an indicator lamp in appliances, consists of a glass bulb containing two electrodes, separated by an inert gas such as neon at low pressure. Its nonlinear current-voltage characteristics (diagram below) allow it to function as a switching device.
When a voltage is applied across the electrodes, the gas conducts almost no electric current until a threshold voltage is reached (point b), called the firing or breakdown voltage, Vb. At this voltage electrons in the gas are accelerated to a high enough speed to knock other electrons off gas atoms, which go on to knock off more electrons in a chain reaction. The gas in the bulb ionizes, starting a glow discharge, and its resistance drops to a low value. In its conducting state the current through the bulb is limited only by the external circuit. The voltage across the bulb drops to a lower voltage called the maintaining voltage Vm. The bulb will continue to conduct current until the applied voltage drops below the extinction voltage Ve (point d), which is usually close to the maintaining voltage. Below this voltage, the current provides insufficient energy to keep the gas ionized, so the bulb switches back to its high resistance, nonconductive state (point a).
The bulb's "turn on" voltage Vb is higher than its "turn off" voltage Ve. This property, called hysteresis, allows the bulb to function as an oscillator. Hysteresis is due to the bulb's negative resistance, the fall in voltage with increasing current after breakdown, which is a property of all gas-discharge lamps.
Up until the 1960s, sawtooth oscillators were also built with thyratrons, which were gas-filled triode electron tubes. These worked somewhat similarly to neon bulbs: the tube would not conduct until the cathode-to-anode voltage reached a breakdown voltage. The advantage of the thyratron was that the breakdown voltage could be controlled by the voltage on the grid. This allowed the frequency of the oscillation to be changed electronically. Thyratron oscillators were used as time base generators in oscilloscopes.
Operation
In the Pearson-Anson oscillator circuit, a capacitor C is connected across the neon bulb N. The capacitor is continuously charged by current through the resistor R until the bulb conducts, discharging it again, after which it charges up again. The detailed cycle is illustrated by the hysteresis loop abcd on the current-voltage diagram:
When the supply voltage is turned on, the neon bulb is in its high resistance condition and acts like an open circuit. The current through the resistor begins to charge the capacitor and its voltage begins to rise toward the supply voltage.
When the voltage across the capacitor reaches b, the breakdown voltage of the bulb Vb, the bulb turns on and its resistance drops to a low value. The charge on the capacitor discharges rapidly through the bulb in a momentary pulse of current (c). When the voltage drops to the extinction voltage Ve of the bulb (d), the bulb turns off and the current through it drops to a low level (a). The current through the resistor begins charging the capacitor up again, and the cycle repeats.
The circuit thus functions as a low-frequency relaxation oscillator, the capacitor voltage oscillating between the breakdown and extinction voltages of the bulb in a sawtooth wave. The period is proportional to the time constant RC.
The neon lamp produces a brief flash of light each time it conducts, so the circuit can also be used as a "flasher" circuit. The dual function of the lamp as both light source and switching device gives the circuit a lower parts count and cost than many alternative flasher circuits.
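The charge-discharge cycle is straightforward to simulate. Below is a minimal numerical sketch under idealizing assumptions (exponential charging through R; the bulb's brief discharge treated as an instantaneous reset from Vb to Ve); the component values are illustrative only.

```python
def simulate_sawtooth(VS=150.0, Vb=90.0, Ve=60.0, R=1e6, C=100e-9,
                      dt=1e-5, t_end=0.15):
    """Idealized Pearson-Anson waveform: returns (time, voltage) samples."""
    v, t, samples = 0.0, 0.0, []
    while t < t_end:
        v += (VS - v) * dt / (R * C)   # charging through R toward VS
        if v >= Vb:                    # bulb fires at the breakdown voltage
            v = Ve                     # instantaneous discharge down to Ve
        samples.append((t, v))
        t += dt
    return samples

wave = simulate_sawtooth()
flashes = sum(1 for (_, a), (_, b) in zip(wave, wave[1:]) if b < a)
print(flashes)   # bulb flashes in 0.15 s (the first charge, from 0 V, is longest)
```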
Conditions for oscillation
The supply voltage VS must be greater than the bulb breakdown voltage Vb or the bulb can never conduct. Most small neon lamps have breakdown voltages between 80 and 150 volts, so they can operate on 120 Vrms mains voltage, which has a peak voltage of about 170 V. If the supply voltage is close to the breakdown voltage, the capacitor voltage will be in the "tail" of its exponential curve by the time it reaches Vb, so the frequency will depend sensitively on the breakdown threshold and supply voltage levels, causing variations in frequency. Therefore, the supply voltage is usually made significantly higher than the bulb firing voltage. This also makes the charging more linear, and the sawtooth wave more triangular.
The resistor R must also be within a certain range of values for the circuit to oscillate. This is illustrated by the load line (blue) on the IV graph. The slope of the load line is equal to R. The possible DC operating points of the circuit are at the intersection of the load line and the neon lamp's IV curve (black). In order for the circuit to be unstable and oscillate, the load line must intersect the IV curve in its negative resistance region, between b and d, where the voltage declines with increasing current. This is defined by the shaded region on the diagram. If the load line crosses the IV curve where it has positive resistance, outside the shaded region, this represents a stable operating point, so the circuit will not oscillate:
If R is too large, of the same order as the "off" leakage resistance of the bulb, the load line will cross the IV curve between the origin and b. In this region, the current through R from the supply is so low that the leakage current through the bulb bleeds it off, so the capacitor voltage never reaches Vb and the bulb never fires. The leakage resistance of most neon bulbs is greater than 100MΩ, so this is not a serious limitation.
If R is too small, the load line will cross the IV curve between c and d. In this region the current through R is too large; once the bulb has turned on, the current through R will be large enough to keep it conducting without current from the capacitor, and the voltage across the bulb will never fall to Ve so the bulb will never turn off.
Small neon bulbs will typically oscillate with values of R between 500kΩ and 20MΩ.
If C is not small, it may be necessary to add a resistor in series with the neon bulb, to limit current through it to prevent damage when the capacitor discharges. This will increase the discharge time and decrease the frequency slightly, but its effect will be negligible at low frequencies.
Frequency
The period of oscillation can be calculated from the breakdown and extinction voltage thresholds of the lamp used. During the charging period, the bulb has high resistance and can be considered an open circuit, so the rest of the oscillator constitutes an RC circuit with the capacitor voltage approaching VS exponentially, with time constant RC. If v(t) is the output voltage across the capacitor, then starting from an uncharged capacitor

v(t) = VS(1 − e^(−t/RC))

Solving for the time t at which the capacitor voltage reaches a value v gives

t = RC ln[VS / (VS − v)]

Although the first period is longer than the others because the voltage starts from zero, the voltage waveforms of subsequent periods are identical to the first between Ve and Vb. So the period T is the interval between the time when the voltage reaches Ve and the time when the voltage reaches Vb:

T = RC ln[(VS − Ve) / (VS − Vb)]
This formula is only valid for oscillation frequencies up to about 200 Hz; above this various time delays cause the actual frequency to be lower than this. Due to the time required to ionize and deionize the gas, neon lamps are slow switching devices, and the neon lamp oscillator is limited to a top frequency of about 20 kHz.
The breakdown and extinction voltages of neon lamps may vary between similar parts; manufacturers usually specify only wide ranges for these parameters. So if a precise frequency is desired the circuit must be adjusted by trial and error. The thresholds also change with temperature, so the frequency of neon lamp oscillators is not particularly stable.
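A worked example of the period calculation (a sketch with illustrative values; as noted above, the closed-form result is only reliable below roughly 200 Hz):

```python
import math

def sawtooth_period(VS, Vb, Ve, R, C):
    """Closed-form Pearson-Anson period T = RC ln((VS - Ve)/(VS - Vb))."""
    if not (VS > Vb > Ve > 0):
        raise ValueError("oscillation requires VS > Vb > Ve > 0")
    return R * C * math.log((VS - Ve) / (VS - Vb))

# e.g. VS = 150 V, Vb = 90 V, Ve = 60 V, R = 1 Mohm, C = 100 nF:
T = sawtooth_period(150, 90, 60, 1e6, 100e-9)
print(T, 1 / T)   # ~0.0405 s, i.e. ~25 Hz
```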
Forced oscillations and chaotic behavior
Like other relaxation oscillators, the neon bulb oscillator has poor frequency stability, but it can be synchronized (entrained) to an external periodic voltage applied in series with the neon bulb. Even if the external frequency is different from the natural frequency of the oscillator, the peaks of the applied signal can exceed the breakdown threshold of the bulb, discharging the capacitor prematurely, so that the period of the oscillator becomes locked to the applied signal.
Interesting behavior can result from varying the amplitude and frequency of the external voltage. For instance, the oscillator may produce an oscillating voltage whose frequency is a submultiple of the external frequency. This phenomenon is known as "submultiplication" or "demultiplication", and was first observed in 1927 by Balthasar van der Pol and his collaborator Jan van der Mark. In some cases the ratio of the external frequency to the frequency of the oscillation observed in the circuit may be a rational number, or even an irrational one (the latter case is known as the "quasiperiodic" regime). When the periodic and quasiperiodic regimes overlap, the behavior of the circuit may become aperiodic, meaning that the pattern of the oscillations never repeats. This aperiodicity corresponds to the behavior of the circuit becoming chaotic (see chaos theory).
The forced neon bulb oscillator was the first system in which chaotic behavior was observed. Writing about their demultiplication experiments, van der Pol and van der Mark noted that an "irregular noise" could often be heard in the telephone receivers they used to monitor the circuit.
Any periodic oscillation would have produced a musical tone; only aperiodic, chaotic oscillations would produce an irregular noise. This is thought to have been the first observation of chaos, although van der Pol and van der Mark did not realize its significance at the time.
See also
Relaxation oscillator
Schmitt trigger
555 timer
Negative resistance
Notes
References
S. O. Pearson and H. St. G. Anson, Demonstration of Some Electrical Properties of Neon-filled Lamps, Proceedings of the Physical Society of London, vol.34, no. 1 (December 1921), pp. 175–176
S. O. Pearson and H. St. G. Anson, The Neon Tube as a Means of Producing Intermittent Currents, Proceedings of the Physical Society of London, vol. 34, no. 1 (December 1921), pp. 204–212
Analog circuits
Electronic oscillators | Pearson–Anson effect | Engineering | 2,378 |
33,849,205 | https://en.wikipedia.org/wiki/International%20Architecture%20Biennale%20of%20S%C3%A3o%20Paulo | The International Architecture Biennale of São Paulo (, BIA), Brazil, is an international exhibition to recognize art and innovation in architecture.
History
The first BIA was held in 1973.
The BIA is held on alternating years from the São Paulo Art Biennial, another major international event hosted by the city. Both biennials used to be held in the Pavilhão Ciccillo Matarazzo building in the Ibirapuera Park (Parque do Ibirapuera). The large pavilion was designed by a team headed by Oscar Niemeyer and Hélio Uchôa.
The theme of the 2011 BIA was "Architecture for All – Building Citizenship". It took place November 1 to December 4, 2011.
The theme of the 2019 BIA was "Everyday". For the first time, the team was selected via an open call and an international jury selected the winning entry. The winning team was composed of Vanessa Grossman, Charlotte Malterre-Barthes, and Ciro Miguel. The 12th Biennale took place from September 10 to December 9, 2019, at Sesc 24 of May and Centro Cultural São Paulo.
See also
Architecture of Brazil
Architectural design competition
References
External links
Architecture festivals
Cultural festivals in Brazil | International Architecture Biennale of São Paulo | Engineering | 249 |
75,674,975 | https://en.wikipedia.org/wiki/Printed%20circuit%20board%20manufacturing | Printed circuit board manufacturing is the process of manufacturing bare printed circuit boards (PCBs) and populating them with electronic components. It includes all the processes to produce the full assembly of a board into a functional circuit board.
In board manufacturing, multiple PCBs are grouped on a single panel for efficient processing. After assembly, they are separated (depaneled). Various techniques, such as silk screening and photoengraving, replicate the desired copper patterns on the PCB layers. Multi-layer boards are created by laminating different layers under heat and pressure. Holes for vias (vertical connections between layers) are also drilled.
The final assembly involves placing components onto the PCB and soldering them in place. This process can include through-hole technology (in which the component goes through the board) or surface-mount technology (SMT) (in which the component lays on top of the board).
Design
Manufacturing starts from the fabrication data generated by computer-aided design, together with component information. The fabrication data is read into the CAM (computer-aided manufacturing) software. CAM performs the following functions:
Input of the fabrication data
Verification of the data
Compensation for deviations in the manufacturing processes (e.g. scaling to compensate for distortions during lamination)
Panelization
Output of the digital tools (copper patterns, drill files, inspection, and others)
Initially PCBs were designed manually by creating a photomask on a clear mylar sheet, usually at two or four times the true size. Starting from the schematic diagram the component pin pads were laid out on the mylar and then traces were routed to connect the pads. Rub-on dry transfers of common component footprints increased efficiency. Traces were made with self-adhesive tape. Pre-printed non-reproducing grids on the mylar assisted in layout. The finished photomask was photolithographically reproduced onto a photoresist coating on the blank copper-clad boards.
Modern PCBs are designed with dedicated layout software, generally in the following steps:
Schematic capture through an electronic design automation (EDA) tool.
Card dimensions and template are decided based on required circuitry and enclosure of the PCB.
The positions of the components and heat sinks are determined.
Layer stack of the PCB is decided, with one to tens of layers depending on complexity. Ground and power planes are decided. A power plane is the counterpart to a ground plane and behaves as an AC signal ground while providing DC power to the circuits mounted on the PCB. Signal interconnections are traced on signal planes. Signal planes can be on the outer as well as inner layers. For optimal EMI performance high frequency signals are routed in internal layers between power or ground planes.
Line impedance is determined using the dielectric layer thickness, routing copper thickness, and trace width. Trace separation is also taken into account in the case of differential signals. Microstrip, stripline, or dual stripline can be used to route signals; a worked impedance estimate is sketched after this list.
Components are placed. Thermal considerations and geometry are taken into account. Vias and lands are marked.
Signal traces are routed. Electronic design automation tools usually create clearances and connections in power and ground planes automatically.
Fabrication data consists of a set of Gerber format files, a drill file, and a pick-and-place file.
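For the line-impedance step above, a common first-pass estimate for a surface microstrip is the IPC-2141 approximation, sketched below. It is a rough closed-form formula (reasonable only for about 0.1 < w/h < 2.0 and 1 < εr < 15) and not a substitute for a field solver; the dimensions in the example are illustrative.

```python
import math

def microstrip_z0(h, w, t, er):
    """IPC-2141 estimate of surface-microstrip impedance in ohms.
    h: dielectric height, w: trace width, t: copper thickness (same units)."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

# e.g. 1.6 mm FR-4 (er ~ 4.3), 2.8 mm trace, 35 um (0.035 mm) copper:
print(round(microstrip_z0(1.6, 2.8, 0.035, 4.3), 1))   # ~52 ohms
```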
Panelization
Several small printed circuit boards can be grouped together for processing as a panel. A panel consisting of a design duplicated n-times is also called an n-panel, whereas a multi-panel combines several different designs onto a single panel. The outer tooling strip often includes tooling holes, a set of panel fiducials, a test coupon, and may include hatched copper pour or similar patterns for even copper distribution over the whole panel in order to avoid bending. The assemblers often mount components on panels rather than single PCBs because this is efficient. Panelization may also be necessary for boards with components placed near an edge of the board because otherwise the board could not be mounted during assembly. Most assembly shops require a free area of at least 10 mm around the board.
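A back-of-the-envelope panel-count check using the clearances mentioned above (a sketch only: real panelizers also handle board rotation, tooling holes, and fiducials; the 2.5 mm spacing and 10 mm margin are assumptions drawn from the typical values in the text):

```python
def boards_per_panel(panel_w, panel_h, board_w, board_h,
                     margin=10.0, spacing=2.5):
    """Count of identical boards fitting on a panel (dimensions in mm).
    margin: free strip on each panel edge; spacing: routing gap between boards."""
    nx = int((panel_w - 2 * margin + spacing) // (board_w + spacing))
    ny = int((panel_h - 2 * margin + spacing) // (board_h + spacing))
    return max(nx, 0) * max(ny, 0)

print(boards_per_panel(300, 250, 50, 40))   # 5 x 5 = 25 boards
```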
Depaneling
The panel is eventually broken into individual PCBs along perforations or grooves in the panel through milling or cutting. For milled panels a common distance between the individual boards is 2–3 mm. Today depaneling is often done by lasers which cut the board with no contact. Laser depaneling reduces stress on the fragile circuits, improving the yield of defect-free units.
Copper patterning
The first step is to replicate the pattern in the fabricator's CAM system on a protective mask on the copper foil PCB layers. Subsequent etching removes the unwanted copper unprotected by the mask. (Alternatively, a conductive ink can be ink-jetted on a blank (non-conductive) board. This technique is also used in the manufacture of hybrid circuits.)
Silk screen printing uses etch-resistant inks to create the protective mask.
Photoengraving uses a photomask and developer to selectively remove a UV-sensitive photoresist coating and thus create a photoresist mask that will protect the copper below it. Direct imaging techniques are sometimes used for high-resolution requirements. Experiments have been made with thermal resist. A laser may be used instead of a photomask. This is known as maskless lithography or direct imaging.
PCB milling uses a two or three-axis mechanical milling system to mill away the copper foil from the substrate. A PCB milling machine (referred to as a 'PCB Prototyper') operates in a similar way to a plotter, receiving commands from the host software that control the position of the milling head in the x, y, and (if relevant) z axis.
Laser resist ablation involves spraying black paint onto copper clad laminate, then placing the board into CNC laser plotter. The laser raster-scans the PCB and ablates (vaporizes) the paint where no resist is wanted. (Note: laser copper ablation is rarely used and is considered experimental.)
Laser etching, in which the copper may be removed directly by a CNC laser. Like PCB milling above, this is used mainly for prototyping.
EDM etching uses an electrical discharge to remove a metal from a substrate submerged into a dielectric fluid.
The method chosen depends on the number of boards to be produced and the required resolution.
Large volume
Silk screen printing – Used for PCBs with bigger features
Photoengraving – Used when finer features are required
Small volume
Print onto transparent film and use as photo mask along with photo-sensitized boards, then etch. (Alternatively, use a film photoplotter.)
Laser resist ablation
PCB milling
Laser etching
Hobbyist
Laser-printed resist: Laser-print onto toner transfer paper, heat-transfer with an iron or modified laminator onto bare laminate, soak in water bath, touch up with a marker, then etch.
Vinyl film and resist, non-washable marker, some other methods. Labor-intensive, only suitable for single boards.
Etching
The process by which copper traces are applied to the surface is known as etching after the subtractive method of the process, though there are also additive and semi-additive methods.
Subtractive methods remove copper from an entirely copper-coated board to leave only the desired copper pattern. The simplest method, used for small-scale production and often by hobbyists, is immersion etching, in which the board is submerged in an etching solution such as ferric chloride. Compared with methods used for mass production, the etching time is long. Heat and agitation can be applied to the bath to speed the etching rate. In bubble etching, air is passed through the etchant bath to agitate the solution and speed up etching. Splash etching uses a motor-driven paddle to splash boards with etchant; the process has become commercially obsolete since it is not as fast as spray etching. In spray etching, the etchant solution is distributed over the boards by nozzles, and recirculated by pumps. Adjustment of the nozzle pattern, flow rate, temperature, and etchant composition gives predictable control of etching rates and high production rates.

As more copper is consumed from the boards, the etchant becomes saturated and less effective; different etchants have different capacities for copper, with some as high as 150 grams of copper per liter of solution. In commercial use, etchants can be regenerated to restore their activity, and the dissolved copper recovered and sold. Small-scale etching requires attention to disposal of used etchant, which is corrosive and toxic due to its metal content.

The etchant removes copper on all surfaces not protected by the resist. "Undercut" occurs when etchant attacks the thin edge of copper under the resist; this can reduce conductor widths and cause open circuits. Careful control of etch time is required to prevent undercut. Where metallic plating is used as a resist, it can "overhang", which can cause short circuits between adjacent traces when closely spaced. Overhang can be removed by wire-brushing the board after etching.
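The etchant-capacity figure above lends itself to a quick consumption estimate. The sketch assumes 1 oz/ft² copper foil (about 35 µm thick), the 150 g/L capacity cited above, and an illustrative 70% of the copper being etched away.

```python
def etchant_litres(board_area_m2, removed_fraction=0.7,
                   cu_thickness_um=35.0, capacity_g_per_L=150.0):
    """Litres of etchant consumed removing copper from a batch of boards."""
    CU_DENSITY_G_PER_M3 = 8.96e6          # density of copper
    cu_mass_g = (board_area_m2 * removed_fraction
                 * cu_thickness_um * 1e-6 * CU_DENSITY_G_PER_M3)
    return cu_mass_g / capacity_g_per_L

print(round(etchant_litres(10.0), 1))     # ~14.6 L for 10 m^2 of board surface
```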
In additive methods the pattern is electroplated onto a bare substrate using a complex process. The advantage of the additive method is that less material is needed and less waste is produced. In the full additive process the bare laminate is covered with a photosensitive film which is imaged (exposed to light through a mask and then developed, which removes the unexposed film). The exposed areas are sensitized in a chemical bath, usually containing palladium and similar to that used for through-hole plating, which makes the exposed area capable of bonding metal ions. The laminate is then plated with copper in the sensitized areas. When the mask is stripped, the PCB is finished.
Semi-additive is the most common process: the unpatterned board has a thin layer of copper already on it. A reverse mask is then applied (unlike a subtractive-process mask, this mask exposes those parts of the substrate that will eventually become the traces). Additional copper is then plated onto the board in the unmasked areas; copper may be plated to any desired weight. Tin-lead or other surface platings are then applied. The mask is stripped away and a brief etching step removes the now-exposed bare original copper laminate from the board, isolating the individual traces. Some single-sided boards which have plated-through holes are made in this way. General Electric made consumer radio sets in the late 1960s using additive boards. The (semi-)additive process is commonly used for multi-layer boards as it facilitates the plating-through of the holes to produce conductive vias in the circuit board.
Industrial etching is usually done with ammonium persulfate or ferric chloride. For PTH (plated-through holes), additional steps of electroless deposition are done after the holes are drilled, then copper is electroplated to build up the thickness, the boards are screened, and plated with tin/lead. The tin/lead becomes the resist leaving the bare copper to be etched away.
Lamination
Multi-layer printed circuit boards have trace layers inside the board. This is achieved by laminating a stack of materials in a press by applying pressure and heat for a period of time. This results in an inseparable one-piece product. For example, a four-layer PCB can be fabricated by starting with a two-sided copper-clad laminate, etching the circuitry on both sides, and then laminating pre-preg and copper foil to the top and bottom. It is then drilled, plated, and etched again to get traces on the top and bottom layers.
The inner layers are given a complete machine inspection before lamination because mistakes cannot be corrected afterwards. Automatic optical inspection (AOI) machines compare an image of the board with the digital image generated from the original design data. Automated Optical Shaping (AOS) machines can then add missing copper or remove excess copper using a laser, reducing the number of PCBs that have to be discarded. PCB tracks can have a width of just 10 micrometers.
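In spirit, the AOI comparison step reduces to a registered, binarized image diff against the pattern rendered from the design data. The sketch below is a minimal illustration of that diff only, assuming imaging and registration have already been done; the function name and noise threshold are invented for the example:

    import numpy as np

    def aoi_compare(design, scan, min_defect_px=4):
        """Count pixels where the scanned copper pattern differs from the design.

        design, scan: 2-D boolean arrays (True = copper), already registered.
        Returns (missing, excess) pixel counts above a simple noise floor.
        """
        diff = design ^ scan
        missing = int(np.count_nonzero(diff & design))   # copper expected but absent
        excess = int(np.count_nonzero(diff & scan))      # copper present but not designed
        return (missing if missing >= min_defect_px else 0,
                excess if excess >= min_defect_px else 0)

    design = np.zeros((8, 8), bool); design[3, :] = True   # a horizontal trace
    scan = design.copy(); scan[3, 2:6] = False             # simulated open circuit
    print(aoi_compare(design, scan))                       # -> (4, 0): missing copper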
Drilling
Holes through a PCB are typically drilled with drill bits coated with tungsten carbide. Coated tungsten carbide is used because board materials are abrasive. High-speed-steel bits would dull quickly, tearing the copper and ruining the board. Drilling is done by computer-controlled drilling machines, using a drill file or Excellon file that describes the location and size of each drilled hole.
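A drill file is essentially a list of tool (diameter) definitions followed by hole coordinates. The following sketch parses a simplified, decimal-coordinate Excellon-like format; real Excellon files carry header directives (units, coordinate format, zero-suppression rules) that are deliberately ignored here:

    import re

    def parse_drill(text):
        """Parse a simplified Excellon-style drill file into (x, y, diameter) holes."""
        tools, holes, current = {}, [], None
        for line in text.splitlines():
            line = line.strip()
            m = re.fullmatch(r"T(\d+)C([\d.]+)", line)       # tool definition, e.g. T01C0.80
            if m:
                tools[m.group(1)] = float(m.group(2))
                continue
            m = re.fullmatch(r"T(\d+)", line)                # tool selection
            if m:
                current = m.group(1)
                continue
            m = re.fullmatch(r"X([-\d.]+)Y([-\d.]+)", line)  # hole coordinate
            if m and current in tools:
                holes.append((float(m.group(1)), float(m.group(2)), tools[current]))
        return holes

    sample = "T01C0.80\nT02C0.30\nT01\nX10.5Y20.0\nT02\nX11.2Y20.0\n"
    print(parse_drill(sample))   # [(10.5, 20.0, 0.8), (11.2, 20.0, 0.3)]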
Vias
Holes may be made conductive, by electroplating or inserting hollow metal eyelets, to connect board layers. Some conductive holes are intended for the insertion of through-hole-component leads. Others, used to connect board layers, are called vias.
Micro vias
When vias with a diameter smaller than 76.2 micrometers are required, drilling with mechanical bits is impossible because of high rates of wear and breakage. In this case, the vias may be laser drilled—evaporated by lasers. Laser-drilled vias typically have an inferior surface finish inside the hole. These holes are called micro vias and can have diameters as small as 10 micrometers.
Blind and buried vias
It is also possible with controlled-depth drilling, laser drilling, or by pre-drilling the individual sheets of the PCB before lamination, to produce holes that connect only some of the copper layers, rather than passing through the entire board. These holes are called blind vias when they connect an internal copper layer to an outer layer, or buried vias when they connect two or more internal copper layers and no outer layers. Laser drilling machines can drill thousands of holes per second and can use either UV or CO2 lasers.
The hole walls for boards with two or more layers can be made conductive and then electroplated with copper to form plated-through holes. These holes electrically connect the conducting layers of the PCB.
Smear
For multi-layer boards, those with three layers or more, drilling typically produces a smear of the high-temperature decomposition products of the bonding agent in the laminate system. Before the holes can be plated through, this smear must be removed by a chemical de-smear process, or by plasma etching. The de-smear process ensures that a good connection is made to the copper layers when the hole is plated through. On high-reliability boards a process called etch-back is performed, either chemically with a potassium permanganate-based etchant or by plasma etching. The etch-back removes resin and the glass fibers so that the copper layers extend into the hole and, as the hole is plated, become integral with the deposited copper.
Plating and coating
Proper plating or surface finish selection can be critical to process yield, the amount of rework, field failure rate, and reliability.
PCBs may be plated with solder, tin, or gold over nickel.
After PCBs are etched and then rinsed with water, the solder mask is applied, and then any exposed copper is coated with solder, nickel/gold, or some other anti-corrosion coating.
It is important to use solder compatible with both the PCB and the parts used. An example is a ball grid array (BGA) using tin-lead solder balls for its connections: the balls can be lost if the package is placed on bare copper traces or if lead-free solder paste is used.
Other platings used are organic solderability preservative (OSP), immersion silver (IAg), immersion tin (ISn), electroless nickel immersion gold (ENIG) coating, electroless nickel electroless palladium immersion gold (ENEPIG), and direct gold plating (over nickel). Edge connectors, placed along one edge of some boards, are often nickel-plated then gold-plated using ENIG. Another coating consideration is rapid diffusion of coating metal into tin solder. Tin forms intermetallics such as Cu6Sn5 and Ag3Sn that dissolve into the tin liquidus or solidus (at 50 °C), stripping surface coating or leaving voids.
Electrochemical migration (ECM) is the growth of conductive metal filaments on or in a printed circuit board (PCB) under the influence of a DC voltage bias. Silver, zinc, and aluminum are known to grow whiskers under the influence of an electric field. Silver also grows conducting surface paths in the presence of halide and other ions, making it a poor choice for electronics use. Tin will grow "whiskers" due to tension in the plated surface. Tin-lead or solder plating also grows whiskers, which are only reduced by lowering the percentage of tin. Reflow to melt solder or tin plate to relieve surface stress lowers whisker incidence. Another coating issue is tin pest, the transformation of tin to a powdery allotrope at low temperature.
Solder resist application
Areas that should not be soldered may be covered with solder resist (solder mask). The solder mask is what gives PCBs their characteristic green color, although it is also available in several other colors, such as red, blue, purple, yellow, black and white. One of the most common solder resists used today is called "LPI" (liquid photoimageable solder mask). A photo-sensitive coating is applied to the surface of the PWB, then exposed to light through the solder mask image film, and finally developed, washing away the unexposed areas. Dry-film solder mask is similar to the dry film used to image the PWB for plating or etching: after being laminated to the PWB surface it is imaged and developed as LPI is. A method once common but no longer much used, because of its low accuracy and resolution, is to screen-print epoxy ink. In addition to repelling solder, solder resist also provides protection from the environment to the copper that would otherwise be exposed.
Legend / silkscreen
A legend (also known as silk or silkscreen) is often printed on one or both sides of the PCB. It contains the component designators, switch settings, test points and other indications helpful in assembling, testing, servicing, and sometimes using the circuit board.
There are three methods to print the legend:
Silkscreen printing epoxy ink was the established method, resulting in the alternative name.
Liquid photo imaging is a more accurate method than screen printing.
Inkjet printing is increasingly used. Inkjet printers can print variable data, unique to each PCB unit, such as text, a serial number, or a bar code.
Bare-board test
Boards with no components installed are usually bare-board tested for "shorts" and "opens". This is called electrical test or PCB e-test. A short is a connection between two points that should not be connected. An open is a missing connection between points that should be connected. For high-volume testing, a rigid needle adapter makes contact with copper lands on the board. The fixture or adapter is a significant fixed cost and this method is only economical for high-volume or high-value production. For small or medium volume production flying probe testers are used where test probes are moved over the board by an XY drive to make contact with the copper lands. There is no need for a fixture and hence the fixed costs are much lower. The CAM system instructs the electrical tester to apply a voltage to each contact point as required and to check that this voltage appears on the appropriate contact points and only on these.
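Conceptually, the electrical test reduces to checking measured continuity against the design netlist: every pad pair within a net must conduct (else an open), and no pad pair across different nets may conduct (else a short). A toy sketch of that comparison, with invented pad names and a stubbed-out measurement function standing in for the real probe hardware:

    from itertools import combinations

    def etest(netlist, measured_connected):
        """netlist: dict net_name -> set of pads; measured_connected(a, b) -> bool."""
        opens, shorts = [], []
        for net, pads in netlist.items():                 # pads in a net must conduct
            for a, b in combinations(sorted(pads), 2):
                if not measured_connected(a, b):
                    opens.append((net, a, b))
        for (n1, p1), (n2, p2) in combinations(netlist.items(), 2):
            if any(measured_connected(a, b) for a in p1 for b in p2):
                shorts.append((n1, n2))                   # distinct nets must be isolated
        return opens, shorts

    nets = {"GND": {"U1.4", "C1.2"}, "VCC": {"U1.8", "C1.1"}}
    joined = {frozenset(("U1.4", "C1.2")), frozenset(("U1.8", "C1.1")),
              frozenset(("C1.1", "C1.2"))}                # solder bridge between C1 pads
    conn = lambda a, b: frozenset((a, b)) in joined
    print(etest(nets, conn))                              # ([], [('GND', 'VCC')])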
Assembly
In assembly the bare board is populated (or "stuffed") with electronic components to form a functional printed circuit assembly (PCA), sometimes called a "printed circuit board assembly" (PCBA). In through-hole technology, the component leads are inserted in holes surrounded by conductive pads; the holes keep the components in place. In surface-mount technology (SMT), the component is placed on the PCB so that the pins line up with the conductive pads or lands on the surfaces of the PCB; solder paste, which was previously applied to the pads, holds the components in place temporarily; if surface-mount components are applied to both sides of the board, the bottom-side components are glued to the board. In both through hole and surface mount, the components are then soldered; once cooled and solidified, the solder holds the components in place permanently and electrically connects them to the board.
There are a variety of soldering techniques used to attach components to a PCB. High volume production is usually done with a pick-and-place machine and bulk wave soldering for through-hole parts or reflow ovens for SMT components or through-hole parts, but skilled technicians are able to hand-solder very tiny parts (for instance 0201 packages which are 0.02 in. by 0.01 in.) under a microscope, using tweezers and a fine-tip soldering iron, for small volume prototypes. Selective soldering may be used for delicate parts. Some SMT parts cannot be soldered by hand, such as ball grid array (BGA) packages. All through-hole components can be hand soldered, making them favored for prototyping where size, weight, and the use of the exact components that would be used in high volume production are not concerns.
Often, through-hole and surface-mount construction must be combined in a single assembly because some required components are available only in surface-mount packages, while others are available only in through-hole packages. Or, even if all components are available in through-hole packages, it might be desired to take advantage of the size, weight, and cost reductions obtainable by using some available surface-mount devices. Another reason to use both methods is that through-hole mounting can provide needed strength for components likely to endure physical stress (such as connectors that are frequently mated and demated or that connect to cables expected to impart substantial stress to the PCB-and-connector interface), while components that are expected to go untouched will take up less space using surface-mount techniques. For further comparison, see the SMT page.
After the board has been populated it may be tested in a variety of ways:
While the power is off, visual inspection, automated optical inspection. JEDEC guidelines for PCB component placement, soldering, and inspection are commonly used to maintain quality control in this stage of PCB manufacturing.
While the power is off, analog signature analysis, power-off testing.
While the power is on, in-circuit test, where physical measurements (for example, voltage) can be done.
While the power is on, functional test, checking whether the PCB does what it was designed to do.
To facilitate these tests, PCBs may be designed with extra pads to make temporary connections. Sometimes these pads must be isolated with resistors. The in-circuit test may also exercise boundary scan test features of some components. In-circuit test systems may also be used to program nonvolatile memory components on the board.
In boundary scan testing, test circuits integrated into various ICs on the board form temporary connections between the PCB traces to test that the ICs are mounted correctly. Boundary scan testing requires that all the ICs to be tested use a standard test configuration procedure, the most common one being the Joint Test Action Group (JTAG) standard. The JTAG test architecture provides a means to test interconnects between integrated circuits on a board without using physical test probes, by using circuitry in the ICs to employ the IC pins themselves as test probes. JTAG tool vendors provide various types of stimuli and sophisticated algorithms, not only to detect the failing nets, but also to isolate the faults to specific nets, devices, and pins.
When boards fail the test, technicians may desolder and replace failed components, a task known as rework.
Protection and packaging
PCBs intended for extreme environments often have a conformal coating, which is applied by dipping or spraying after the components have been soldered. The coat prevents corrosion and leakage currents or shorting due to condensation. The earliest conformal coats were wax; modern conformal coats are usually dips of dilute solutions of silicone rubber, polyurethane, acrylic, or epoxy. Another technique for applying a conformal coating is for plastic to be sputtered onto the PCB in a vacuum chamber. The chief disadvantage of conformal coatings is that servicing of the board is rendered extremely difficult.
Many assembled PCBs are static sensitive, and therefore they must be placed in antistatic bags during transport. When handling these boards, the user must be grounded (earthed). Improper handling techniques might transmit an accumulated static charge through the board, damaging or destroying components. The damage might not immediately affect function but might lead to early failure later on, cause intermittent operating faults, or cause a narrowing of the range of environmental and electrical conditions under which the board functions properly.
Electronics manufacturing
Printed circuit board manufacturing | Printed circuit board manufacturing | Engineering | 5,225 |
23,360,896 | https://en.wikipedia.org/wiki/Doob%20decomposition%20theorem | In the theory of stochastic processes in discrete time, a part of the mathematical theory of probability, the Doob decomposition theorem gives a unique decomposition of every adapted and integrable stochastic process as the sum of a martingale and a predictable process (or "drift") starting at zero. The theorem was proved by and is named for Joseph L. Doob.
The analogous theorem in the continuous-time case is the Doob–Meyer decomposition theorem.
Statement
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, with $I = \{0, 1, 2, \ldots, N\}$ for some $N \in \mathbb{N}$ or $I = \mathbb{N}_0$ a finite or countably infinite index set, $(\mathcal{F}_n)_{n \in I}$ a filtration of $\mathcal{F}$, and $X = (X_n)_{n \in I}$ an adapted stochastic process with $\mathbb{E}[|X_n|] < \infty$ for all $n \in I$. Then there exist a martingale $M = (M_n)_{n \in I}$ and an integrable predictable process $A = (A_n)_{n \in I}$ starting with $A_0 = 0$ such that $X_n = M_n + A_n$ for every $n \in I$.
Here predictable means that $A_n$ is $\mathcal{F}_{n-1}$-measurable for every $n \in I \setminus \{0\}$.
This decomposition is almost surely unique.
Remark
The theorem is valid word for word also for stochastic processes taking values in the $d$-dimensional Euclidean space $\mathbb{R}^d$ or the complex vector space $\mathbb{C}^d$. This follows from the one-dimensional version by considering the components individually.
Proof
Existence
Using conditional expectations, define the processes $A$ and $M$, for every $n \in I$, explicitly by

$A_n = \sum_{k=1}^{n} \bigl( \mathbb{E}[X_k \mid \mathcal{F}_{k-1}] - X_{k-1} \bigr)$ (1)

and

$M_n = X_0 + \sum_{k=1}^{n} \bigl( X_k - \mathbb{E}[X_k \mid \mathcal{F}_{k-1}] \bigr),$ (2)

where the sums for $n = 0$ are empty and defined as zero. Here $A$ adds up the expected increments of $X$, and $M$ adds up the surprises, i.e., the part of every $X_k$ that is not known one time step before.
Due to these definitions, $A_{n+1}$ (if $n + 1 \in I$) and $M_n$ are $\mathcal{F}_n$-measurable because the process $X$ is adapted, $\mathbb{E}[|A_n|] < \infty$ and $\mathbb{E}[|M_n|] < \infty$ because the process $X$ is integrable, and the decomposition $X_n = M_n + A_n$ is valid for every $n \in I$. The martingale property

$\mathbb{E}[M_n - M_{n-1} \mid \mathcal{F}_{n-1}] = 0$ a.s.

also follows from the above definition (2), for every $n \in I \setminus \{0\}$.
Uniqueness
To prove uniqueness, let $X = M' + A'$ be an additional decomposition. Then the process $Y = M - M' = A' - A$ is a martingale, implying that

$\mathbb{E}[Y_n \mid \mathcal{F}_{n-1}] = Y_{n-1}$ a.s.,

and also predictable, implying that

$\mathbb{E}[Y_n \mid \mathcal{F}_{n-1}] = Y_n$ a.s.

for any $n \in I \setminus \{0\}$. Since $Y_0 = A'_0 - A_0 = 0$ by the convention about the starting point of the predictable processes, this implies iteratively that $Y_n = 0$ almost surely for all $n \in I$, hence the decomposition is almost surely unique.
Corollary
A real-valued stochastic process $X$ is a submartingale if and only if it has a Doob decomposition into a martingale $M$ and an integrable predictable process $A$ that is almost surely increasing. It is a supermartingale if and only if $A$ is almost surely decreasing.
Proof
If $X$ is a submartingale, then

$\mathbb{E}[X_k \mid \mathcal{F}_{k-1}] \geq X_{k-1}$ a.s.

for all $k \in I \setminus \{0\}$, which is equivalent to saying that every term in definition (1) of $A$ is almost surely non-negative, hence $A$ is almost surely increasing. The equivalence for supermartingales is proved similarly.
Example
Let $X = (X_n)_{n \in \mathbb{N}_0}$ be a sequence of independent, integrable, real-valued random variables. They are adapted to the filtration generated by the sequence, i.e. $\mathcal{F}_n = \sigma(X_0, \ldots, X_n)$ for all $n \in \mathbb{N}_0$. By (1) and (2), the Doob decomposition is given by

$A_n = \sum_{k=1}^{n} \bigl( \mathbb{E}[X_k] - X_{k-1} \bigr)$

and

$M_n = X_0 + \sum_{k=1}^{n} \bigl( X_k - \mathbb{E}[X_k] \bigr).$

If the random variables of the original sequence have mean zero, this simplifies to

$A_n = -\sum_{k=0}^{n-1} X_k$

and

$M_n = \sum_{k=0}^{n} X_k,$

hence both processes are (possibly time-inhomogeneous) random walks. If the sequence $X = (X_n)_{n \in \mathbb{N}_0}$ consists of symmetric random variables taking the values $+1$ and $-1$, then $X$ is bounded, but the martingale $M$ and the predictable process $A$ are unbounded simple random walks (and not uniformly integrable), and Doob's optional stopping theorem might not be applicable to the martingale $M$ unless the stopping time has a finite expectation.
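The decomposition in this example is easy to verify numerically. A small Python sketch for the mean-zero ±1 case (for independent increments the conditional expectation E[X_k | F_{k-1}] reduces to the unconditional mean, here zero):

    import random

    random.seed(1)
    X = [random.choice((-1, 1)) for _ in range(10)]   # X_0, ..., X_9, mean zero

    # Doob decomposition for independent mean-zero variables:
    # M_n = sum_{k=0}^{n} X_k,   A_n = -sum_{k=0}^{n-1} X_k
    M = [sum(X[:n + 1]) for n in range(len(X))]
    A = [-sum(X[:n]) for n in range(len(X))]

    assert A[0] == 0                                   # predictable part starts at zero
    assert all(X[n] == M[n] + A[n] for n in range(len(X)))
    print("X:", X)
    print("M:", M)
    print("A:", A)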
Application
In mathematical finance, the Doob decomposition theorem can be used to determine the largest optimal exercise time of an American option. Let $X = (X_0, X_1, \ldots, X_N)$ denote the non-negative, discounted payoffs of an American option in an $N$-period financial market model, adapted to a filtration $(\mathcal{F}_0, \mathcal{F}_1, \ldots, \mathcal{F}_N)$, and let $\mathbb{Q}$ denote an equivalent martingale measure. Let $U = (U_0, U_1, \ldots, U_N)$ denote the Snell envelope of $X$ with respect to $\mathbb{Q}$. The Snell envelope is the smallest $\mathbb{Q}$-supermartingale dominating $X$ and in a complete financial market it represents the minimal amount of capital necessary to hedge the American option up to maturity. Let $U = M + A$ denote the Doob decomposition with respect to $\mathbb{Q}$ of the Snell envelope $U$ into a martingale $M = (M_0, M_1, \ldots, M_N)$ and a decreasing predictable process $A = (A_0, A_1, \ldots, A_N)$ with $A_0 = 0$. Then the largest stopping time to exercise the American option in an optimal way is

$\tau_{\max} := \begin{cases} N & \text{if } A_N = 0, \\ \min\{n \in \{0, \ldots, N-1\} \mid A_{n+1} < 0\} & \text{if } A_N < 0. \end{cases}$

Since $A$ is predictable, the event $\{\tau_{\max} = n\} = \{A_n = 0, A_{n+1} < 0\}$ is in $\mathcal{F}_n$ for every $n \in \{0, \ldots, N-1\}$, hence $\tau_{\max}$ is indeed a stopping time. It gives the last moment before the discounted value of the American option will drop in expectation; up to time $\tau_{\max}$ the discounted value process $U$ is a martingale with respect to $\mathbb{Q}$.
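As a toy illustration of how the exercise time falls out of the decomposition, the following sketch computes the Snell envelope by backward induction in a two-period binomial model and finds the first time its predictable part decreases; the payoffs Z and the risk-neutral probability q = 1/2 are made-up example inputs, not a calibrated market:

    # Toy check of tau_max in a 2-period binomial model (all numbers illustrative).
    q = 0.5                                      # assumed risk-neutral up-probability
    Z = [[1.0], [3.0, 0.5], [3.0, 1.0, 0.0]]     # Z[n][i]: discounted payoff, i up-moves

    N = len(Z) - 1
    U = [row[:] for row in Z]                    # Snell envelope: U_N = Z_N,
    for n in range(N - 1, -1, -1):               # U_n = max(Z_n, E_Q[U_{n+1} | F_n])
        for i in range(n + 1):
            cont = (1 - q) * U[n + 1][i] + q * U[n + 1][i + 1]
            U[n][i] = max(Z[n][i], cont)

    def tau_max(path):
        """First n with A_{n+1} < 0 along the path (A increment = E_Q[U_{n+1}] - U_n)."""
        i = 0
        for n in range(N):
            if (1 - q) * U[n + 1][i] + q * U[n + 1][i + 1] - U[n][i] < 0:
                return n                         # holding past n loses value in expectation
            i += path[n]                         # 1 = up-move, 0 = down-move
        return N

    print("U_0 =", U[0][0])                      # 1.75
    print(tau_max((0, 0)), tau_max((1, 1)))      # 1 (down path), 2 (up path)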
Generalization
The Doob decomposition theorem can be generalized from probability spaces to σ-finite measure spaces.
Theorems regarding stochastic processes
Martingale theory
Articles containing proofs | Doob decomposition theorem | Mathematics | 925 |
176,399 | https://en.wikipedia.org/wiki/Zeeman%20effect | The Zeeman effect ( , ) is the splitting of a spectral line into several components in the presence of a static magnetic field. It is caused by interaction of the magnetic field with the magnetic moment of the atomic electron associated to its orbital motion and spin; this interaction shifts some orbital energies more than others, resulting in the split spectrum. The effect is named after the Dutch physicist Pieter Zeeman, who discovered it in 1896 and received a Nobel Prize in Physics for this discovery. It is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric field. Also similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden (in the dipole approximation), as governed by the selection rules.
Since the distance between the Zeeman sub-levels is a function of magnetic field strength, this effect can be used to measure magnetic field strength, e.g. that of the Sun and other stars or in laboratory plasmas.
Discovery
In 1896 Zeeman learned that his laboratory had one of Henry Augustus Rowland's highest resolving diffraction gratings. Zeeman had read James Clerk Maxwell's article in Encyclopædia Britannica describing Michael Faraday's failed attempts to influence light with magnetism. Zeeman wondered if the new spectrographic techniques could succeed where early efforts had not.
When illuminated by a slit shaped source, the grating produces a long array of slit images corresponding to different wavelengths. Zeeman placed a piece of asbestos soaked in salt water into a Bunsen burner flame at the source of the grating: he could easily see two lines for sodium light emission. Energizing a 10 kilogauss magnet around the flame he observed a slight broadening of the sodium images.
When Zeeman switched to cadmium as the source he observed the images split when the magnet was energized. These splittings could be analyzed with Hendrik Lorentz's then-new electron theory. In retrospect we now know that the magnetic effects on sodium require quantum mechanical treatment. Zeeman and Lorentz were awarded the 1902 Nobel prize; in his acceptance speech Zeeman explained his apparatus and showed slides of the spectrographic images.
Nomenclature
Historically, one distinguishes between the normal and an anomalous Zeeman effect (discovered by Thomas Preston in Dublin, Ireland). The anomalous effect appears on transitions where the net spin of the electrons is non-zero. It was called "anomalous" because the electron spin had not yet been discovered, and so there was no good explanation for it at the time that Zeeman observed the effect. Wolfgang Pauli recalled that when asked by a colleague as to why he looked unhappy, he replied, "How can one look happy when he is thinking about the anomalous Zeeman effect?"
At higher magnetic field strength the effect ceases to be linear. At even higher field strengths, comparable to the strength of the atom's internal field, the electron coupling is disturbed and the spectral lines rearrange. This is called the Paschen–Back effect.
In the modern scientific literature, these terms are rarely used, with a tendency to use just the "Zeeman effect". Another rarely used term is the inverse Zeeman effect, referring to the Zeeman effect in an absorption spectral line.
A similar effect, splitting of the nuclear energy levels in the presence of a magnetic field, is referred to as the nuclear Zeeman effect.
Theoretical presentation
The total Hamiltonian of an atom in a magnetic field is

$H = H_0 + V_M,$

where $H_0$ is the unperturbed Hamiltonian of the atom, and $V_M$ is the perturbation due to the magnetic field:

$V_M = -\vec{\mu} \cdot \vec{B},$

where $\vec{\mu}$ is the magnetic moment of the atom. The magnetic moment consists of the electronic and nuclear parts; however, the latter is many orders of magnitude smaller and will be neglected here. Therefore,

$\vec{\mu} \approx -\frac{\mu_B g_J}{\hbar} \vec{J},$

where $\mu_B$ is the Bohr magneton, $\vec{J}$ is the total electronic angular momentum, and $g_J$ is the Landé g-factor.
A more accurate approach is to take into account that the operator of the magnetic moment of an electron is a sum of the contributions of the orbital angular momentum $\vec{L}$ and the spin angular momentum $\vec{S}$, with each multiplied by the appropriate gyromagnetic ratio:

$\vec{\mu} = -\frac{\mu_B (g_l \vec{L} + g_s \vec{S})}{\hbar},$

where $g_l = 1$ and $g_s \approx 2.0023193$ (the latter is called the anomalous gyromagnetic ratio; the deviation of the value from 2 is due to the effects of quantum electrodynamics). In the case of the LS coupling, one can sum over all electrons in the atom:

$g_J \vec{J} = \left\langle \sum_i (g_l \vec{l}_i + g_s \vec{s}_i) \right\rangle = \left\langle g_l \vec{L} + g_s \vec{S} \right\rangle,$

where $\vec{L}$ and $\vec{S}$ are the total orbital angular momentum and spin of the atom, and averaging is done over a state with a given value of the total angular momentum.
If the interaction term $V_M$ is small (less than the fine structure), it can be treated as a perturbation; this is the Zeeman effect proper. In the Paschen–Back effect, described below, $V_M$ exceeds the LS coupling significantly (but is still small compared to $H_0$). In ultra-strong magnetic fields, the magnetic-field interaction may exceed $H_0$, in which case the atom can no longer exist in its normal meaning, and one talks about Landau levels instead. There are intermediate cases which are more complex than these limit cases.
Weak field (Zeeman effect)
If the spin–orbit interaction dominates over the effect of the external magnetic field, $\vec{L}$ and $\vec{S}$ are not separately conserved, only the total angular momentum $\vec{J} = \vec{L} + \vec{S}$ is. The spin and orbital angular momentum vectors can be thought of as precessing about the (fixed) total angular momentum vector $\vec{J}$. The (time-)"averaged" spin vector is then the projection of the spin onto the direction of $\vec{J}$:

$\vec{S}_{\mathrm{avg}} = \frac{(\vec{S} \cdot \vec{J})}{J^2} \vec{J},$

and for the (time-)"averaged" orbital vector:

$\vec{L}_{\mathrm{avg}} = \frac{(\vec{L} \cdot \vec{J})}{J^2} \vec{J}.$

Thus,

$\langle V_M \rangle = \frac{\mu_B}{\hbar} \vec{J} \cdot \vec{B} \left( g_l \frac{\vec{L} \cdot \vec{J}}{J^2} + g_s \frac{\vec{S} \cdot \vec{J}}{J^2} \right).$

Using $\vec{L} = \vec{J} - \vec{S}$ and squaring both sides, we get

$\vec{S} \cdot \vec{J} = \frac{1}{2}(J^2 + S^2 - L^2) = \frac{\hbar^2}{2}\left[ j(j+1) - l(l+1) + s(s+1) \right],$

and using $\vec{S} = \vec{J} - \vec{L}$ and squaring both sides, we get

$\vec{L} \cdot \vec{J} = \frac{1}{2}(J^2 + L^2 - S^2) = \frac{\hbar^2}{2}\left[ j(j+1) + l(l+1) - s(s+1) \right].$

Combining everything and taking $J_z = \hbar m_j$, we obtain the magnetic potential energy of the atom in the applied external magnetic field,

$V_M = \mu_B B m_j \left[ g_l \frac{j(j+1) + l(l+1) - s(s+1)}{2j(j+1)} + g_s \frac{j(j+1) - l(l+1) + s(s+1)}{2j(j+1)} \right] = \mu_B B m_j g_j,$

where the quantity in square brackets is the Landé g-factor $g_J$ of the atom ($g_l = 1$ and $g_s \approx 2$) and $m_j$ is the z-component of the total angular momentum.
For a single electron above filled shells, $s = 1/2$ and $j = l \pm \frac{1}{2}$, so the Landé g-factor can be simplified into:

$g_j = 1 \pm \frac{g_s - 1}{2l + 1}.$

Taking $V_M$ to be the perturbation, the Zeeman correction to the energy is

$E_Z^{(1)} = \langle n l j m_j | V_M | n l j m_j \rangle = g_j \mu_B B m_j.$
Example: Lyman-alpha transition in hydrogen
The Lyman-alpha transition in hydrogen in the presence of the spin–orbit interaction involves the transitions

$2P_{1/2} \to 1S_{1/2}$

and

$2P_{3/2} \to 1S_{1/2}.$

In the presence of an external magnetic field, the weak-field Zeeman effect splits the 1S1/2 and 2P1/2 levels into 2 states each ($m_j = 1/2, -1/2$) and the 2P3/2 level into 4 states ($m_j = 3/2, 1/2, -1/2, -3/2$). The Landé g-factors for the three levels are:

$g_J = 2$ for $1S_{1/2}$ ($j = 1/2$, $l = 0$)
$g_J = 2/3$ for $2P_{1/2}$ ($j = 1/2$, $l = 1$)
$g_J = 4/3$ for $2P_{3/2}$ ($j = 3/2$, $l = 1$).
Note in particular that the size of the energy splitting is different for the different orbitals, because the gJ values are different. Fine-structure splitting occurs even in the absence of a magnetic field, as it is due to spin–orbit coupling; the Zeeman splitting appears in addition to it in the presence of a magnetic field.
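These three values follow directly from the Landé formula derived above; a quick numerical check in Python, using the g_l = 1, g_s = 2 approximation from the text:

    def lande_g(j, l, s, g_l=1.0, g_s=2.0):
        """Landé g-factor from the bracketed expression above."""
        return (g_l * (j * (j + 1) + l * (l + 1) - s * (s + 1)) / (2 * j * (j + 1))
                + g_s * (j * (j + 1) - l * (l + 1) + s * (s + 1)) / (2 * j * (j + 1)))

    # Single electron (s = 1/2): reproduces g_j = 1 +/- (g_s - 1)/(2l + 1)
    print(lande_g(0.5, 0, 0.5))   # 2.0      (S1/2)
    print(lande_g(0.5, 1, 0.5))   # ~0.6667  (P1/2)
    print(lande_g(1.5, 1, 0.5))   # ~1.3333  (P3/2)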
Strong field (Paschen–Back effect)
The Paschen–Back effect is the splitting of atomic energy levels in the presence of a strong magnetic field. This occurs when an external magnetic field is sufficiently strong to disrupt the coupling between orbital ($\vec{L}$) and spin ($\vec{S}$) angular momenta. This effect is the strong-field limit of the Zeeman effect. When $s = 0$ (zero net spin), the two effects are equivalent. The effect was named after the German physicists Friedrich Paschen and Ernst E. A. Back.
When the magnetic-field perturbation significantly exceeds the spin–orbit interaction, one can safely assume $[H_0, \vec{S}] = 0$. This allows the expectation values of $L_z$ and $S_z$ to be easily evaluated for a state $|\psi\rangle = |n, l, m_l, m_s\rangle$. The energies are simply

$E_z = \left\langle \psi \left| H_0 + \frac{B_z \mu_B}{\hbar} (L_z + g_s S_z) \right| \psi \right\rangle = E_0 + B_z \mu_B (m_l + g_s m_s).$
The above may be read as implying that the LS-coupling is completely broken by the external field. However $m_l$ and $m_s$ are still "good" quantum numbers. Together with the selection rules for an electric dipole transition, i.e., $\Delta s = 0$, $\Delta m_s = 0$, $\Delta l = \pm 1$, $\Delta m_l = 0, \pm 1$, this allows the spin degree of freedom to be ignored altogether. As a result, only three spectral lines will be visible, corresponding to the $\Delta m_l = 0, \pm 1$ selection rule. The splitting is independent of the unperturbed energies and electronic configurations of the levels being considered.
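The collapse to three lines can be checked by brute force: with level shifts proportional to m_l + g_s m_s (taking g_s = 2) and the dipole rules above, every allowed p → s transition shifts by −1, 0 or +1 in units of μ_B B. A minimal enumeration in Python:

    # Energy shift (in units of mu_B * B) of |m_l, m_s> is m_l + 2*m_s.
    # Enumerate allowed p -> s electric dipole transitions and their line shifts.
    shifts = set()
    for m_s in (-0.5, 0.5):                 # delta m_s = 0: spin unchanged
        for m_l in (-1, 0, 1):              # initial p state
            for m_l2 in (0,):               # final s state
                if abs(m_l2 - m_l) <= 1:    # delta m_l = 0, +/-1
                    upper = m_l + 2 * m_s
                    lower = m_l2 + 2 * m_s
                    shifts.add(upper - lower)
    print(sorted(shifts))                   # [-1.0, 0.0, 1.0] -> three lines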
More precisely, if $s \neq 0$, each of these three components is actually a group of several transitions due to the residual spin–orbit coupling and relativistic corrections (which are of the same order, known as 'fine structure'). The first-order perturbation theory with these corrections yields the following formula for the hydrogen atom in the Paschen–Back limit:

$E_{z+\mathrm{fs}} = E_z + \frac{13.6\,\text{eV}\,\alpha^2}{n^3} \left\{ \frac{3}{4n} - \left[ \frac{l(l+1) - m_l m_s}{l(l + 1/2)(l+1)} \right] \right\}.$
Example: Lyman-alpha transition in hydrogen
In this example, the fine-structure corrections are ignored.
Intermediate field for j = 1/2
In the magnetic dipole approximation, the Hamiltonian which includes both the hyperfine and Zeeman interactions is

$H = h A\, \vec{I} \cdot \vec{J} + \mu_B g_J B J_z + \mu_N g_I B I_z,$

where the angular momentum operators are taken dimensionless (in units of $\hbar$) and the sign of each magnetic moment is absorbed into its g-factor, $A$ is the hyperfine splitting (in Hz) at zero applied magnetic field, $\mu_B$ and $\mu_N$ are the Bohr magneton and nuclear magneton respectively, $\vec{J}$ and $\vec{I}$ are the electron and nuclear angular momentum operators and $g_J$ is the Landé g-factor:

$g_J = g_L \frac{J(J+1) + L(L+1) - S(S+1)}{2J(J+1)} + g_S \frac{J(J+1) - L(L+1) + S(S+1)}{2J(J+1)}.$
In the case of weak magnetic fields, the Zeeman interaction can be treated as a perturbation to the $|F, m_F\rangle$ basis. In the high field regime, the magnetic field becomes so strong that the Zeeman effect will dominate, and one must use a more complete basis of $|I, J, m_I, m_J\rangle$ or just $|m_I, m_J\rangle$ since $I$ and $J$ will be constant within a given level.
To get the complete picture, including intermediate field strengths, we must consider eigenstates which are superpositions of the $|F, m_F\rangle$ and $|m_I, m_J\rangle$ basis states. For $J = 1/2$, the Hamiltonian can be solved analytically, resulting in the Breit–Rabi formula (named after Gregory Breit and Isidor Isaac Rabi). Notably, the electric quadrupole interaction is zero for $L = 0$ ($J = 1/2$), so this formula is fairly accurate.
We now utilize quantum mechanical ladder operators, which are defined for a general angular momentum operator $L$ as

$L_{\pm} \equiv L_x \pm i L_y.$

These ladder operators have the property

$L_{\pm} |L, m_L\rangle = \sqrt{(L \mp m_L)(L \pm m_L + 1)}\; |L, m_L \pm 1\rangle$

as long as $m_L$ lies in the range $-L \le m_L \le L$ (otherwise, they return zero). Using the ladder operators $J_{\pm}$ and $I_{\pm}$,
we can rewrite the Hamiltonian as

$H = h A\, I_z J_z + \frac{h A}{2}(J_+ I_- + J_- I_+) + \mu_B g_J B J_z + \mu_N g_I B I_z.$
We can now see that at all times, the total angular momentum projection $m_F = m_J + m_I$ will be conserved. This is because both $J_z$ and $I_z$ leave states with definite $m_J$ and $m_I$ unchanged, while $J_+ I_-$ and $J_- I_+$ either increase $m_J$ and decrease $m_I$ or vice versa, so the sum is always unaffected. Furthermore, since $J = 1/2$ there are only two possible values of $m_J$, which are $\pm 1/2$. Therefore, for every value of $m_F$ there are only two possible states, and we can define them as the basis:

$|\pm\rangle \equiv \left| m_J = \pm \tfrac{1}{2},\; m_I = m_F \mp \tfrac{1}{2} \right\rangle.$

This pair of states is a two-level quantum mechanical system. Now we can determine the matrix elements of the Hamiltonian:

$\langle \pm | H | \pm \rangle = -\frac{h A}{4} \pm \frac{h A\, m_F}{2} \pm \frac{\mu_B g_J B}{2} + \mu_N g_I B \left( m_F \mp \tfrac{1}{2} \right),$

$\langle \pm | H | \mp \rangle = \frac{h A}{2} \sqrt{\left( I + \tfrac{1}{2} \right)^2 - m_F^2}.$
Solving for the eigenvalues of this matrix – as can be done by hand (see two-level quantum mechanical system), or more easily, with a computer algebra system – we arrive at the energy shifts:

$\Delta E_{F = I \pm 1/2} = -\frac{h \Delta W}{2(2I + 1)} + \mu_N g_I m_F B \pm \frac{h \Delta W}{2} \sqrt{1 + \frac{4 m_F x}{2I + 1} + x^2},$

where $\Delta W = A \left( I + \tfrac{1}{2} \right)$ is the splitting (in units of Hz) between the two hyperfine sublevels in the absence of magnetic field $B$, and

$x \equiv \frac{B (\mu_B g_J - \mu_N g_I)}{h \Delta W}$

is referred to as the 'field strength parameter' (Note: for $m_F = \pm (I + \tfrac{1}{2})$ the expression under the square root is an exact square, $(1 \pm x)^2$, and so the last term should be replaced by $+\frac{h \Delta W}{2}(1 \pm x)$). This equation is known as the Breit–Rabi formula and is useful for systems with one valence electron in an $s$ ($l = 0$) level.
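The formula is straightforward to evaluate numerically. The sketch below plugs in the familiar hydrogen ground-state numbers (I = 1/2, the 1420 MHz hyperfine splitting, g_J ≈ 2.0023 and the proton g-factor) purely as example inputs; sign conventions for g_I vary between references, so treat the linear nuclear term as illustrative:

    from math import sqrt

    MU_B = 9.274e-24      # Bohr magneton, J/T
    MU_N = 5.051e-27      # nuclear magneton, J/T
    H_PLANCK = 6.626e-34  # Planck constant, J s

    def breit_rabi(B, I=0.5, m=0.5, sign=+1, dW=1.4204e9, gJ=2.0023, gI=5.5857):
        """Energy shift (J) of the |F = I +/- 1/2, m> level at field B (tesla)."""
        x = (gJ * MU_B - gI * MU_N) * B / (H_PLANCK * dW)   # field strength parameter
        return (-H_PLANCK * dW / (2 * (2 * I + 1)) + gI * MU_N * m * B
                + sign * (H_PLANCK * dW / 2)
                * sqrt(1 + 4 * m * x / (2 * I + 1) + x * x))

    for B in (0.0, 0.05, 0.5):                              # upper (F = 1, m = 1/2) level
        print(B, "T:", breit_rabi(B) / H_PLANCK / 1e6, "MHz")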
Note that the index $F$ in $\Delta E_{F = I \pm 1/2}$ should be considered not as total angular momentum of the atom but as asymptotic total angular momentum. It is equal to total angular momentum only if $B = 0$;
otherwise eigenvectors corresponding to different eigenvalues of the Hamiltonian are superpositions of states with different $F$ but equal $m_F$ (the only exceptions are $|F = I + 1/2, m_F = \pm(I + 1/2)\rangle$).
Applications
Astrophysics
George Ellery Hale was the first to notice the Zeeman effect in the solar spectra, indicating the existence of strong magnetic fields in sunspots. Such fields can be quite high, on the order of 0.1 tesla or higher. Today, the Zeeman effect is used to produce magnetograms showing the variation of magnetic field on the Sun, and to analyse the magnetic field geometries in other stars.
Laser cooling
The Zeeman effect is utilized in many laser cooling applications such as a magneto-optical trap and the Zeeman slower.
Spintronics
Zeeman-energy mediated coupling of spin and orbital motions
is used in spintronics for controlling electron spins in quantum dots through electric dipole spin resonance.
Metrology
Old high-precision frequency standards, i.e. hyperfine structure transition-based atomic clocks, may require periodic fine-tuning due to exposure to magnetic fields. This is carried out by measuring the Zeeman effect on specific hyperfine structure transition levels of the source element (cesium) and applying a uniformly precise, low-strength magnetic field to said source, in a process known as degaussing.
The Zeeman effect may also be utilized to improve accuracy in atomic absorption spectroscopy.
Biology
A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect.
Nuclear spectroscopy
The nuclear Zeeman effect is important in such applications as nuclear magnetic resonance spectroscopy, magnetic resonance imaging (MRI), and Mössbauer spectroscopy.
Other
The electron spin resonance spectroscopy is based on the Zeeman effect.
Demonstrations
The Zeeman effect can be demonstrated by placing a sodium vapor source in a powerful electromagnet and viewing a sodium vapor lamp through the magnet opening. With the magnet off, the sodium vapor source will block the lamp light; when the magnet is turned on the lamp light will be visible through the vapor.
The sodium vapor can be created by sealing sodium metal in an evacuated glass tube and heating it while the tube is in the magnet.
Alternatively, salt (sodium chloride) on a ceramic stick can be placed in the flame of a Bunsen burner as the sodium vapor source. When the magnetic field is energized, the lamp image will be brighter. However, the magnetic field also affects the flame, making the observation depend upon more than just the Zeeman effect. These issues also plagued Zeeman's original work; he devoted considerable effort to ensure his observations were truly an effect of magnetism on light emission.
When salt is added to the Bunsen burner, it dissociates to give sodium and chloride. The sodium atoms are excited by photons from the sodium vapour lamp, with electrons excited from 3s to 3p states, absorbing light in the process. The sodium vapour lamp emits light at 589 nm, which has precisely the energy needed to excite an electron of a sodium atom; an atom of another element, such as chlorine, would not absorb the light and would cast no shadow. When a magnetic field is applied, the Zeeman effect splits the spectral line of sodium into several components, meaning the energy difference between the 3s and 3p atomic orbitals changes. As the sodium vapour lamp no longer delivers precisely the right frequency, the light is not absorbed and passes through, causing the shadow to dim. As the magnetic field strength is increased, the shift in the spectral lines increases and lamp light is transmitted.
See also
Magneto-optic Kerr effect
Voigt effect
Faraday effect
Cotton–Mouton effect
Polarization spectroscopy
Zeeman energy
Stark effect
Lamb shift
Spectroscopy
Quantum magnetism
Foundational quantum physics
Articles containing video clips
Magneto-optic effects | Zeeman effect | Physics,Chemistry,Materials_science | 3,179 |
1,065,357 | https://en.wikipedia.org/wiki/Mental%20status%20examination | The mental status examination (MSE) is an important part of the clinical assessment process in neurological and psychiatric practice. It is a structured way of observing and describing a patient's psychological functioning at a given point in time, under the domains of appearance, attitude, behavior, mood and affect, speech, thought process, thought content, perception, cognition, insight, and judgment. There are some minor variations in the subdivision of the MSE and the sequence and names of MSE domains.
The purpose of the MSE is to obtain a comprehensive cross-sectional description of the patient's mental state, which, when combined with the biographical and historical information of the psychiatric history, allows the clinician to make an accurate diagnosis and formulation, which are required for coherent treatment planning.
The data are collected through a combination of direct and indirect means: unstructured observation while obtaining the biographical and social information, focused questions about current symptoms, and formalised psychological tests.
The MSE is not to be confused with the mini–mental state examination (MMSE), which is a brief neuropsychological screening test for dementia.
Theoretical foundations
The MSE derives from an approach to psychiatry known as descriptive psychopathology or descriptive phenomenology, which developed from the work of the philosopher and psychiatrist Karl Jaspers. From Jaspers' perspective it was assumed that the only way to comprehend a patient's experience is through his or her own description (through an approach of empathic and non-theoretical enquiry), as distinct from an interpretive or psychoanalytic approach which assumes the analyst might understand experiences or processes of which the patient is unaware, such as defense mechanisms or unconscious drives.
In practice, the MSE is a blend of empathic descriptive phenomenology and empirical clinical observation. It has been argued that the term phenomenology has become corrupted in clinical psychiatry: current usage, as a set of supposedly objective descriptions of a psychiatric patient (a synonym for signs and symptoms), is incompatible with the original meaning which was concerned with comprehending a patient's subjective experience.
Application
The mental status examination is a core skill of qualified (mental) health personnel. It is a key part of the initial psychiatric assessment in an outpatient or psychiatric hospital setting.
It is a systematic collection of data based on observation of the patient's behavior while the patient is in the clinician's view during the interview. The purpose is to obtain evidence of symptoms and signs of mental disorders, including danger to self and others, that are present at the time of the interview. Further, information on the patient's insight, judgment, and capacity for abstract reasoning is used to inform decisions about treatment strategy and the choice of an appropriate treatment setting.
It is carried out in the manner of an informal enquiry, using a combination of open and closed questions, supplemented by structured tests to assess cognition. The MSE can also be considered part of the comprehensive physical examination performed by physicians and nurses although it may be performed in a cursory and abbreviated way in non-mental-health settings. Information is usually recorded as free-form text using the standard headings, but brief MSE checklists are available for use in emergency situations, for example, by paramedics or emergency department staff.
The information obtained in the MSE is used, together with the biographical and social information of the psychiatric history, to generate a diagnosis, a psychiatric formulation and a treatment plan.
Domains
The mnemonic ASEPTIC can be used to remember the domains of the MSE:
A - Appearance/Behavior
S - Speech
E - Emotion (Mood and Affect)
P - Perception
T - Thought Content and Process
I - Insight and Judgement
C - Cognition
Appearance
Clinicians assess the physical aspects such as the appearance of a patient, including apparent age, height, weight, and manner of dress and grooming. Colorful or bizarre clothing might suggest mania, while unkempt, dirty clothes might suggest schizophrenia or depression. If the patient appears much older than his or her chronological age this can suggest chronic poor self-care or ill-health. Clothing and accessories of a particular subculture, body modifications, or clothing not typical of the patient's gender, might give clues to personality. Observations of physical appearance might include the physical features of alcoholism or drug abuse, such as signs of malnutrition, nicotine stains, dental erosion, a rash around the mouth from inhalant abuse, or needle track marks from intravenous drug abuse. Observations can also include any odor which might suggest poor personal hygiene due to extreme self-neglect, or alcohol intoxication. Weight loss could also signify a depressive disorder, physical illness, anorexia nervosa or chronic anxiety.
Attitude
Attitude, also known as rapport or cooperation, refers to the patient's approach to the interview process and the quality of information obtained during the assessment. Observations of attitude include whether the patient is cooperative, hostile, open or secretive.
Behavior
Abnormalities of behavior, also called abnormalities of activity, include observations of specific abnormal movements, as well as more general observations of the patient's level of activity and arousal, and observations of the patient's eye contact and gait. Abnormal movements, for example choreiform, athetoid or choreoathetoid movements may indicate a neurological disorder. A tremor or dystonia may indicate a neurological condition or the side effects of antipsychotic medication. The patient may have tics (involuntary but quasi-purposeful movements or vocalizations) which may be a symptom of Tourette's syndrome. There are a range of abnormalities of movement which are typical of catatonia, such as echopraxia, catalepsy, waxy flexibility and paratonia (or gegenhalten). Stereotypies (repetitive purposeless movements such as rocking or head banging) or mannerisms (repetitive quasi-purposeful abnormal movements such as a gesture or abnormal gait) may be a feature of chronic schizophrenia or autism.
More global behavioural abnormalities may be noted, such as an increase in arousal and movement (described as psychomotor agitation or hyperactivity) which might reflect mania or delirium. An inability to sit still might represent akathisia, a side effect of antipsychotic medication. Similarly, a global decrease in arousal and movement (described as psychomotor retardation, akinesia or stupor) might indicate depression or a medical condition such as Parkinson's disease, dementia or delirium. The examiner would also comment on eye movements (repeatedly glancing to one side can suggest that the patient is experiencing hallucinations), and the quality of eye contact (which can provide clues to the patient's emotional state). Lack of eye contact may suggest depression or autism.
Mood and affect
The distinction between mood and affect in the MSE is subject to some disagreement. For example, Trzepacz and Baker (1993) describe affect as "the external and dynamic manifestations of a person's internal emotional state" and mood as "a person's predominant internal state at any one time", whereas Sims (1995) refers to affect as "differentiated specific feelings" and mood as "a more prolonged state or disposition". This article will use the Trzepacz and Baker (1993) definitions, with mood regarded as a current subjective state as described by the patient, and affect as the examiner's inferences of the quality of the patient's emotional state based on objective observation.
Mood is described using the patient's own words, and can also be described in summary terms such as neutral, euthymic, dysphoric, euphoric, angry, anxious or apathetic. Alexithymic individuals may be unable to describe their subjective mood state. An individual who is unable to experience any pleasure may have anhedonia.
Affect is described by labelling the apparent emotion conveyed by the person's nonverbal behavior (anxious, sad etc.), and also by using the parameters of appropriateness, intensity, range, reactivity and mobility. Affect may be described as appropriate or inappropriate to the current situation, and as congruent or incongruent with their thought content. For example, someone who shows a bland affect when describing a very distressing experience would be described as showing incongruent affect, which might suggest schizophrenia. The intensity of the affect may be described as normal, blunted affect, exaggerated, flat, heightened or overly dramatic. A flat or blunted affect is associated with schizophrenia, depression or post-traumatic stress disorder; heightened affect might suggest mania, and an overly dramatic or exaggerated affect might suggest certain personality disorders. Mobility refers to the extent to which affect changes during the interview: the affect may be described as fixed, mobile, immobile, constricted/restricted or labile. The person may show a full range of affect, in other words a wide range of emotional expression during the assessment, or may be described as having restricted affect. The affect may also be described as reactive, in other words changing flexibly and appropriately with the flow of conversation, or as unreactive. A bland lack of concern for one's disability may be described as showing la belle indifférence, a feature of conversion disorder, which is historically termed "hysteria" in older texts.
Speech
Speech is assessed by observing the patient's spontaneous speech, and also by using structured tests of specific language functions.
This heading is concerned with the production of speech rather than the content of speech, which is addressed under thought process and thought content (see below).
When observing the patient's spontaneous speech, the interviewer will note and comment on paralinguistic features such as the loudness, rhythm, prosody, intonation, pitch, phonation, articulation, quantity, rate, spontaneity and latency of speech. Many acoustic features have been shown to be significantly altered in mental health disorders.
A structured assessment of speech includes an assessment of expressive language by asking the patient to name objects, repeat short sentences, or produce as many words as possible from a certain category in a set time. Simple language tests also form part of the mini-mental state examination. In practice, the structured assessment of receptive and expressive language is often reported under Cognition (see below).
Language assessment will allow the recognition of medical conditions presenting with aphonia or dysarthria, neurological conditions such as stroke or dementia presenting with aphasia, and specific language disorders such as stuttering, cluttering or mutism. People with autism spectrum disorders may have abnormalities in paralinguistic and pragmatic aspects of their speech. Echolalia (repetition of another person's words) and palilalia (repetition of the subject's own words) can be heard with patients with autism, schizophrenia or Alzheimer's disease. A person with schizophrenia might use neologisms, which are made-up words which have a specific meaning to the person using them.
Speech assessment also contributes to assessment of mood, for example people with mania or anxiety may have rapid, loud and pressured speech; on the other hand depressed patients will typically have a prolonged speech latency and speak in a slow, quiet and hesitant manner.
Thought process
Thought process in the MSE refers to the quantity, tempo (rate of flow) and form (or logical coherence) of thought. Thought process cannot be directly observed but can only be described by the patient, or inferred from a patient's speech. The form of thought is captured in this category: one should describe the thought form as goal-directed from A to B (normal) versus showing formal thought disorder. A pattern of interruption or disorganization of thought processes is broadly referred to as formal thought disorder, and might be described more specifically as thought blocking, fusion, loosening of associations, tangential thinking, derailment of thought, or knight's move thinking. Thought may be described as 'circumstantial' when a patient includes a great deal of irrelevant detail and makes frequent diversions, but remains focused on the broad topic. Circumstantial thinking might be observed in anxiety disorders or certain kinds of personality disorders. Regarding the tempo of thought, some people may experience 'flight of ideas' (a manic symptom), when their thoughts are so rapid that their speech seems incoherent, although in flight of ideas a careful observer can discern a chain of poetic, syllabic, rhyming associations in the patient's speech (i.e., "I love to eat peaches, beach beaches, sand castles fall in the waves, braves are going to the finals, fee fi fo fum. Golden egg."). Alternatively an individual may be described as having retarded or inhibited thinking, in which thoughts seem to progress slowly with few associations. Poverty of thought is a global reduction in the quantity of thought and one of the negative symptoms of schizophrenia. It can also be a feature of severe depression or dementia. A patient with dementia might also experience thought perseveration, a pattern in which a person keeps returning to the same limited set of ideas.
Thought content
A description of thought content would be the largest section of the MSE report. It would describe a patient's suicidal thoughts, depressed cognition, delusions, overvalued ideas, obsessions, phobias and preoccupations. One should separate thought content into pathological versus non-pathological thought. Importantly, one should specify whether suicidal thoughts are intrusive and unwanted, without any capacity to be acted upon (mens rea), or are thoughts that may lead to the act of suicide (actus reus).
Abnormalities of thought content are established by exploring individuals' thoughts in an open-ended conversational manner with regard to their intensity, salience, the emotions associated with the thoughts, the extent to which the thoughts are experienced as one's own and under one's control, and the degree of belief or conviction associated with the thoughts.
Delusions
A delusion has three essential qualities: it can be defined as "a false, unshakeable idea or belief (1) which is out of keeping with the patient's educational, cultural and social background (2) ... held with extraordinary conviction and subjective certainty (3)", and is a core feature of psychotic disorders. For instance an alliance to a particular political party, or sports team would not be considered a delusion in some societies.
The patient's delusions may be described within the SEGUE PM mnemonic as: somatic, erotomanic delusions, grandiose delusions, unspecified delusions, envious delusions (c.f. delusional jealousy), persecutory or paranoid delusions, or multifactorial delusions. There are several other forms of delusions, these include descriptions such as: delusions of reference, or delusional misidentification, or delusional memories (e.g., "I was a goat last year") among others.
Delusional symptoms can be reported on a continuum: full symptoms (with no insight), partial symptoms (where the patient begins to question the delusions), or nil symptoms (where the symptoms are resolved); if, after complete treatment, there remain delusional symptoms or ideas that could develop into delusions, these can be characterized as residual symptoms.
Delusions can suggest several diseases such as schizophrenia, schizophreniform disorder, brief psychotic disorder, mania, depression with psychotic features, or delusional disorders. One can differentiate delusional disorders from schizophrenia, for example, by the age of onset: delusional disorders begin later in life, with a more complete and unaffected personality, where the delusion may only partially impact the person's life and be fairly encapsulated off from the rest of their formed personality (for example, believing that a spider lives in their hair, without this belief affecting their work, relationships, or education). Schizophrenia, by contrast, typically arises earlier in life with a disintegration of personality and a failure to cope with work, relationships, or education.
Other features differentiate diseases with delusions as well. Delusions may be described as mood-congruent (the delusional content in keeping with the mood), typical of manic or depressive psychosis, or mood-incongruent (delusional content not in keeping with the mood) which are more typical of schizophrenia. Delusions of control, or passivity experiences (in which the individual has the experience of the mind or body being under the influence or control of some kind of external force or agency), are typical of schizophrenia. Examples of this include experiences of thought withdrawal, thought insertion, thought broadcasting, and somatic passivity. Schneiderian first rank symptoms are a set of delusions and hallucinations which have been said to be highly suggestive of a diagnosis of schizophrenia. Delusions of guilt, delusions of poverty, and nihilistic delusions (belief that one has no mind or is already dead) are typical of depressive psychosis.
Overvalued Ideas
An overvalued idea is an emotionally charged belief that may be held with sufficient conviction to make the believer emotionally charged or aggressive, but that fails to possess all three characteristics of delusion, most importantly incongruity with cultural norms. Therefore, any strong, fixed, false, but culturally normative belief can be considered an "overvalued idea". Hypochondriasis is an overvalued idea that one has an illness, dysmorphophobia that a part of one's body is abnormal, and anorexia nervosa that one is overweight or fat.
Obsessions
An obsession is an "undesired, unpleasant, intrusive thought that cannot be suppressed through the patient's volition", but unlike passivity experiences described above, they are not experienced as imposed from outside the patient's mind. Obsessions are typically intrusive thoughts of violence, injury, dirt or sex, or obsessive ruminations on intellectual themes. A person can also describe obsessional doubt, with intrusive worries about whether they have made the wrong decision, or forgotten to do something, for example turn off the gas or lock the house. In obsessive-compulsive disorder, the individual experiences obsessions with or without compulsions (a sense of having to carry out certain ritualized and senseless actions against their wishes).
Phobias
A phobia is "a dread of an object or situation that does not in reality pose any threat", and is distinct from a delusion in that the patient is aware that the fear is irrational. A phobia is usually highly specific to certain situations and will usually be reported by the patient rather than being observed by the clinician in the assessment interview.
Preoccupations
Preoccupations are thoughts which are not fixed, false or intrusive, but have an undue prominence in the person's mind. Clinically significant preoccupations would include thoughts of suicide, homicidal thoughts, suspicious or fearful beliefs associated with certain personality disorders, depressive beliefs (for example that one is unloved or a failure), or the cognitive distortions of anxiety and depression.
Suicidal thoughts
The MSE contributes to clinical risk assessment by including a thorough exploration of any suicidal or hostile thought content. Assessment of suicide risk includes detailed questioning about the nature of the person's suicidal thoughts, beliefs about death, reasons for living, and whether the person has made any specific plans to end his or her life. The most important questions to ask are: do you have suicidal feelings now; have you ever attempted suicide (highly correlated with future suicide attempts); do you have plans to commit suicide in the future; and do you have any deadlines by which you may commit suicide (e.g., a numerology calculation, a doomsday belief, Mother's Day, an anniversary, Christmas)?
Perceptions
A perception in this context is any sensory experience, and the three broad types of perceptual disturbance are hallucinations, pseudohallucinations and illusions. A hallucination is defined as a sensory perception in the absence of any external stimulus, and is experienced in external or objective space (i.e. experienced by the subject as real). An illusion is defined as a false sensory perception in the presence of an external stimulus, in other words a distortion of a sensory experience, and may be recognized as such by the subject. A pseudohallucination is experienced in internal or subjective space (for example as "voices in my head") and is regarded as akin to fantasy.
Other sensory abnormalities include a distortion of the patient's sense of time, for example déjà vu, or a distortion of the sense of self (depersonalization) or sense of reality (derealization).
Hallucinations can occur in any of the five senses, although auditory and visual hallucinations are encountered more frequently than tactile (touch), olfactory (smell) or gustatory (taste) hallucinations.
Auditory hallucinations are typical of psychoses: third-person hallucinations (i.e. voices talking about the patient) and hearing one's thoughts spoken aloud (gedankenlautwerden or écho de la pensée) are among the Schneiderian first rank symptoms indicative of schizophrenia, whereas second-person hallucinations (voices talking to the patient), which may threaten or insult the patient or command suicide, can be a feature of psychotic depression or schizophrenia. Visual hallucinations are generally suggestive of organic conditions such as epilepsy, drug intoxication or drug withdrawal. Many of the visual effects of hallucinogenic drugs are more correctly described as visual illusions or visual pseudohallucinations, as they are distortions of sensory experiences and are not experienced as existing in objective reality. Auditory pseudohallucinations are suggestive of dissociative disorders. Déjà vu, derealization and depersonalization are associated with temporal lobe epilepsy and dissociative disorders.
Cognition
This section of the MSE covers the patient's level of alertness, orientation, attention, memory, visuospatial functioning, language functions and executive functions. Unlike other sections of the MSE, use is made of structured tests in addition to unstructured observation.
Alertness is a global observation of level of consciousness, i.e. awareness of and responsiveness to the environment, and this might be described as alert, clouded, drowsy, or stuporous. Orientation is assessed by asking the patient where he or she is (for example what building, town and state) and what time it is (time, day, date).
Attention and concentration are assessed by several tests, most commonly the serial sevens test (subtracting 7 from 100, then 7 from each successive difference, five times). Alternatives include spelling a five-letter word backwards, saying the months or days of the week in reverse order, serial threes (subtracting 3 from 20, five times), and testing digit span. Memory is assessed in terms of immediate registration (repeating a set of words), short-term memory (recalling the set of words after an interval, or recalling a short paragraph), and long-term memory (recollection of well known historical or geographical facts). Visuospatial functioning can be assessed by the ability to copy a diagram, draw a clock face, or draw a map of the consulting room. Language is assessed through the ability to name objects, repeat phrases, and by observing the individual's spontaneous speech and response to instructions. Executive functioning can be screened for by asking the "similarities" questions ("what do x and y have in common?") and by means of a verbal fluency task (e.g. "list as many words as you can starting with the letter F, in one minute"). The mini-mental state examination is a simple structured cognitive assessment which is in widespread use as a component of the MSE.
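Purely as an illustration of the arithmetic behind the serial sevens and serial threes tasks, the following sketch (in Python, with invented function names) generates the expected answer sequence and counts matching responses; real clinical scoring conventions vary, and some clinicians credit a correct subtraction from the previous answer even when that answer was wrong.

```python
def expected_sequence(start: int = 100, step: int = 7, n: int = 5) -> list[int]:
    """Expected serial sevens answers: 93, 86, 79, 72, 65."""
    answers, value = [], start
    for _ in range(n):
        value -= step
        answers.append(value)
    return answers

def count_correct(responses: list[int], start: int = 100, step: int = 7) -> int:
    """Count responses matching the expected sequence, position by position."""
    expected = expected_sequence(start, step, n=len(responses))
    return sum(1 for got, want in zip(responses, expected) if got == want)

print(expected_sequence())                  # [93, 86, 79, 72, 65]
print(expected_sequence(start=20, step=3))  # serial threes: [17, 14, 11, 8, 5]
print(count_correct([93, 86, 78, 71, 65]))  # 3 of 5 correct
```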
Mild impairment of attention and concentration may occur in any mental illness where people are anxious and distractible (including psychotic states), but more extensive cognitive abnormalities are likely to indicate a gross disturbance of brain functioning such as delirium, dementia or intoxication. Specific language abnormalities may be associated with pathology in Wernicke's area or Broca's area of the brain. In Korsakoff's syndrome there is dramatic memory impairment with relative preservation of other cognitive functions.
Visuospatial or constructional abnormalities here may be associated with parietal lobe pathology, and abnormalities in executive functioning tests may indicate frontal lobe pathology. This kind of brief cognitive testing is regarded as a screening process only, and any abnormalities are more carefully assessed using formal neuropsychological testing.
The MSE may include a brief neuropsychiatric examination in some situations. Frontal lobe pathology is suggested if the person cannot repetitively execute a motor sequence (e.g. "paper-scissors-rock").
The posterior columns are assessed by the person's ability to feel the vibrations of a tuning fork on the wrists and ankles.
The parietal lobe can be assessed by the person's ability to identify objects by touch alone and with eyes closed.
A cerebellar disorder may be present if the person cannot stand with arms extended, feet touching and eyes closed without swaying (Romberg's sign); if there is a tremor when the person reaches for an object; or if he or she is unable to touch a fixed point, close the eyes and touch the same point again.
Pathology in the basal ganglia may be indicated by rigidity and resistance to movement of the limbs, and by the presence of characteristic involuntary movements.
A lesion in the posterior fossa can be detected by asking the patient to roll his or her eyes upwards (Parinaud's syndrome).
Focal neurological signs such as these might reflect the effects of some prescribed psychiatric medications, chronic drug or alcohol use, head injuries, tumors or other brain disorders.
Insight
The person's understanding of his or her mental illness is evaluated by exploring his or her explanatory account of the problem, and understanding of the treatment options. In this context, insight can be said to have three components: recognition that one has a mental illness, compliance with treatment, and the ability to re-label unusual mental events (such as delusions and hallucinations) as pathological. As insight is on a continuum, the clinician should not describe it as simply present or absent, but should report the patient's explanatory account descriptively.
Impaired insight is characteristic of psychosis and dementia, and is an important consideration in treatment planning and in assessing the capacity to consent to treatment. Anosognosia is the clinical term for the condition in which the patient is unaware of their neurological deficit or psychiatric condition.
Judgment
Judgment refers to the patient's capacity to make sound, reasoned and responsible decisions. Judgment should be described with reference to the functions or domains that are intact versus impaired (e.g., poor judgment confined to petty theft, with preserved functioning in relationships, work, and academics).
Traditionally, the MSE included the use of standard hypothetical questions such as "what would you do if you found a stamped, addressed envelope lying in the street?"; however contemporary practice is to inquire about how the patient has responded or would respond to real-life challenges and contingencies. Assessment would take into account the individual's executive system capacity in terms of impulsiveness, social cognition, self-awareness and planning ability.
Impaired judgment is not specific to any diagnosis but may be a prominent feature of disorders affecting the frontal lobe of the brain. If a person's judgment is impaired due to mental illness, there might be implications for the person's safety or the safety of others.
Cultural considerations
There are potential problems when the MSE is applied in a cross-cultural context, when the clinician and patient are from different cultural backgrounds. For example, the patient's culture might have different norms for appearance, behavior and display of emotions. Culturally normative spiritual and religious beliefs need to be distinguished from delusions and hallucinations; the two may seem similar to one who does not understand that they have different roots. Cognitive assessment must also take the patient's language and educational background into account. A clinician's racial bias is another potential confounder. When working with Aboriginal patients, consultation with cultural leaders in the community or with experienced clinicians can help determine whether relevant cultural phenomena have been taken into account in completing the MSE, and what else should be considered from a cross-cultural perspective.
Children
There are particular challenges in carrying out an MSE with young children and others with limited language such as people with intellectual impairment. The examiner would explore and clarify the individual's use of words to describe mood, thought content or perceptions, as words may be used idiosyncratically with a different meaning from that assumed by the examiner. In this group, tools such as play materials, puppets, art materials or diagrams (for instance with multiple choices of facial expressions depicting emotions) may be used to facilitate recall and explanation of experiences.
See also
Diagnostic classification and rating scales used in psychiatry
Diagnostic and Statistical Manual of Mental Disorders
DSM-IV Codes
Glossary of psychiatry
Self-administered Gerocognitive Examination (SAGE)
Footnotes
References
Adams, Yolonda, et al. (2010). Principles of Practice in Mental Health Assessment with Aboriginal Australians.
Further reading
External links
University of Utah Medical School: Video clips demonstrating cognitive assessment
Principles of Practice in Mental Health Assessment with Aboriginal Australians
Psychiatric assessment
Clinical psychology
Medical diagnosis
Medical mnemonics | Mental status examination | Biology | 6,131 |
20,023,434 | https://en.wikipedia.org/wiki/List%20of%20VAX%20computers | Between 1977 and 2000, Digital Equipment Corporation (DEC) produced a wide range of computer systems under the VAX brand, all based on various implementations of the DEC-proprietary instruction set architecture of the same name.
VAX-11
VAX-11/780
VAX-11/750
VAX-11/751
VAX-11/730
VAX-11/782
VAX-11/784
VAX-11/785
VAX-11/787
VAX-11/788
VAX-11/725
VAX 8000
VAX 8600
VAX 8650
VAX 8x00
VAX 8500
VAX 8530
VAX 8550
VAX 8700
VAX 8800
VAX 8810/8820/8830/8840
VAX 8974
VAX 8978
MicroVAX
MicroVAX I
MicroVAX II
Industrial VAX 630
MicroVAX III
MicroVAX III+
VAX 4
MicroVAX 2000
MicroVAX 3100 Model 10
MicroVAX 3100 Model 10e
MicroVAX 3100 Model 20 (also sold with different firmware as an Infoserver 100)
MicroVAX 3100 Model 20e
MicroVAX 3100 Model 30
MicroVAX 3100 Model 40
MicroVAX 3100 Model 80
MicroVAX 3100 Model 85
MicroVAX 3100 Model 88
MicroVAX 3100 Model 90
MicroVAX 3100 Model 95
MicroVAX 3100 Model 96
MicroVAX 3100 Model 98
MicroVAX 3300
MicroVAX 3400
MicroVAX 3500
MicroVAX 3600
MicroVAX 3800
MicroVAX 3900
VAXserver
VAXserver 3000
VAXserver 3100
VAXserver 3300
VAXserver 3400
VAXserver 3500
VAXserver 3600
VAXserver 3602
VAXserver 3800
VAXserver 3900
VAXserver 4000 Model 200
VAXserver 4000 Model 300
VAXserver 6000 Model 210
VAXserver 6000 Model 220
VAXserver 6000 Model 310
VAXserver 6000 Model 320
VAXserver 6000 Model 410
VAXserver 6000 Model 420
VAXserver 6000 Model 510
VAXserver 6000 Model 520
VAXserver 9000 Model 110
VAXserver 9000 Model 3x0
VAXserver 9000 Model 310/Model 310VP
VAXserver 9000 Model 320/Model 320VP
VAXserver 9000 Model 330/Model 330VP
VAXserver 9000 Model 340/Model 340VP
VAXstation
VAXstation I
VAXstation II
VAXstation II/GPX
VAXstation 2000
VAXstation 3100 Model 30
VAXstation 3100 Model 38
VAXstation 3100 Model 40
VAXstation 3100 Model 48
VAXstation 3100 Model 76
VAXstation 3200
VAXstation 3500
VAXstation 3520
VAXstation 3540
VAXstation 4000 Model 30 (VAXstation 4000 VLC)
VAXstation 4000 Model 60
VAXstation 4000 Model 90
VAXstation 4000 Model 90A
VAXstation 4000 Model 96
VAXstation 8000
VT1300
VXT 2000
VAX 6000
VAX 6000 Model 2x0 (also known as the VAX 62x0)
VAX 6000 Model 3x0 (also known as the VAX 63x0)
VAX 6333
VAX 6000 Model 4x0
VAX 6000 Model 5x0
VAX 6000 Model 6x0
VAX 4000
VAX 4000 Model 50
VAX 4000 Model 100
VAX 4000 Model 100A
VAX 4000 Model 105A
VAX 4000 Model 106A
VAX 4000 Model 108
VAX 4000 Model 200
VAX 4000 Model 300
VAX 4000 Model 400
VAX 4000 Model 500
VAX 4000 Model 500A
VAX 4000 Model 505A
VAX 4000 Model 600
VAX 4000 Model 600A
VAX 4000 Model 700A
VAX 4000 Model 705A
VAX 9000
VAX 9000 Model 110
VAX 9000 Model 210
VAX 9000 Model 210VP
VAX 9000 Model 310
VAX 9000 Model 410
VAX 9000 Model 420
VAX 9000 Model 430
VAX 9000 Model 440
VAXft
VAXft Model 310 (also known as the VAXft 3000 Model 310)
VAXft Model 110
VAXft Model 410
VAXft Model 610
VAXft Model 612
VAXft Model 810
VAXft 110 Server
VAXft 310 Server
VAXft 410 Server
VAXft 610 Server
VAX 7000
VAX 7000 Model 600
VAX 7000 Model 700
VAX 7000 Model 800
VAX 10000
VAX 10000 Model 600
References
DEC computers
Lists of computer hardware | List of VAX computers | Technology | 983 |
1,068,102 | https://en.wikipedia.org/wiki/Prior-appropriation%20water%20rights | In the American legal system, prior appropriation water rights is the doctrine that the first person to take a quantity of water from a water source for "beneficial use" (agricultural, industrial or household) has the right to continue to use that quantity of water for that purpose. Subsequent users can take the remaining water for their own use if they do not impinge on the rights of previous users. The doctrine is sometimes summarized, "first in time, first in right".
Prior appropriation rights do not constitute a full ownership right in the water, merely the right to withdraw it, and can be abrogated if not used for an extended period of time.
Origin
Water is very scarce in the West and so must be allocated sparingly, based on the productivity of its use. The prior appropriation doctrine developed in the Western United States from Spanish (and later Mexican) civil law and differs from the riparian water rights that apply in the rest of the United States. The appropriation doctrine originated in Gold-Rush–era California, when miners sought to acquire water for mining operations. In the 1855 case of Irwin v. Phillips, Matthew Irwin diverted a stream for his mining operation. Shortly afterward, Robert Phillips started a mining operation downstream and eventually tried to divert the water back to its original streambed. The case was taken to the California Supreme Court, which ruled for Irwin.
Nature of the right
The legal details of prior appropriation vary from state to state. Under the prior appropriation system, the right is initially allotted to those who are "first in time of use"; these rights of withdrawal can then trade on the open market, like other property. For water sources with many users, a government or quasi-government agency is usually charged with overseeing allocations. Allocations involving water sources that cross state borders or international borders can be quite contentious, and are generally governed by federal court rulings, interstate agreements and international treaties.
A claim of prior appropriation must prove four sub-claims: diversion (that the water had been withdrawn), priority (that the withdrawer had diverted water prior to the other claimant), intent (that the water had been withdrawn by design), and beneficial use (that the water was put to a publicly-acceptable end). If proved, the initial person to use a quantity of water from a water source for a beneficial use has the right to continue to use the same quantity of water for the same purpose. Subsequent users can use the remaining water for their own beneficial purposes provided that they do not impinge on the rights of previous users; this is the priority element of the doctrine. Nor can a senior user change the manner (e.g., the location) in which they appropriate water to the detriment of a junior user; this "preservation of conditions" protection for junior users was established in Farmers Highline Canal & Reservoir Co. v. City of Golden, 272 P.2d 629 (Colo. 1954). A senior water user could, for example, have been using the water only during a particular season; a purchaser of that water right could then use the water only in the same season as when the right was established. In addition, the state may put conditions on the use of the water right to prevent polluting or inefficient uses of water.
Beneficial use is commonly defined as agricultural, industrial or household use. The doctrine has historically excluded ecological purposes, such as maintaining a natural body of water and the wildlife that depends on it, but some jurisdictions now accept such claims. The extent to which private parties may own such rights varies among the states.
Each water right has a yearly quantity and an appropriation date. Each year, the user with the earliest appropriation date (known as the "senior appropriator") may use up to their full allocation (provided the water source can supply it). Then the user with the next earliest appropriation date may use their full allocation and so on. In cases of water shortages, prior-appropriation does not require a senior user to utilize less water than usual. Therefore, during times of drought, users with junior appropriation dates might not receive their full allocation or even any water at all.
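The priority rule just described is, in effect, a greedy allocation: sort the rights by appropriation date and satisfy each in full until the source runs dry. The following Python sketch illustrates that logic; the data structures, names, and figures are invented for the example and do not reflect any particular state's rules.

```python
from dataclasses import dataclass

@dataclass
class WaterRight:
    holder: str
    appropriation_year: int  # earlier year = more senior
    allocation: float        # acre-feet per year

def allocate(rights: list[WaterRight], available: float) -> dict[str, float]:
    """Satisfy each right in full, in order of seniority, until water runs out."""
    allocations: dict[str, float] = {}
    for right in sorted(rights, key=lambda r: r.appropriation_year):
        granted = min(right.allocation, available)
        allocations[right.holder] = granted
        available -= granted
    return allocations

rights = [
    WaterRight("junior farm", 1975, 40.0),
    WaterRight("senior ranch", 1902, 100.0),
    WaterRight("mid-priority city", 1933, 80.0),
]
# In a drought year with only 150 acre-feet available, the senior ranch
# takes its full 100, the city receives a partial 50, and the junior
# farm receives nothing.
print(allocate(rights, available=150.0))
```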
When a water right is sold, it retains its original appropriation date. Only the amount of water historically consumed can be transferred if a water right is sold. For example, if alfalfa is grown using flood irrigation, the amount of the return flow may not be transferred, only the amount that would be necessary to irrigate the amount of alfalfa historically grown.
Prior appropriation rights are subject to certain adverse possession-type rules to reduce speculation. Withdrawal rights can be lost or shrunk over time if unused for a certain number of years, or if a litigant can demonstrate that the water's use is not beneficial. Abandonment of a water right is rare, but occurred in Colorado in a case involving the South Fork of San Isabel Creek in Saguache County.
Interaction with other allocation methods
In some states, junior upstream water users may take water from downstream users, as long as they return the water in comparable quantity and quality.
California and Texas grant waterfront property owners water allocations prior to any other users, in a hybrid system with riparian water rights. In Oregon, riparian landowners' rights to water on their own land were recognized up to a certain point in time, after which they were incorporated into the appropriation system.
Adoption
Alaska, Arizona, California, Colorado, Hawaii, Idaho, Kansas, Montana, Nebraska, Nevada, New Mexico, North Dakota, Oklahoma, Oregon, South Dakota, Texas, Utah, Washington, and Wyoming all use the prior appropriation doctrine, with permitting and reporting as their regulatory system. Of these, California, Texas, and Oregon recognize a dual doctrine system that employs both riparian and prior appropriation rights. Eight states (Arizona, Colorado, Idaho, Montana, Nevada, New Mexico, Utah, and Wyoming) engage in prior appropriation without recognizing the riparian doctrine.
However, prior appropriation does not always determine water allocation in these states because various federal regulations also have priority over senior users. For example, the Endangered Species Act of 1973 seeks to protect animals at risk of extinction, so a senior user's rights may be restricted in favor of federal regulation protecting the habitats of endangered animals. These federal rules manifest as a prior allotment by the Secretary of the Interior.
Arizona
Arizona adopted the prior appropriation doctrine such that a person could acquire a water right simply by applying water to beneficial use and posting an appropriation notice at the point of diversion. On June 12, 1919, the state enacted the Public Water Code, under which a person must apply for and obtain a permit for water use.
Colorado
The appropriation doctrine was adopted in Colorado in 1872 when the territorial court ruled in Yunker v. Nichols, 1 Colo. 552 (1872), that a non-riparian user who had previously applied part of the water from a stream to beneficial use had superior rights to the water with respect to a riparian owner who claimed a right to use of all the water at a later time. The question was not squarely presented again to the Colorado Court until 1882 when in the landmark case, Coffin v. Left Hand Ditch Co., 6 Colo. 443 (1882), the court explicitly adopted the appropriation doctrine and rejected the riparian doctrine, citing Colorado irrigation and mining practices and the nature of the climate. The decision in Coffin ruled that prior to adoption of the appropriation doctrine in the Colorado Constitution of 1876 that the riparian doctrine had never been the law in Colorado. Within 20 years the appropriation doctrine, the so-called Colorado Doctrine, had been adopted, in whole or part, by most of the states in the Western United States that had an arid climate.
New Mexico
New Mexico enacted its Surface-Water Code, based on prior appropriation, in 1907. Later, in 1931, it enacted the Underground Water Law, which adapted the state's surface-water law to groundwater.
Montana
Montana adopted the prior-appropriation doctrine in 1973, under the 1973 Water Use Act. It later passed the Montana Ground Water Assessment Act in 1991.
Texas
In 1967, Texas passed the Water Rights Adjudication Act, bringing the allocation of surface waters under a unified permit system.
Criticism
Although water markets are increasingly gaining ground, many critics argue that the prior appropriation system fails to adjust adequately to society's evolving values and needs. Environmentalists and recreational river-users demand that more water be left in rivers and streams, but courts have been slow to accept these requests as beneficial uses. Conversely, the tool of beneficial use is too tied to custom to encourage users to conserve. An appropriator who uses water inefficiently retains the right to the full allotment, but an appropriator who uses only a portion risks losing the right to the rest, and water right markets remain too illiquid to purchase any excess. As a result, the vast majority of water in the West is still allocated to agricultural uses despite cries for additional water from growing cities.
High demand can cause an over-appropriation of the waters, in which there are more water rights for a particular stream than water actually available. This leads to an apparent inefficiency: if a water source is over-appropriated, the latest users will almost never see water from their claims. But without those claims, excess water from an unusually wet year will go to waste.
In other goods
Water is not the only public good that has been subject to prior appropriation. The same first in time, first in right theory has been used in the United States to encourage and give a legal framework for other commercial activities.
The early prospectors and miners in the California Gold Rush of 1849, and later gold and silver rushes in the western United States, applied appropriation theory to mineral deposits. The first person to discover and begin mining a deposit was acknowledged to have a legal right to mine. Because appropriation theory in mineral lands and water rights developed in the same time and place, it is likely that they influenced one another. As with water rights, mining rights could be forfeited by nonuse. The miners' codes were later legalized by the federal government in 1866, and then in the Mining Law of 1872.
The Homestead Act of 1862 granted legal title to the first farmer to put public land into agricultural production. This first in time right to agricultural land may have been influenced by appropriation theory applied to mineral lands.
In recent years, there has been some discussion of limiting air pollution by granting rights to existing pollution sources. Then, it has been argued, a free cap-and-trade market could develop in pollution rights. This would be prior appropriation theory applied to air pollution. Recent concern over carbon dioxide and global warming has led to an economic market in CO2 emissions, in which some companies wish to balance emissions increases by offsetting decreases in existing emissions sources. This essentially acknowledges a prior appropriation right of existing CO2 emitters.
See also
Air rights
Countryside and Rights of Way Act 2000 (in the UK)
Crown land (see "Logging and mineral rights" under Canada)
Easement ("the right of use over the real property of another")
Land rights
Right of public access to the wilderness
Riparian water rights
United States groundwater law
Civil law (legal system)
References
External links
Western States Water Laws BLM site.
Law and economics
Environmental economics
Water law in the United States | Prior-appropriation water rights | Environmental_science | 2,336 |
6,097,297 | https://en.wikipedia.org/wiki/Linux | Linux is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged as a Linux distribution (distro), which includes the kernel and supporting system software and libraries—most of which are provided by third parties—to create a complete operating system, designed as a clone of Unix and released under the copyleft GPL license.
Thousands of Linux distributions exist, many based directly or indirectly on other distributions; popular Linux distributions include Debian, Fedora Linux, Linux Mint, Arch Linux, and Ubuntu, while commercial distributions include Red Hat Enterprise Linux, SUSE Linux Enterprise, and ChromeOS. Linux distributions are frequently used in server platforms. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses and recommends the name "GNU/Linux" to emphasize the use and importance of GNU software in many distributions, causing some controversy. Other than the Linux kernel, key components that make up a distribution may include a display server (windowing system), a package manager, a bootloader and a Unix shell.
Linux is one of the most prominent examples of free and open-source software collaboration. While originally developed for x86-based personal computers, it has since been ported to more platforms than any other operating system, and is used on a wide variety of devices including PCs, workstations, mainframes and embedded systems. Linux is the predominant operating system for servers and is also used on all of the world's 500 fastest supercomputers. When combined with Android, which is Linux-based and designed for smartphones, they have the largest installed base of all general-purpose operating systems.
Overview
The Linux kernel was created by Linus Torvalds in response to the lack of a working kernel for GNU, a Unix-compatible operating system made entirely of free software that had been undergoing development since 1983 by Richard Stallman. A working Unix-like system called Minix was later released, but its license was not entirely free at the time, and it was made for educational purposes. The first entirely free Unix for personal computers, 386BSD, did not appear until 1992, by which time Torvalds had already built and publicly released the first version of the Linux kernel on the Internet. Like GNU and 386BSD, Linux did not contain any Unix code, being a fresh reimplementation, and therefore avoided the legal issues affecting Unix derivatives at the time. Linux distributions became popular in the 1990s and effectively made Unix technologies accessible to home users on personal computers whereas previously they had been confined to sophisticated workstations.
Desktop Linux distributions include a windowing system such as X11 or Wayland and a desktop environment such as GNOME, KDE Plasma or Xfce. Distributions intended for servers may not have a graphical user interface at all or include a solution stack such as LAMP.
The source code of Linux may be used, modified, and distributed commercially or non-commercially by anyone under the terms of its respective licenses, such as the GNU General Public License (GPL). Such licensing means that anyone may create a novel distribution, and doing so is easier than it would be for an operating system such as macOS or Microsoft Windows. The Linux kernel, for example, is licensed under the GPLv2, with an exception for system calls that allows code that calls the kernel via system calls not to be licensed under the GPL.
Because of the dominance of Linux-based Android on smartphones, Linux, including Android, has the largest installed base of all general-purpose operating systems. Linux is used by around 4 percent of desktop computers. The Chromebook, which runs the Linux kernel-based ChromeOS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux is the leading operating system on servers (over 96.4% of the top one million web servers' operating systems are Linux), leads other big iron systems such as mainframe computers, and is used on all of the world's 500 fastest supercomputers (having gradually displaced all competitors).
Linux also runs on embedded systems, i.e., devices whose operating system is typically built into the firmware and is highly tailored to the system. This includes routers, automation controls, smart home devices, video game consoles, televisions (Samsung and LG smart TVs), automobiles (Tesla, Audi, Mercedes-Benz, Hyundai, and Toyota), and spacecraft (Falcon 9 rocket, Dragon crew capsule, and the Ingenuity Mars helicopter).
History
Precursors
The Unix operating system was conceived of and implemented in 1969, at AT&T's Bell Labs in the United States, by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. In 1973, in a key pioneering approach, it was rewritten in the C programming language by Dennis Ritchie (except for some hardware and I/O routines). The availability of a high-level language implementation of Unix made its porting to different computer platforms easier.
As a 1956 antitrust case forbade AT&T from entering the computer business, AT&T provided the operating system's source code to anyone who asked. As a result, Unix use grew quickly and it became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of its regional operating companies, and was released from its obligation not to enter the computer business; freed of that obligation, Bell Labs began selling Unix as a proprietary product, where users were not legally allowed to modify it.
Onyx Systems began selling early microcomputer-based Unix workstations in 1980. Later, Sun Microsystems, founded as a spin-off of a student project at Stanford University, also began selling Unix-based desktop workstations in 1982. While Sun workstations did not use commodity PC hardware, for which Linux was later originally developed, they represented the first successful commercial attempt at distributing a primarily single-user microcomputer that ran a Unix operating system.
With Unix increasingly "locked in" as a proprietary product, the GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed entirely of free software. Work began in 1984. Later, in 1985, Stallman started the Free Software Foundation and wrote the GNU General Public License (GNU GPL) in 1989. By the early 1990s, many of the programs required in an operating system (such as libraries, compilers, text editors, a command-line shell, and a windowing system) were completed, although low-level elements such as device drivers, daemons, and the kernel, called GNU Hurd, were stalled and incomplete.
Minix was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of Minix was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000.
Creation
While attending the University of Helsinki in the fall of 1990, Torvalds enrolled in a Unix course. The course used a MicroVAX minicomputer running Ultrix, and one of the required texts was Operating Systems: Design and Implementation by Andrew S. Tanenbaum. This textbook included a copy of Tanenbaum's Minix operating system. It was with this course that Torvalds first became exposed to Unix. In 1991, he became curious about operating systems. Frustrated by the licensing of Minix, which at the time limited it to educational use only, he began to work on his operating system kernel, which eventually became the Linux kernel.
On July 3, 1991, to implement Unix system calls, Linus Torvalds attempted unsuccessfully to obtain a digital copy of the POSIX standards documentation with a request to the comp.os.minix newsgroup. After not finding the POSIX documentation, Torvalds initially resorted to determining system calls from SunOS documentation owned by the university for use in operating its Sun Microsystems server. He also learned some system calls from Tanenbaum's Minix text.
Torvalds began the development of the Linux kernel on Minix and applications written for Minix were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems. GNU applications also replaced all Minix components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system; code licensed under the GNU GPL can be reused in other computer programs as long as they also are released under the same or a compatible license. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, creating a fully functional and free operating system.
Although not released until 1992, due to legal complications, the development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Linus Torvalds has stated that if the GNU kernel or 386BSD had been available in 1991, he probably would not have created Linux.
Naming
Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix). During the start of his work on the system, some of the project's makefiles included the name "Freax" for about half a year. Torvalds considered the name "Linux" but dismissed it as too egotistical.
To facilitate development, the files were uploaded to the FTP server of FUNET in September 1991. Ari Lemmke, Torvalds' coworker at the Helsinki University of Technology (HUT) who was one of the volunteer administrators for the FTP server at the time, did not think that "Freax" was a good name, so he named the project "Linux" on the server without consulting Torvalds. Later, however, Torvalds consented to "Linux".
According to a newsgroup post by Torvalds, the word "Linux" should be pronounced with a short 'i' as in 'print' and 'u' as in 'put'. To further demonstrate how the word "Linux" should be pronounced, he included an audio guide with the kernel source code. However, in this recording, he pronounces Linux with a short but close front unrounded vowel, instead of a near-close near-front unrounded vowel as in his newsgroup post.
Commercial and popular uptake
The adoption of Linux in production environments, rather than being used only by hobbyists, started to take off first in the mid-1990s in the supercomputing community, where organizations such as NASA started to replace their increasingly expensive machines with clusters of inexpensive commodity computers running Linux. Commercial use began when Dell and IBM, followed by Hewlett-Packard, started offering Linux support to escape Microsoft's monopoly in the desktop operating system market.
Today, Linux systems are used throughout computing, from embedded systems to virtually all supercomputers, and have secured a place in server installations such as the popular LAMP application stack. The use of Linux distributions in home and enterprise desktops has been growing.
Linux distributions have also become popular in the netbook market, with many devices shipping with customized Linux distributions installed, and Google releasing their own ChromeOS designed for netbooks.
Linux's greatest success in the consumer market is perhaps the mobile device market, with Android being the dominant operating system on smartphones and very popular on tablets and, more recently, on wearables, and vehicles. Linux gaming is also on the rise with Valve showing its support for Linux and rolling out SteamOS, its own gaming-oriented Linux distribution, which was later implemented in their Steam Deck platform. Linux distributions have also gained popularity with various local and national governments, such as the federal government of Brazil.
Development
Linus Torvalds is the lead maintainer for the Linux kernel and guides its development, while Greg Kroah-Hartman is the lead maintainer for the stable branch. Zoë Kooyman is the executive director of the Free Software Foundation, which in turn supports the GNU components. Finally, individuals and corporations develop third-party non-GNU components. These third-party components comprise a vast body of work and may include both kernel modules and user applications and libraries.
Linux vendors and communities combine and distribute the kernel, GNU components, and non-GNU components, with additional package management software in the form of Linux distributions.
Design
Many developers of open-source software agree that the Linux kernel was not designed but rather evolved through natural selection. Torvalds considers that although the design of Unix served as a scaffolding, "Linux grew with a lot of mutations – and because the mutations were less than random, they were faster and more directed than alpha-particles in DNA." Eric S. Raymond considers Linux's revolutionary aspects to be social, not technical: before Linux, complex software was designed carefully by small groups, but "Linux evolved in a completely different way. From nearly the beginning, it was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet. Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers." Bryan Cantrill, an engineer of a competing OS, agrees that "Linux wasn't designed, it evolved", but considers this to be a limitation, proposing that some features, especially those related to security, cannot be evolved into, "this is not a biological system at the end of the day, it's a software system."
A Linux-based system is a modular Unix-like operating system, deriving much of its basic design from principles established in Unix during the 1970s and 1980s. Such a system uses a monolithic kernel, the Linux kernel, which handles process control, networking, access to the peripherals, and file systems. Device drivers are either integrated directly with the kernel or added as modules that are loaded while the system is running.
The GNU userland is a key part of most systems based on the Linux kernel, with Android being the notable exception. The GNU C library, an implementation of the C standard library, works as a wrapper for the system calls of the Linux kernel necessary to the kernel-userspace interface, the toolchain is a broad collection of programming tools vital to Linux development (including the compilers used to build the Linux kernel itself), and the coreutils implement many basic Unix tools. The GNU Project also develops Bash, a popular CLI shell. The graphical user interface (or GUI) used by most Linux systems is built on top of an implementation of the X Window System. More recently, some of the Linux community has sought to move to using Wayland as the display server protocol, replacing X11.
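As a rough, hedged illustration of glibc acting as a thin wrapper around kernel system calls, the sketch below uses Python's ctypes module to invoke the C library's write() directly. It assumes the common glibc soname "libc.so.6"; systems built on a different C library, such as musl, ship it under other names.

```python
import ctypes

# Load the shared C library; "libc.so.6" is the usual glibc soname on
# Linux (an assumption -- other C libraries use different names).
libc = ctypes.CDLL("libc.so.6")

message = b"hello from glibc's write()\n"
# write() takes a file descriptor, a buffer, and a byte count; descriptor 1
# is standard output. glibc forwards the call to the kernel's write syscall.
bytes_written = libc.write(1, message, len(message))
print("kernel reported", bytes_written, "bytes written")
```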
Many other open-source software projects contribute to Linux systems.
Installed components of a Linux system include the following:
A bootloader, for example GNU GRUB, LILO, SYSLINUX or systemd-boot. This is a program that loads the Linux kernel into the computer's main memory, by being executed by the computer when it is turned on and after the firmware initialization is performed.
An init program, such as the traditional sysvinit and the newer systemd, OpenRC and Upstart. This is the first process launched by the Linux kernel, and is at the root of the process tree. It starts processes such as system services and login prompts (whether graphical or in terminal mode).
Software libraries, which contain code that can be used by running processes. On Linux systems using ELF-format executable files, the dynamic linker that manages the use of dynamic libraries is known as ld-linux.so. If the system is set up for the user to compile software themselves, header files will also be included to describe the programming interface of installed libraries. Besides the most commonly used software library on Linux systems, the GNU C Library (glibc), there are numerous other libraries, such as SDL and Mesa.
The C standard library is the library necessary to run programs written in C on a computer system, with the GNU C Library being the standard. It provides an implementation of the POSIX API, as well as extensions to that API. For embedded systems, alternatives such as musl, EGLIBC (a glibc fork once used by Debian) and uClibc (which was designed for uClinux) have been developed, although the last two are no longer maintained. Android uses its own C library, Bionic. However, musl can additionally be used as a replacement for glibc on desktop and laptop systems, as seen on certain Linux distributions like Void Linux.
Basic Unix commands, with GNU coreutils being the standard implementation. Alternatives exist for embedded systems, such as the copyleft BusyBox, and the BSD-licensed Toybox.
Widget toolkits are the libraries used to build graphical user interfaces (GUIs) for software applications. Numerous widget toolkits are available, including GTK and Clutter developed by the GNOME Project, Qt developed by the Qt Project and led by The Qt Company, and Enlightenment Foundation Libraries (EFL) developed primarily by the Enlightenment team.
A package management system, such as dpkg and RPM. Alternatively packages can be compiled from binary or source tarballs.
User interface programs such as command shells or windowing environments.
User interface
The user interface, also known as the shell, is either a command-line interface (CLI), a graphical user interface (GUI), or controls attached to the associated hardware, which is common for embedded systems. For desktop systems, the default user interface is usually graphical, although the CLI is commonly available through terminal emulator windows or on a separate virtual console.
CLI shells are text-based user interfaces, which use text for both input and output. The dominant shell used in Linux is the Bourne-Again Shell (bash), originally developed for the GNU Project; other shells such as Zsh are also used. Most low-level Linux components, including various parts of the userland, use the CLI exclusively. The CLI is particularly suited for automation of repetitive or delayed tasks and provides very simple inter-process communication.
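As a minimal sketch of that "very simple inter-process communication", the following Python example wires two coreutils programs together with a pipe, exactly as a shell pipeline would; the choice of printf and tr is an assumption made for illustration.

```python
import subprocess

# Equivalent of the shell pipeline:  printf 'linux pipes at work\n' | tr 'a-z' 'A-Z'
# One process's standard output becomes the next one's standard input --
# the simple inter-process communication the CLI provides.
producer = subprocess.Popen(
    ["printf", "linux pipes at work\\n"], stdout=subprocess.PIPE
)
consumer = subprocess.run(
    ["tr", "a-z", "A-Z"], stdin=producer.stdout, capture_output=True, text=True
)
producer.stdout.close()  # let the producer see SIGPIPE if tr exits early
producer.wait()
print(consumer.stdout, end="")  # LINUX PIPES AT WORK
```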
On desktop systems, the most popular user interfaces are the GUI shells, packaged together with extensive desktop environments, such as KDE Plasma, GNOME, MATE, Cinnamon, LXDE, Pantheon, and Xfce, though a variety of additional user interfaces exist. Most popular user interfaces are based on the X Window System, often simply called "X" or "X11". It provides network transparency and permits a graphical application running on one system to be displayed on another where a user may interact with the application; however, certain extensions of the X Window System are not capable of working over the network. Several X display servers exist, with the reference implementation, X.Org Server, being the most popular.
Several types of window managers exist for X11, including tiling, dynamic, stacking, and compositing. Window managers provide means to control the placement and appearance of individual application windows, and interact with the X Window System. Simpler X window managers such as dwm, ratpoison, or i3wm provide a minimalist functionality, while more elaborate window managers such as FVWM, Enlightenment, or Window Maker provide more features such as a built-in taskbar and themes, but are still lightweight when compared to desktop environments. Desktop environments include window managers as part of their standard installations, such as Mutter (GNOME), KWin (KDE), or Xfwm (xfce), although users may choose to use a different window manager if preferred.
Wayland is a display server protocol intended as a replacement for the X11 protocol; it has received relatively wide adoption. Unlike X11, Wayland does not need an external window manager and compositing manager. Therefore, a Wayland compositor takes the role of the display server, window manager, and compositing manager. Weston is the reference implementation of Wayland, while GNOME's Mutter and KDE's KWin are being ported to Wayland as standalone display servers. Enlightenment has already been successfully ported since version 19. Additionally, many window managers have been made for Wayland, such as Sway or Hyprland, as well as other graphical utilities such as Waybar or Rofi.
Video input infrastructure
Linux currently has two modern kernel-userspace APIs for handling video input devices: V4L2 API for video streams and radio, and DVB API for digital TV reception.
Due to the complexity and diversity of different devices, and due to the large number of formats and standards handled by those APIs, this infrastructure needs to evolve to better fit other devices. A good userspace device library is also key to enabling userspace applications to work with all the formats supported by those devices.
Development
The primary difference between Linux and many other popular contemporary operating systems is that the Linux kernel and other components are free and open-source software. Linux is not the only such operating system, although it is by far the most widely used. Some free and open-source software licenses are based on the principle of copyleft, a kind of reciprocity: any work derived from a copyleft piece of software must also be copyleft itself. The most common free software license, the GNU General Public License (GPL), is a form of copyleft and is used for the Linux kernel and many of the components from the GNU Project.
Linux-based distributions are intended by developers for interoperability with other operating systems and established computing standards. Linux systems adhere to POSIX, Single UNIX Specification (SUS), Linux Standard Base (LSB), ISO, and ANSI standards where possible, although to date only one Linux distribution has been POSIX.1 certified, Linux-FT.
Free software projects, although developed through collaboration, are often produced independently of each other. The fact that the software licenses explicitly permit redistribution, however, provides a basis for larger-scale projects that collect the software produced by stand-alone projects and make it available all at once in the form of a Linux distribution.
Many Linux distributions manage a remote collection of system software and application software packages available for download and installation through a network connection. This allows users to adapt the operating system to their specific needs. Distributions are maintained by individuals, loose-knit teams, volunteer organizations, and commercial entities. A distribution is responsible for the default configuration of the installed Linux kernel, general system security, and more generally integration of the different software packages into a coherent whole. Distributions typically use a package manager such as apt, yum, zypper, pacman or portage to install, remove, and update all of a system's software from one central location.
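As a hedged example of scripting against that central catalogue, the sketch below reads the installed-package database on a dpkg-based distribution such as Debian or Ubuntu. The dpkg-query invocation follows its documented --show/format usage, but the command is absent on RPM- or pacman-based systems, where rpm -qa or pacman -Q play the analogous role.

```python
import subprocess

# List installed packages on a Debian-style system via dpkg-query.
# The format string prints "name version" per line; this fails on
# distributions that use RPM or pacman instead of dpkg.
result = subprocess.run(
    ["dpkg-query", "-W", "-f=${Package} ${Version}\n"],
    capture_output=True, text=True, check=True,
)
packages = [line.split(" ", 1) for line in result.stdout.splitlines() if line]
print(f"{len(packages)} packages installed; first few:")
for name, version in packages[:5]:
    print(f"  {name:30s} {version}")
```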
Community
A distribution is largely driven by its developer and user communities. Some vendors develop and fund their distributions on a volunteer basis, Debian being a well-known example. Others maintain a community version of their commercial distributions, as Red Hat does with Fedora, and SUSE does with openSUSE.
In many cities and regions, local associations known as Linux User Groups (LUGs) seek to promote their preferred distribution and by extension free software. They hold meetings and provide free demonstrations, training, technical support, and operating system installation to new users. Many Internet communities also provide support to Linux users and developers. Most distributions and free software / open-source projects have IRC chatrooms or newsgroups. Online forums are another means of support, with notable examples being Unix & Linux Stack Exchange, LinuxQuestions.org and the various distribution-specific support and community forums, such as ones for Ubuntu, Fedora, Arch Linux, Gentoo, etc. Linux distributions host mailing lists; commonly there will be a specific topic such as usage or development for a given list.
There are several technology websites with a Linux focus. Print magazines on Linux often bundle cover disks that carry software or even complete Linux distributions.
Although Linux distributions are generally available without charge, several large corporations sell, support, and contribute to the development of the components of the system and free software. An analysis of the Linux kernel in 2017 showed that well over 85% of the code was developed by programmers who are being paid for their work, leaving about 8.2% to unpaid developers and 4.1% unclassified. Some of the major corporations that provide contributions include Intel, Samsung, Google, AMD, Oracle, and Facebook. Several corporations, notably Red Hat, Canonical, and SUSE have built a significant business around Linux distributions.
The free software licenses, on which the various software packages of a distribution built on the Linux kernel are based, explicitly accommodate and encourage commercialization; the relationship between a Linux distribution as a whole and individual vendors may be seen as symbiotic. One common business model of commercial suppliers is charging for support, especially for business users. A number of companies also offer a specialized business version of their distribution, which adds proprietary support packages and tools to administer higher numbers of installations or to simplify administrative tasks.
Another business model is to give away the software to sell hardware. This used to be the norm in the computer industry, with operating systems such as CP/M, Apple DOS, and versions of the classic Mac OS before 7.6 freely copyable (but not modifiable). As computer hardware standardized throughout the 1980s, it became more difficult for hardware manufacturers to profit from this tactic, as the OS would run on any manufacturer's computer that shared the same architecture.
Programming on Linux
Most programming languages support Linux either directly or through third-party community based ports. The original development tools used for building both Linux applications and operating system programs are found within the GNU toolchain, which includes the GNU Compiler Collection (GCC) and the GNU Build System. Amongst others, GCC provides compilers for Ada, C, C++, Go and Fortran. Many programming languages have a cross-platform reference implementation that supports Linux, for example PHP, Perl, Ruby, Python, Java, Go, Rust and Haskell. First released in 2003, the LLVM project provides an alternative cross-platform open-source compiler for many languages. Proprietary compilers for Linux include the Intel C++ Compiler, Sun Studio, and IBM XL C/C++ Compiler. BASIC is available in procedural form from QB64, PureBasic, Yabasic, GLBasic, Basic4GL, XBasic, wxBasic, SdlBasic, and Basic-256, as well as object oriented through Gambas, FreeBASIC, B4X, Basic for Qt, Phoenix Object Basic, NS Basic, ProvideX, Chipmunk Basic, RapidQ and Xojo. Pascal is implemented through GNU Pascal, Free Pascal, and Virtual Pascal, as well as graphically via Lazarus, PascalABC.NET, or Delphi using FireMonkey (previously through Borland Kylix).
A common feature of Unix-like systems, Linux includes traditional specific-purpose programming languages targeted at scripting, text processing and system configuration and management in general. Linux distributions support shell scripts, awk, sed and make. Many programs also have an embedded programming language to support configuring or programming themselves. For example, regular expressions are supported in programs like grep and locate, the traditional Unix message transfer agent Sendmail contains its own Turing complete scripting system, and the advanced text editor GNU Emacs is built around a general purpose Lisp interpreter.
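To make the regular-expression point concrete, here is a deliberately tiny grep-like filter in Python; it is a sketch of the idea only, not a reimplementation of GNU grep's option handling or its default BRE syntax (the comparison in the comment uses grep -E, the extended-regex mode, and "logfile" is a hypothetical input file).

```python
import re
import sys

def mini_grep(pattern: str, lines) -> None:
    """Print every line matching the regular expression, as grep would."""
    regex = re.compile(pattern)
    for line in lines:
        if regex.search(line):
            sys.stdout.write(line)

# Roughly equivalent to:  grep -E 'error [0-9]+' logfile
with open("logfile") as f:
    mini_grep(r"error [0-9]+", f)
```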
Most distributions also include support for PHP, Perl, Ruby, Python and other dynamic languages. While not as common, Linux also supports C# and other CLI languages (via Mono), Vala, and Scheme. Guile Scheme acts as an extension language targeting the GNU system utilities, seeking to make the conventionally small, static, compiled C programs of Unix design rapidly and dynamically extensible via an elegant, functional high-level scripting system; many GNU programs can be compiled with optional Guile bindings to this end. A number of Java virtual machines and development kits run on Linux, including the original Sun Microsystems JVM (HotSpot), and IBM's J2SE RE, as well as many open-source projects like Kaffe and Jikes RVM; Kotlin, Scala, Groovy and other JVM languages are also available.
GNOME and KDE are popular desktop environments and provide a framework for developing applications. These projects are based on the GTK and Qt widget toolkits, respectively, which can also be used independently of the larger framework. Both support a wide variety of languages. There are a number of Integrated development environments available including Anjuta, Code::Blocks, CodeLite, Eclipse, Geany, ActiveState Komodo, KDevelop, Lazarus, MonoDevelop, NetBeans, and Qt Creator, while the long-established editors Vim, nano and Emacs remain popular.
Hardware support
The Linux kernel is a widely ported operating system kernel, available for devices ranging from mobile phones to supercomputers; it runs on a highly diverse range of computer architectures, including ARM-based Android smartphones and the IBM Z mainframes. Specialized distributions and kernel forks exist for less mainstream architectures; for example, the ELKS kernel fork can run on Intel 8086 or Intel 80286 16-bit microprocessors, while the μClinux kernel fork may run on systems without a memory management unit. The kernel also runs on architectures that were only ever intended to use a proprietary manufacturer-created operating system, such as Macintosh computers (with PowerPC, Intel, and Apple silicon processors), PDAs, video game consoles, portable music players, and mobile phones.
Linux has a reputation for supporting old hardware very well by maintaining standardized drivers for a long time. There are several industry associations and hardware conferences devoted to maintaining and improving support for diverse hardware under Linux, such as FreedomHEC. Over time, support for different hardware has improved in Linux, resulting in any off-the-shelf purchase having a "good chance" of being compatible.
In 2014, a new initiative was launched to automatically collect a database of all tested hardware configurations.
Uses
Market share and uptake
Many quantitative studies of free/open-source software focus on topics including market share and reliability, with numerous studies specifically examining Linux. The Linux market is growing: the Linux operating system market is expected to grow at 19.2% annually, reaching $15.64 billion by 2027, compared to $3.89 billion in 2019. Analysts project a compound annual growth rate (CAGR) of 13.7% between 2024 and 2032, culminating in a market size of US$34.90 billion by the latter year. Analysts and proponents attribute the relative success of Linux to its security, reliability, low cost, and freedom from vendor lock-in.
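The quoted projections follow the standard compound-growth formula V_n = V_0(1 + r)^n; as a quick sanity check (treating 2019 to 2027 as eight compounding years, which is an assumption about how the analysts counted), the figures land close to the reported $15.64 billion:

```python
# Compound annual growth: value_n = value_0 * (1 + rate) ** years.
v0, rate, years = 3.89, 0.192, 8     # $3.89B in 2019, 19.2% per year, to 2027
projection = v0 * (1 + rate) ** years
print(f"${projection:.2f}B")         # ~$15.86B, near the quoted $15.64B
```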
Desktops and laptops
According to web server statistics (that is, based on the numbers recorded from visits to websites by client devices), in October 2024, the estimated market share of Linux on desktop computers was around 4.3%. In comparison, Microsoft Windows had a market share of around 73.4%, while macOS covered around 15.5%.
Web servers
W3Cook publishes stats that use the top 1,000,000 Alexa domains, which estimate that 96.55% of web servers run Linux, 1.73% run Windows, and 1.72% run FreeBSD.
W3Techs publishes statistics that use the top 10,000,000 Alexa domains and the top 1,000,000 Tranco domains, updated monthly; these estimate that Linux is used by 39% of web servers, versus 21.9% for Microsoft Windows, with 40.1% running other types of Unix.
IDC's Q1 2007 report indicated that Linux held 12.7% of the overall server market at that time; this estimate was based on the number of Linux servers sold by various companies, and did not include server hardware purchased separately that had Linux installed on it later.
As of 2024, estimates suggest Linux accounts for at least 80% of the public cloud workload, partly thanks to its widespread use in platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. ZDNet reports that 96.3% of the top one million web servers are running Linux.
W3Techs state that Linux powers at least 39.2% of websites whose operating system is known, with other estimates saying 55%.
Mobile devices
Android, which is based on the Linux kernel, has become the dominant operating system for smartphones. According to StatCounter, in April 2023, 68.61% of mobile devices accessing websites ran Android. Android is also a popular operating system for tablets, being responsible for more than 60% of tablet sales. According to web server statistics, Android has a market share of about 71%, with iOS holding 28% and the remaining 1% attributed to various niche platforms.
Film production
For years, Linux has been the platform of choice in the film industry. The first major film produced on Linux servers was 1997's Titanic. Since then major studios including DreamWorks Animation, Pixar, Weta Digital, and Industrial Light & Magic have migrated to Linux. According to the Linux Movies Group, more than 95% of the servers and desktops at large animation and visual effects companies use Linux.
Use in government
Linux distributions have also gained popularity with various local and national governments. News of the Russian military creating its own Linux distribution has also surfaced, and has come to fruition as the G.H.ost Project. The Indian state of Kerala has gone to the extent of mandating that all state high schools run Linux on their computers. China uses Linux exclusively as the operating system for its Loongson processor family in order to achieve technology independence. In Spain, some regions have developed their own Linux distributions, which are widely used in education and official institutions, such as gnuLinEx in Extremadura and Guadalinex in Andalusia. France and Germany have also taken steps toward the adoption of Linux. North Korea's Red Star OS is based on a version of Fedora Linux.
Copyright, trademark, and naming
The Linux kernel is licensed under the GNU General Public License (GPL), version 2. The GPL requires that anyone who distributes software based on source code under this license must make the originating source code (and any modifications) available to the recipient under the same terms. Other key components of a typical Linux distribution are also mainly licensed under the GPL, but they may use other licenses; many libraries use the GNU Lesser General Public License (LGPL), a more permissive variant of the GPL, and the X.Org implementation of the X Window System uses the MIT License.
Torvalds states that the Linux kernel will not move from version 2 of the GPL to version 3. He specifically dislikes some provisions in the new license which prohibit the use of the software in digital rights management. It would also be impractical to obtain permission from all the copyright holders, who number in the thousands.
A 2001 study of Red Hat Linux 7.1 found that this distribution contained 30 million source lines of code. Using the Constructive Cost Model, the study estimated that this distribution required about eight thousand person-years of development time. According to the study, if all this software had been developed by conventional proprietary means, it would have cost about US$1.08 billion (in year-2000 dollars) to develop in the United States. Most of the source code (71%) was written in the C programming language, but many other languages were used, including C++, Lisp, assembly language, Perl, Python, Fortran, and various shell scripting languages. Slightly over half of all lines of code were licensed under the GPL. The Linux kernel itself was 2.4 million lines of code, or 8% of the total.
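The scale of such estimates can be illustrated with the basic COCOMO effort formula, person-months = 2.4 · (KLOC)^1.05 in the organic mode. The following Python sketch is an order-of-magnitude illustration only; the study's methodology differed in detail, so the result does not exactly reproduce its figure.

# Basic COCOMO, organic mode: effort in person-months = 2.4 * KLOC**1.05.
# Order-of-magnitude sketch only; the study's methodology differed in detail.
def cocomo_person_years(sloc, a=2.4, b=1.05):
    kloc = sloc / 1000.0
    person_months = a * kloc ** b
    return person_months / 12.0

# Red Hat 7.1: about 30 million source lines of code.
print(round(cocomo_person_years(30_000_000)))  # ~10,000 person-years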
In a later study, the same analysis was performed for Debian version 4.0 (etch, which was released in 2007). This distribution contained close to 283 million source lines of code, and the study estimated that it would have required about seventy-three thousand man-years and billions of US dollars to develop by conventional means.
In the United States, the name Linux is a trademark registered to Linus Torvalds. Initially, nobody registered it. However, on August 15, 1994, William R. Della Croce Jr. filed for the trademark Linux, and then demanded royalties from Linux distributors. In 1996, Torvalds and some affected organizations sued him to have the trademark assigned to Torvalds, and, in 1997, the case was settled. The licensing of the trademark has since been handled by the Linux Mark Institute (LMI). Torvalds has stated that he trademarked the name only to prevent someone else from using it. LMI originally charged a nominal sublicensing fee for use of the Linux name as part of trademarks, but later changed this in favor of offering a free, perpetual worldwide sublicense.
The Free Software Foundation (FSF) prefers GNU/Linux as the name when referring to the operating system as a whole, because it considers Linux distributions to be variants of the GNU operating system initiated in 1983 by Richard Stallman, president of the FSF. The foundation explicitly takes no issue over the name Android for the Android OS, which is also an operating system based on the Linux kernel, as GNU is not a part of it.
A minority of public figures and software projects other than Stallman and the FSF, notably distributions consisting of only free software, such as Debian (which had been sponsored by the FSF up to 1996), also use GNU/Linux when referring to the operating system as a whole. Most media and common usage, however, refers to this family of operating systems simply as Linux, as do many large Linux distributions (for example, SUSE Linux and Red Hat Enterprise Linux).
About 8% to 13% of the lines of code of the Linux distribution Ubuntu (version "Natty") are made up of GNU components (the range depending on whether GNOME is considered part of GNU); meanwhile, 6% is taken by the Linux kernel, increased to 9% when including its direct dependencies.
See also
Comparison of Linux distributions
Comparison of open-source and closed-source software
Comparison of operating systems
Comparison of X Window System desktop environments
Criticism of Linux
Linux kernel version history
Linux Documentation Project
Linux From Scratch
Linux Software Map
List of Linux distributions
List of games released on Linux
List of operating systems
Loadable kernel module
Usage share of operating systems
Timeline of operating systems
Notes
References
External links
Graphical map of Linux Internals (archived)
Linux kernel website and archives
The History of Linux in GIT Repository Format 1992–2010 (archived)
1991 software
Computing platforms
Cross-platform software
Finnish inventions
Free software programmed in C
Linus Torvalds
Operating systems
Unix variants
Open source projects | Linux | Technology | 8,195 |
26,171,620 | https://en.wikipedia.org/wiki/Thure%20E.%20Cerling | Thure E. Cerling (born 1949) is a Distinguished Professor of Geology and Geophysics and a Distinguished Professor of Biology at the University of Utah. Cerling is a leading expert in the evolution of modern landscapes including modern mammals and their associated grassland ecologies and stable isotope analyses of the atmosphere. Cerling lives in Salt Lake City, Utah.
"A single hair can determine a person's location during the past weeks or even years" – Thure E. Cerling
Cerling's research interests are primarily focused on Earth-surface geochemical processes and on the geological record of ecological change. In particular, working on conservation biology, Cerling has analyzed modern animal diet and physiology using stable isotopes as natural tracers, and has studied dietary changes of different mammalian lineages extending over millions of years.
Emphasizing the continental ecology of lakes, modern soils, and ecosystems, Cerling has written extensively about the evolution of ecosystems, the inception and strengthening of monsoons, and the atmosphere over geological time scales, drawing on evidence about the fractionation of stable isotopes in these systems.
His current research includes a focus on the development of landforms in semi-arid regions, the geology of Old World paleoanthropological sites, and contaminant migration in surface and ground waters, including the use of tritium and helium as hydrological tracers.
Together with James Ehleringer, he established the Stable Isotope Biogeochemistry and Ecology (IsoCamp) summer course at the University of Utah, which "trains students in the fundamental environmental and biological theory underlying isotope fractionation processes across a broad spectrum of ecological and environmental applications".
Early life
Thure E. Cerling received his Bachelor of Science degree in geology and chemistry from Iowa State University, in Ames, Iowa, in 1972, and, in 1973, his Master of Science in geology from Iowa State. In 1977 he was awarded a Ph.D. in geology from the University of California at Berkeley. From 1977 to 1979 he worked as a research scientist at Oak Ridge National Laboratory, and since 1979 he has been a member of the University of Utah's faculty.
Global ecological changes
With the publication of "Expansion of C4 ecosystems as an indicator of global ecological change in the late Miocene" in 1993, Cerling, together with Yang Wang and Jay Quade, made significant contributions to the study of carbon isotopes. Through detailed analysis of palaeovegetation from palaeosols and of palaeodiet measured in fossil tooth enamel, they demonstrated a global increase in the biomass of plants using C4 photosynthesis between 7 and 5 million years ago. The decrease of atmospheric CO2 concentrations over geological history below a threshold that had favored C3-photosynthesizing plants was considered a plausible reason for the global expansion of C4 biomass.
The publication "Global vegetation change through the Miocene/Pliocene boundary" in 1997 confirmed these results, demonstrating even how at lower latitudes the change appeared to occur earlier because of the threshold for C3 photosynthesis is higher at warmer temperatures.
Give me a hair and I'll tell you where you have been
Thure Cerling and James Ehleringer, a biology professor at the University of Utah, founded Isoforensics in 2003, a company that interprets the stable isotope composition of various biological and synthetic materials. This was the first step toward the discovery they made, first published on February 25, 2008, in the Proceedings of the National Academy of Sciences under the title "Hydrogen and oxygen isotope ratios in human hair are related to geography".
Where people have been, and where they have lived for a while, is information that can be obtained by analyzing the stable isotope composition of their scalp hair. Cerling discovered that a strand of hair can provide valuable clues about a person's travels, by studying the variation of hydrogen-2 (δ2H) and oxygen-18 (δ18O) isotopes and comparing it to the variation in the local drinking water. The extent of the information that can be deduced depends on the length of the hair: the longer the hair, the more information can be extracted. The geographic variation in isotope concentrations is linked to precipitation, cloud temperatures, and the amount of water that evaporates from soil and plants. When clouds move off the ocean toward inland areas, the ratios of oxygen-18 to oxygen-16 and hydrogen-2 to hydrogen-1 tend to decrease, because rain water containing the heavier oxygen-18 and hydrogen-2 tends to fall first.
Samples of tap water were collected from more than 600 cities across the United States, as well as hair samples from barbershops in 65 cities in 20 states. The comparison showed that both hair and drinking water samples had the same isotopic variations. To display this information, the scientists produced color-coded maps based on the correlation of the isotopes in hair to those in drinking water. These maps show how ratios of hydrogen and oxygen isotopes in scalp hair vary in different areas of the United States. This proved that the water a person drinks leaves evidence in the hair containing oxygen and hydrogen isotopes that match those in the tap water.
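Isotope abundances like these are conventionally reported in delta notation: the per-mil deviation of a sample's isotope ratio from a reference standard (VSMOW for water). A minimal Python sketch of the conversion; the sample ratio in the example is a made-up placeholder, not a value from the study.

# Delta notation: per-mil deviation of a sample isotope ratio from a standard.
# The 18O/16O ratio of the VSMOW water standard is about 0.0020052.
VSMOW_18O_16O = 0.0020052

def delta_per_mil(ratio_sample, ratio_standard=VSMOW_18O_16O):
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# Hypothetical sample ratio, for illustration only:
print(round(delta_per_mil(0.0019852), 2))  # about -9.97 per mil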
The technique was proposed as a new tool for police, anthropologists, archaeologists, and doctors.
Elephant tail hair: what it can reveal
Professor Cerling, helped by James Ehleringer and Christopher Remien (two University of Utah colleagues), George Wittemyer of Colorado State University, also a member of Save the Elephants in Nairobi, and Iain Douglas-Hamilton, founder of Save the Elephants, conducted research around the Samburu and Buffalo Springs national reserves in northern Kenya, analyzing carbon and other stable isotopes in elephant tail hair to discover where and what Victoria, Anastasia and Cleopatra, three daughters of a mother elephant named Queen Elizabeth, usually ate over a six-year period (2000–2006). To monitor their movements, the elephants were equipped with Global Positioning System units that recorded their positions every hour for the whole research period. To obtain the tail hair samples, elephants were immobilized with drug-filled dart guns when necessary. Since the hair grows about an inch per month, a single hair contained an isotopic record of diet spanning an 18-month period.
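Given the growth rate of about an inch per month, each position along a hair corresponds to an approximate date, which is how an isotopic record can be aligned with a GPS track. A Python sketch of that mapping; the sampling date below is a placeholder, not a date from the study.

# Map a position along a tail hair to an approximate date, using the growth
# rate of about one inch per month mentioned above.
from datetime import date, timedelta

INCHES_PER_MONTH = 1.0

def segment_date(inches_from_root, sampling_date):
    months_ago = inches_from_root / INCHES_PER_MONTH
    return sampling_date - timedelta(days=30.44 * months_ago)

# Placeholder sampling date for illustration only:
print(segment_date(6.0, date(2006, 1, 1)))  # about six months before sampling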
Wet and dry seasons: different responses
The analysis of ratios of carbon-13 to carbon-12 along the length of a single elephant hair led Cerling and his team to understand the elephants' diet. During the wet season, after the grass had grown long enough for elephants to grab with their trunks, their tail hair showed the presence of a different form of carbon, indicating a diet rich in high-protein grass. During the dry season, by contrast, analysis of the hair showed that the elephants had switched to shrubs and trees.
Birth and Cattle
In the Samburu and Buffalo Springs reserves, five weeks after the rainy season had started, the grass became rich in nutrients and the females were most likely to conceive, giving birth 22 months later, just in time for another rainy season to provide nutrients to the grass they would eat: the cycle could then restart.
The research also pointed out how developed the competition between elephants and cattle is: during the elephants' typical wet-season grazing, overgrazing by cattle caused the grass to be very short, limiting the elephants' access to it and out-competing them. This situation could have affected the elephants' ability to bulk up for pregnancy.
Behavior
All these analyses also indicated that some elephant families are friendlier than others, and showed that dominant families settle in the best places, where there is plenty of food and water.
Publications by Cerling
Thure Cerling; Iain Douglas-Hamilton; Lee Siegel: "Elephant Tracks" University of Utah News Release, January 2, 2006.
References
External links
www.scientificblogging.com Crime fighting tool hair reveals where murder victims drank water
Chemistry World - News - 2008 February
1949 births
Living people
Geochemistry
Members of the United States National Academy of Sciences
Fellows of the American Association for the Advancement of Science
University of Utah faculty
Iowa State University alumni
UC Berkeley College of Letters and Science alumni
Oak Ridge National Laboratory people | Thure E. Cerling | Chemistry | 1,694 |
22,862,812 | https://en.wikipedia.org/wiki/Red%20heat | The practice of using colours to determine the temperature of a piece of (usually) ferrous metal comes from blacksmithing. Long before thermometers were widely available, it was necessary to know what state the metal was in for heat treating it and the only way to do this was to heat it up to a colour which was known to be best for the work.
Chapman
According to Chapman's Workshop Technology, the colours which can be observed in steel are:
Stirling
In 1905, Stirling Consolidated Boiler Company published a slightly different set of values:
See also
Black-body radiation
Color temperature
Incandescence
Notes
References
Metallurgy
Temperature | Red heat | Physics,Chemistry,Materials_science,Engineering | 130 |
72,738,124 | https://en.wikipedia.org/wiki/IMGT | IMGT or the international ImMunoGeneTics information system is a collection of databases and resources for immunoinformatics, particularly the V, D, J, and C gene sequences, as well as a providing other tools and data related to the adaptive immune system. IMGT/LIGM-DB, the first and still largest database hosted as part of IMGT contains reference nucleotide sequences for 360 species' T-cell receptor and immunoglobulin molecules, as of 2023. These genes encode the proteins which are the foundation of adaptive immunity, which allows highly specific recognition and memory of pathogens.
History
IMGT was founded in June, 1989, by Marie-Paule Lefranc, an immunologist working at University of Montpellier. The project was presented to the 10th Human Genome Mapping Workshop, and resulted in the recognition of V, D, J, and C regions as genes. The first resource created was IMGT/LIGM-DB, a reference for nucleotide sequences of T-cell receptor and immunoglobulin of humans, and later vertebrate species. IMGT was created under the auspices of Laboratoire d'ImmunoGénétique Moléculaire at the University of Montpellier as well as French National Centre for Scientific Research (CNRS).
As both T-cell receptors and immunoglobulin molecules are built through a process of recombination of nucleotide sequences, the annotation of the building block regions and their role is unique within the genome. To standardize terminology and references, the IMGT-NC was created in 1992 and recognized by the International Union of Immunological Societies as a nomenclature subcommittee. Other tools include IMGT/Collier-de-Perles, a method for two dimensional representation of receptor amino acid sequences, and IMGT/mAb-DB, a database of monoclonal antibodies. Now maintained by the HLA Informatics Group, the primary reference for human HLA, IPD-IMGT/HLA Database, originated in part with IMGT. It was merged with the Immuno Polymorphism Database in 2003 to form the current reference.
Since 2015, IMGT has been headed by Sofia Kossida.
See also
Open science data
Computational immunology
Immunomics
References
Genetics databases
Bioinformatics
Biological databases | IMGT | Engineering,Biology | 481 |
173,965 | https://en.wikipedia.org/wiki/3D%20rotation%20group | In mechanics and geometry, the 3D rotation group, often denoted SO(3), is the group of all rotations about the origin of three-dimensional Euclidean space under the operation of composition.
By definition, a rotation about the origin is a transformation that preserves the origin, Euclidean distance (so it is an isometry), and orientation (i.e., handedness of space). Composing two rotations results in another rotation, every rotation has a unique inverse rotation, and the identity map satisfies the definition of a rotation. Owing to the above properties (along with the associative property of composition), the set of all rotations is a group under composition.
Every non-trivial rotation is determined by its axis of rotation (a line through the origin) and its angle of rotation. Rotations are not commutative (for example, rotating R 90° in the x-y plane followed by S 90° in the y-z plane is not the same as S followed by R), making the 3D rotation group a nonabelian group. Moreover, the rotation group has a natural structure as a manifold for which the group operations are smoothly differentiable, so it is in fact a Lie group. It is compact and has dimension 3.
Rotations are linear transformations of R3 (three-dimensional Euclidean space) and can therefore be represented by matrices once a basis of R3 has been chosen. Specifically, if we choose an orthonormal basis, every rotation is described by an orthogonal 3 × 3 matrix (i.e., a 3 × 3 matrix with real entries which, when multiplied by its transpose, results in the identity matrix) with determinant 1. The group SO(3) can therefore be identified with the group of these matrices under matrix multiplication. These matrices are known as "special orthogonal matrices", explaining the notation SO(3).
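These two defining conditions can be checked numerically. A minimal NumPy sketch (a generic illustration, not drawn from the article's sources):

import numpy as np

def is_special_orthogonal(R, tol=1e-10):
    # Orthogonality (R^T R = I) and unit determinant together define SO(3).
    R = np.asarray(R, dtype=float)
    return (np.allclose(R.T @ R, np.eye(3), atol=tol)
            and abs(np.linalg.det(R) - 1.0) < tol)

# A 90-degree rotation about the z-axis is special orthogonal:
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
print(is_special_orthogonal(Rz))          # True
print(is_special_orthogonal(-np.eye(3)))  # False: determinant is -1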
The group SO(3) is used to describe the possible rotational symmetries of an object, as well as the possible orientations of an object in space. Its representations are important in physics, where they give rise to the elementary particles of integer spin.
Length and angle
Besides just preserving length, rotations also preserve the angles between vectors. This follows from the fact that the standard dot product between two vectors u and v can be written purely in terms of length (see the law of cosines): u · v = (1/2)(|u|² + |v|² − |u − v|²).
It follows that every length-preserving linear transformation in R3 preserves the dot product, and thus the angle between vectors. Rotations are often defined as linear transformations that preserve the inner product on R3, which is equivalent to requiring them to preserve length. See classical group for a treatment of this more general approach, where SO(3) appears as a special case.
Orthogonal and rotation matrices
Every rotation maps an orthonormal basis of R3 to another orthonormal basis. Like any linear transformation of finite-dimensional vector spaces, a rotation R can always be represented by a matrix. With respect to the standard basis e1, e2, e3 of R3, the columns of R are given by Re1, Re2, Re3. Since the standard basis is orthonormal, and since R preserves angles and length, the columns of R form another orthonormal basis. This orthonormality condition can be expressed in the form
R^T R = I,
where R^T denotes the transpose of R and I is the 3 × 3 identity matrix. Matrices for which this property holds are called orthogonal matrices. The group of all 3 × 3 orthogonal matrices is denoted O(3), and consists of all proper and improper rotations.
In addition to preserving length, proper rotations must also preserve orientation. A matrix will preserve or reverse orientation according to whether the determinant of the matrix is positive or negative. For an orthogonal matrix R, note that det R^T = det R implies (det R)² = 1, so that det R = ±1. The subgroup of orthogonal matrices with determinant +1 is called the special orthogonal group, denoted SO(3).
Thus every rotation can be represented uniquely by an orthogonal matrix with unit determinant. Moreover, since composition of rotations corresponds to matrix multiplication, the rotation group is isomorphic to the special orthogonal group SO(3).
Improper rotations correspond to orthogonal matrices with determinant −1, and they do not form a group because the product of two improper rotations is a proper rotation.
Group structure
The rotation group is a group under function composition (or equivalently the product of linear transformations). It is a subgroup of the general linear group GL(3, R) consisting of all invertible linear transformations of the real 3-space R3.
Furthermore, the rotation group is nonabelian. That is, the order in which rotations are composed makes a difference. For example, a quarter turn around the positive x-axis followed by a quarter turn around the positive y-axis is a different rotation than the one obtained by first rotating around y and then x.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
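The quarter-turn example above is easy to verify numerically; a minimal NumPy sketch:

import numpy as np

# Quarter turns about the positive x- and y-axes:
Rx90 = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Ry90 = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]], dtype=float)

# Order of composition matters: the two products are different rotations.
print(np.allclose(Rx90 @ Ry90, Ry90 @ Rx90))  # False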
Complete classification of finite subgroups
The finite subgroups of SO(3) are completely classified.
Every finite subgroup is isomorphic either to a member of one of two countably infinite families of planar isometries, the cyclic groups Cn or the dihedral groups Dn, or to one of three other groups: the tetrahedral group (isomorphic to A4), the octahedral group (isomorphic to S4), or the icosahedral group (isomorphic to A5).
Axis of rotation
Every nontrivial proper rotation in 3 dimensions fixes a unique 1-dimensional linear subspace of R3, which is called the axis of rotation (this is Euler's rotation theorem). Each such rotation acts as an ordinary 2-dimensional rotation in the plane orthogonal to this axis. Since every 2-dimensional rotation can be represented by an angle φ, an arbitrary 3-dimensional rotation can be specified by an axis of rotation together with an angle of rotation about this axis. (Technically, one needs to specify an orientation for the axis and whether the rotation is taken to be clockwise or counterclockwise with respect to this orientation.)
For example, counterclockwise rotation about the positive z-axis by angle φ is given by the matrix with rows (cos φ, −sin φ, 0), (sin φ, cos φ, 0) and (0, 0, 1).
Given a unit vector n in R3 and an angle φ, let R(φ, n) represent a counterclockwise rotation about the axis through n (with orientation determined by n). Then
R(0, n) is the identity transformation for any n
R(φ, n) = R(−φ, −n)
R(π + φ, n) = R(π − φ, −n).
Using these properties one can show that any rotation can be represented by a unique angle φ in the range 0 ≤ φ ≤ π and a unit vector n such that
n is arbitrary if φ = 0
n is unique if 0 < φ < π
n is unique up to a sign if φ = π (that is, the rotations R(π, ±n) are identical).
In the next section, this representation of rotations is used to identify SO(3) topologically with three-dimensional real projective space.
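The axis-angle description converts to a matrix via Rodrigues' rotation formula, R = I + (sin φ)K + (1 − cos φ)K², where K is the cross-product matrix of the unit axis n. A NumPy sketch:

import numpy as np

def rotation_matrix(axis, angle):
    # Rodrigues' formula: R = I + sin(angle) K + (1 - cos(angle)) K^2,
    # where K is the skew-symmetric cross-product matrix of the unit axis.
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Counterclockwise quarter turn about the z-axis:
print(np.round(rotation_matrix([0, 0, 1], np.pi / 2), 6))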
Topology
The Lie group SO(3) is diffeomorphic to the real projective space RP3.
Consider the solid ball in R3 of radius π (that is, all points of R3 at distance π or less from the origin). Given the above, for every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The identity rotation corresponds to the point at the center of the ball. Rotations through angles between 0 and π (not including either) are on the same axis at the same distance. Rotations through angles between 0 and −π correspond to the point on the same axis and distance from the origin but on the opposite side of the origin. The one remaining issue is that the two rotations through π and through −π are the same. So we identify (or "glue together") antipodal points on the surface of the ball. After this identification, we arrive at a topological space homeomorphic to the rotation group.
Indeed, the ball with antipodal surface points identified is a smooth manifold, and this manifold is diffeomorphic to the rotation group. It is also diffeomorphic to the real 3-dimensional projective space so the latter can also serve as a topological model for the rotation group.
These identifications illustrate that SO(3) is connected but not simply connected. As to the latter, in the ball with antipodal surface points identified, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how it is deformed, the start and end point have to remain antipodal, or else the loop will "break open". In terms of rotations, this loop represents a continuous sequence of rotations about the z-axis starting (for example) at the identity (center of the ball), through the south pole, jumping to the north pole and ending again at the identity rotation (i.e., a series of rotations through an angle φ where φ runs from 0 to 2π).
Surprisingly, running through the path twice, i.e., running from the north pole down to the south pole, jumping back to the north pole (using the fact that north and south poles are identified), and then again running from the north pole down to the south pole, so that φ runs from 0 to 4π, gives a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The plate trick and similar tricks demonstrate this practically.
The same argument can be performed in general, and it shows that the fundamental group of SO(3) is the cyclic group of order 2 (a fundamental group with two elements). In physics applications, the non-triviality (more than one element) of the fundamental group allows for the existence of objects known as spinors, and is an important tool in the development of the spin–statistics theorem.
The universal cover of SO(3) is a Lie group called Spin(3). The group Spin(3) is isomorphic to the special unitary group SU(2); it is also diffeomorphic to the unit 3-sphere S3 and can be understood as the group of versors (quaternions with absolute value 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in quaternions and spatial rotations. The map from S3 onto SO(3) that identifies antipodal points of S3 is a surjective homomorphism of Lie groups, with kernel {±1}. Topologically, this map is a two-to-one covering map. (See the plate trick.)
Connection between SO(3) and SU(2)
In this section, we give two different constructions of a two-to-one and surjective homomorphism of SU(2) onto SO(3).
Using quaternions of unit norm
The group SU(2) is isomorphic to the quaternions of unit norm via the map that sends the quaternion q = a + bi + cj + dk to the 2 × 2 complex matrix with rows (a + bi, c + di) and (−c + di, a − bi),
restricted to the unit quaternions, for which a² + b² + c² + d² = 1.
Let us now identify R3 with the span of the imaginary units i, j, k. One can then verify that if x is in R3 and q is a unit quaternion, then qxq⁻¹ lies in R3 as well.
Furthermore, the map x ↦ qxq⁻¹ is a rotation of R3. Moreover, the map corresponding to −q is the same as the map corresponding to q. This means that there is a homomorphism from quaternions of unit norm to the 3D rotation group SO(3).
One can work this homomorphism out explicitly: the unit quaternion q = w + xi + yj + zk, with w² + x² + y² + z² = 1,
is mapped to the rotation matrix whose rows are (1 − 2y² − 2z², 2xy − 2zw, 2xz + 2yw), (2xy + 2zw, 1 − 2x² − 2z², 2yz − 2xw) and (2xz − 2yw, 2yz + 2xw, 1 − 2x² − 2y²).
This is a rotation around the vector (x, y, z) by an angle 2θ, where cos θ = w and |sin θ| = |(x, y, z)|. The proper sign for sin θ is implied once the signs of the axis components are fixed. The two-to-one nature of the homomorphism is apparent, since both q and −q map to the same rotation matrix.
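A short sketch of this correspondence in code, using the component formula above; the two-to-one nature is visible because q and −q produce the same matrix:

import numpy as np

def quaternion_to_matrix(w, x, y, z):
    # Unit quaternion w + xi + yj + zk to rotation matrix (formula above).
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

# Quarter turn about the z-axis: q = cos(pi/4) + sin(pi/4) k
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(np.allclose(quaternion_to_matrix(*q), quaternion_to_matrix(*(-q))))  # True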
Using Möbius transformations
The general reference for this section is given in the bibliography. The points on the sphere
can, barring the north pole N, be put into one-to-one bijection with points S(P) on a plane, as shown in the figure. The map S is called stereographic projection.
Let the coordinates on be . The line passing through and can be parametrized as
Demanding that the of equals , one finds
We have Hence the map
where, for later convenience, the plane is identified with the complex plane
For the inverse, write as
and demand to find and thus
If g is a rotation, then it will take points on the sphere to points on the sphere by its standard action on the embedding space. By composing this action with stereographic projection, one obtains a transformation of the plane,
and thus a transformation of the complex plane associated to the rotation g of the sphere.
It turns out that a rotation represented in this way can be expressed as a matrix (where the notation is recycled to use the same name for the matrix as for the transformation it represents). To identify this matrix, consider first a rotation about the z-axis through an angle φ,
Hence
which, unsurprisingly, is a rotation in the complex plane. In an analogous way, if g is a rotation about the x-axis through an angle θ, then
which, after a little algebra, becomes
These two rotations thus correspond to bilinear transforms of the complex plane: they are examples of Möbius transformations.
A general Möbius transformation is given by
z ↦ (αz + β)/(γz + δ), with αδ − βγ ≠ 0.
The rotations above generate all of SO(3), and the composition rules of the Möbius transformations show that any composition of rotations translates to the corresponding composition of Möbius transformations. The Möbius transformations can be represented by 2 × 2 matrices with entries α, β, γ, δ,
since a common factor of the entries cancels.
For the same reason, the matrix is not uniquely defined, since multiplication by −I has no effect on either the determinant or the Möbius transformation. The composition law of Möbius transformations follows that of the corresponding matrices. The conclusion is that each Möbius transformation corresponds to two matrices g and −g.
Using this correspondence one may write
These matrices are unitary, and thus lie in SU(2). The correspondence can also be written explicitly in terms of Euler angles; for the general case, see the references.
The quaternion formulation of the composition of two rotations RB and RA also yields directly the rotation axis and angle of the composite rotation RC = RBRA.
Let the quaternion associated with a spatial rotation R be constructed from its rotation axis S and the rotation angle φ about this axis; the associated quaternion is given by q = cos(φ/2) + S sin(φ/2).
Then the composition of the rotation RB with RA is the rotation RC = RBRA, with rotation axis and angle defined by the product of the quaternions
that is
Expand this product to obtain
Divide both sides of this equation by the identity, which is the law of cosines on a sphere,
and compute
This is Rodrigues' formula for the axis of a composite rotation defined in terms of the axes of the two rotations. He derived this formula in 1840 (see page 408).
The three rotation axes A, B, and C form a spherical triangle, and the dihedral angles between the planes formed by the sides of this triangle are defined by the rotation angles.
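This construction is straightforward to carry out numerically. A sketch that composes two axis-angle rotations through the quaternion product and reads off the composite axis and angle (half-angle convention as above):

import numpy as np

def axis_angle_to_quat(axis, angle):
    # q = cos(angle/2) + sin(angle/2) * (unit axis), stored as (w, x, y, z).
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * n))

def quat_multiply(q, p):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, v1 = q[0], q[1:]
    w2, v2 = p[0], p[1:]
    return np.concatenate(([w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)))

# Composite of a quarter turn about x followed by a quarter turn about y:
qc = quat_multiply(axis_angle_to_quat([0, 1, 0], np.pi / 2),
                   axis_angle_to_quat([1, 0, 0], np.pi / 2))
angle = 2 * np.arccos(qc[0])
axis = qc[1:] / np.sin(angle / 2)
print(np.degrees(angle), axis)  # 120 degrees about (1, 1, -1)/sqrt(3)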
Infinitesimal rotations
Realizations of rotations
We have seen that there are a variety of ways to represent rotations:
as orthogonal matrices with determinant 1,
by axis and rotation angle
in quaternion algebra with versors and the map 3-sphere S3 → SO(3) (see quaternions and spatial rotations)
in geometric algebra as a rotor
as a sequence of three rotations about three fixed axes; see Euler angles.
Spherical harmonics
The group of three-dimensional Euclidean rotations has an infinite-dimensional representation on the Hilbert space
where are spherical harmonics. Its elements are square integrable complex-valued functions on the sphere. The inner product on this space is given by
If f is an arbitrary square integrable function defined on the unit sphere S2, then it can be expressed as
where the expansion coefficients are given by
The Lorentz group action restricts to that of SO(3) and is expressed as
This action is unitary, meaning that
The can be obtained from the of above using Clebsch–Gordan decomposition, but they are more easily directly expressed as an exponential of an odd-dimensional -representation (the 3-dimensional one is exactly ). A formula for valid for all ℓ is given. In this case the space decomposes neatly into an infinite direct sum of irreducible odd finite-dimensional representations according to
This is characteristic of infinite-dimensional unitary representations of . If is an infinite-dimensional unitary representation on a separable Hilbert space, then it decomposes as a direct sum of finite-dimensional unitary representations. Such a representation is thus never irreducible. All irreducible finite-dimensional representations can be made unitary by an appropriate choice of inner product,
where the integral is the unique invariant integral over SO(3) normalized to 1, here expressed using the Euler angles parametrization. The inner product inside the integral is any inner product on the representation space.
Generalizations
The rotation group generalizes quite naturally to n-dimensional Euclidean space, with its standard Euclidean structure. The group of all proper and improper rotations in n dimensions is called the orthogonal group O(n), and the subgroup of proper rotations is called the special orthogonal group SO(n), which is a Lie group of dimension .
In special relativity, one works in a 4-dimensional vector space, known as Minkowski space rather than 3-dimensional Euclidean space. Unlike Euclidean space, Minkowski space has an inner product with an indefinite signature. However, one can still define generalized rotations which preserve this inner product. Such generalized rotations are known as Lorentz transformations and the group of all such transformations is called the Lorentz group.
The rotation group SO(3) can be described as a subgroup of E+(3), the Euclidean group of direct isometries of Euclidean space. This larger group is the group of all motions of a rigid body: each of these is a combination of a rotation about an arbitrary axis and a translation, or put differently, a combination of an element of SO(3) and an arbitrary translation.
In general, the rotation group of an object is the symmetry group within the group of direct isometries; in other words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same as the full symmetry group.
See also
Orthogonal group
Angular momentum
Coordinate rotations
Charts on SO(3)
Representations of SO(3)
Euler angles
Rodrigues' rotation formula
Infinitesimal rotation
Pin group
Quaternions and spatial rotations
Rigid body
Spherical harmonics
Plane of rotation
Lie group
Pauli matrix
Plate trick
Three-dimensional rotation operator
Footnotes
References
Bibliography
(translation of the original 1932 edition, Die Gruppentheoretische Methode in der Quantenmechanik).
Lie groups
Rotational symmetry
Rotation in three dimensions
Euclidean solid geometry
3-manifolds | 3D rotation group | Physics,Mathematics | 3,931 |
56,374,292 | https://en.wikipedia.org/wiki/Type%20theory%20with%20records | Type theory with records is a formal semantics representation framework, using records to express type theory types. It has been used in natural language processing, principally computational semantics and dialogue systems.
Syntax
A record type is a set of fields. A field is a pair consisting of a label and a type. Within a record type, field labels are unique. The witness of a record type is a record. A record is a similar set of fields, but fields contain objects instead of types. The object in each field must be of the type declared in the corresponding field in the record type.
Basic type:
Object:
Ptype:
Object:
where a and b are individuals (of type Ind), p is a proof that a is a boy, etc.
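Since the original formulas did not survive extraction, the following is a hedged LaTeX reconstruction of the kind of record type and witnessing record standardly used in the TTR literature; the labels x and c_boy and the names a and p are illustrative, not recovered from the original.

\[
T \;=\; \begin{bmatrix} x & : & \mathit{Ind} \\ c_{\mathrm{boy}} & : & \mathit{boy}(x) \end{bmatrix}
\qquad
r \;=\; \begin{bmatrix} x & = & a \\ c_{\mathrm{boy}} & = & p \end{bmatrix}
\]

The record r is a witness of the record type T provided a is of type Ind and p is a proof of boy(a).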
References
Type theory
Semantics | Type theory with records | Mathematics | 148 |
7,536,770 | https://en.wikipedia.org/wiki/Dorian%20M.%20Goldfeld | Dorian Morris Goldfeld (born January 21, 1947) is an American mathematician working in analytic number theory and automorphic forms at Columbia University.
Professional career
Goldfeld received his B.S. degree in 1967 from Columbia University. His doctoral dissertation, entitled "Some Methods of Averaging in the Analytical Theory of Numbers", was completed under the supervision of Patrick X. Gallagher in 1969, also at Columbia. He has held positions at the University of California at Berkeley (Miller Fellow, 1969–1971), Hebrew University (1971–1972), Tel Aviv University (1972–1973), Institute for Advanced Study (1973–1974), in Italy (1974–1976), at MIT (1976–1982), University of Texas at Austin (1983–1985) and Harvard (1982–1985). Since 1985, he has been a professor at Columbia University.
He is a member of the editorial board of Acta Arithmetica and of The Ramanujan Journal. On January 1, 2018 he became the Editor-in-Chief of the Journal of Number Theory.
He is a co-founder and board member of Veridify Security, formerly SecureRF, a corporation that has developed the world's first linear-based security solutions.
Goldfeld advised several doctoral students including M. Ram Murty. In 1986, he brought Shou-Wu Zhang to the United States to study at Columbia.
Research interests
Goldfeld's research interests include various topics in number theory. In his thesis, he proved a version of Artin's conjecture on primitive roots on the average without the use of the Riemann Hypothesis.
In 1976, Goldfeld provided an ingredient for the effective solution of Gauss's class number problem for imaginary quadratic fields. Specifically, he proved an effective lower bound for the class number of an imaginary quadratic field assuming the existence of an elliptic curve whose L-function has a zero of order at least 3 at s = 1. (Such a curve was found soon after by Gross and Zagier.) This effective lower bound then allows the determination of all imaginary quadratic fields with a given class number after a finite number of computations.
His work on the Birch and Swinnerton-Dyer conjecture includes the proof of an estimate for a partial Euler product associated to an elliptic curve, and bounds for the order of the Tate–Shafarevich group.
Together with his collaborators, Dorian Goldfeld has introduced the theory of multiple Dirichlet series, objects that extend the fundamental Dirichlet series in one variable.
He has also made contributions to the understanding of Siegel zeroes, to the ABC conjecture, to modular forms on GL(n), and to cryptography (Arithmetica cipher, Anshel–Anshel–Goldfeld key exchange).
Together with his wife, Dr. Iris Anshel, and father-in-law, Dr. Michael Anshel, both mathematicians, Dorian Goldfeld founded the field of braid group cryptography.
Awards and honors
In 1987 he received the Frank Nelson Cole Prize in Number Theory, one of the most prestigious prizes in number theory, for his solution of Gauss's class number problem for imaginary quadratic fields. He has also held the Sloan Fellowship (1977–1979) and in 1985 he received the Vaughan prize. In 1986 he was an invited speaker at the International Congress of Mathematicians in Berkeley. In April 2009 he was elected a Fellow of the American Academy of Arts and Sciences. In 2012 he became a fellow of the American Mathematical Society.
Selected works
References
External links
Dorian Goldfeld's Home Page at Columbia University
20th-century American mathematicians
21st-century American mathematicians
Fellows of the American Academy of Arts and Sciences
Fellows of the American Mathematical Society
American number theorists
Columbia School of Engineering and Applied Science alumni
Columbia University faculty
University of California, Berkeley faculty
Academic staff of the Hebrew University of Jerusalem
Academic staff of Tel Aviv University
Institute for Advanced Study visiting scholars
University of Texas at Austin faculty
Harvard University Department of Mathematics faculty
1947 births
Living people
People from Marburg
Abc conjecture | Dorian M. Goldfeld | Mathematics | 805 |
30,306,025 | https://en.wikipedia.org/wiki/Plant%20tissue%20test | The nutrient content of a plant can be assessed by testing a sample of tissue from that plant. These tests are important in agriculture since fertilizer application can be fine-tuned if the plants nutrient status is known. Nitrogen most commonly limits plant growth and is the most managed nutrient.
Most useful times
Tissue tests are almost always useful, since they provide additional information about the physiology of the crop. They are especially useful in certain situations:
For monitoring the nitrogen status of a crop throughout the growing season. Soil tests are commonly performed before planting.
In highly controlled environments, such as hydroponic production in greenhouses, crops require a constant feed of nutrients in their water supply, and even a transient lack of nutrients can reduce yields. Soil test results cannot reveal actual nutrient uptake and nutrient mobility, so soil tests may be insufficient for managing crop nitrogen status; soil testing may be more suitable when growing crops in slow-release composts and manures.
When there is a risk that a nutrient application blocks the uptake, or unlocks the mobility, of other nutrients. Over-application can lead to toxic conditions, for example during the application of poultry litter that contains micronutrients such as copper in high concentrations.
To guarantee that nitrogen levels in the crop do not exceed a certain limit. High concentrations of nitrate have implications for human health, because nitrates can be converted into nitrites in the human digestive tract; nitrites can react with other compounds in the gut to form nitrosamines, which appear to be carcinogenic. Crops may contain high concentrations of nitrate when excess fertilizer is applied. This can be an issue in crops with high levels of nitrate uptake, such as spinach and lettuce.
Disadvantages of traditional tests
Traditional tissue tests are destructive tests where a sample is sent to a laboratory for analysis. Any laboratory test (soil or tissue test) performed by a commercial company will cost the grower a fee. Laboratory tests take at least a week to complete, usually 2 weeks. It takes time to dry the samples, send them to the lab, complete the lab-tests, and then return the results to the grower. This means the results may not be received by the grower until after the ideal time to take action. Nitrogen tissue tests that can be performed quickly in the field make tissue testing much more useful.
Another issue with laboratory tissue tests is that the results are often difficult to interpret.
Non-destructive tissue tests
Non-destructive tissue tests have advantages over traditional destructive tests. Non-destructive tissue tests can be performed easily in the field, and provide results much faster than laboratory tests.
To non-destructively assess nitrogen content, one can assess the chlorophyll content. Nitrogen content is linked to chlorophyll content because a molecule of chlorophyll contains four nitrogen atoms.
Chlorophyll content meters
Nitrogen deficiency can be detected with a chlorophyll content meter. The meters determine chlorophyll content by shining a light through a leaf inserted in a slot and measuring the amount of light transmitted.
Chlorophyll meters use different units of measure. For instance, while Minolta uses "SPAD units", the Dualex (produced by METOS® from Pessl Instruments GmbH) uses μg/cm² and ADC uses a Chlorophyll Content Index. All measure essentially the same thing, and conversion tables are available.
While traditional absorption instruments have been very popular with plant scientists and have proved to work well with broad leaf species, they do have limitations.
Limitations of absorption meters:
The sample must completely cover the measuring aperture. Any gaps will give false readings
The sample measured must be thin, so measuring light is not completely absorbed
The surface of the sample must be flat
The Kautsky induction effect limits repeated measurements at the same site.
Variation in measurements can be caused by mid ribs and veins
Linear correlation limited to below 300 mg/m2.
There are therefore samples which are not suitable for the absorption technique, these include small leaves, most CAM plants, conifer needles, fruit, algae on rocks, bryophytes, lichens and plant structures like stems and petioles. For these samples it is necessary to measure chlorophyll content using chlorophyll fluorescence.
In his scientific paper Gitelson (1999) states, "The ratio between chlorophyll fluorescence, at 735 nm and the wavelength range 700nm to 710 nm, F735/F700 was found to be linearly proportional to the chlorophyll content (with determination coefficient, r2, more than 0.95) and thus this ratio can be used as a precise indicator of chlorophyll content in plant leaves." The fluorescent ratio chlorophyll content meters use this technique to measure these more difficult samples.
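A linear relationship of this kind turns directly into a calibration. The Python sketch below converts an F735/F700 ratio into a chlorophyll estimate using an assumed linear fit; the slope and intercept are placeholder values (Gitelson reports linearity, not these particular constants), since real instruments carry their own calibrations.

# Chlorophyll content from the F735/F700 fluorescence ratio via a linear fit.
# Slope and intercept are placeholders, not values from Gitelson (1999).
CAL_SLOPE = 450.0       # mg/m^2 per unit of fluorescence ratio (placeholder)
CAL_INTERCEPT = -120.0  # mg/m^2 (placeholder)

def chlorophyll_mg_per_m2(f735, f700):
    return CAL_SLOPE * (f735 / f700) + CAL_INTERCEPT

print(chlorophyll_mg_per_m2(1.2, 1.0))  # 420.0 with these placeholder constants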
Fluorescent ratio chlorophyll content meters have the following advantages:
They can measure small samples because the measuring aperture does not need to be filled
Measurements as high as 675 mg/m2 possible (only 300 mg/m2 with absorption technique)
Curved surfaces such as pine needles and petioles can be measured
Thick samples such as fruit and cacti can be measured
Multiple measurements can be made at the same site because there is no Kautsky effect
More consistent readings because leaf veins and mid ribs can be avoided
By measuring chlorophyll fluorescence, plant ecophysiology can be investigated. Chlorophyll fluorometers are used by plant researchers to assess plant stress.
Chlorophyll fluorometry
Chlorophyll fluorometers are designed to measure variable fluorescence of photosystem II, or PSII. With most types of plant stress, this variable fluorescence can be used to measure the level of plant stress. The most commonly used protocols include: Fv/Fm, a dark adapted protocol, Y(II) or ΔF/Fm’ a light adapted test that is used during steady state photosynthesis, and various OJIP, dark adapted protocols that follow different schools of thought. Longer fluorescence quenching protocols can also be used for plant stress measurement, but because the time required for a measurement is extremely long, only small plant populations can probably be tested. NPQ or non-photochemical quenching is the most popular of these quenching parameters, but other parameters and other quenching protocols are also used.
Another test protocol based on fluorescence is the OJIP test. This method analyses the increase in fluorescence emitted from dark-adapted leaves when they are illuminated. The rise in fluorescence during the first second of illumination follows a curve with intermediate peaks, called the O, J, I, and P steps. In addition, the K step appears during specific types of stress, such as N-deficiency. Research has shown the K step is able to measure N-stress.
See also
Soil test
Fertilizer
Fertility (soil)
Nitrogen deficiency
References
Fertilizers | Plant tissue test | Chemistry | 1,432 |
4,301,763 | https://en.wikipedia.org/wiki/Grain%20growth | In materials science, grain growth is the increase in size of grains (crystallites) in a material at high temperature. This occurs when recovery and recrystallisation are complete and further reduction in the internal energy can only be achieved by reducing the total area of grain boundary. The term is commonly used in metallurgy but is also used in reference to ceramics and minerals. The behaviors of grain growth is analogous to the coarsening behaviors of grains, which implied that both of grain growth and coarsening may be dominated by the same physical mechanism.
Importance of grain growth
The practical performance of polycrystalline materials is strongly affected by the microstructure formed inside, which is mostly dominated by grain growth behavior. For example, most materials exhibit the Hall–Petch effect at room temperature and so display a higher yield stress when the grain size is reduced (assuming abnormal grain growth has not taken place). At high temperatures the opposite is true, since the open, disordered nature of grain boundaries means that vacancies can diffuse more rapidly down boundaries, leading to more rapid Coble creep. Since boundaries are regions of high energy, they make excellent sites for the nucleation of precipitates and other second phases, e.g. Mg–Si–Cu phases in some aluminium alloys or martensite platelets in steel. Depending on the second phase in question, this may have positive or negative effects.
Rules of grain growth
Grain growth has long been studied primarily by the examination of sectioned, polished and etched samples under the optical microscope. Although such methods enabled the collection of a great deal of empirical evidence, particularly with regard to factors such as temperature or composition, the lack of crystallographic information limited the development of an understanding of the fundamental physics. Nevertheless, the following became well-established features of grain growth:
Grain growth occurs by the movement of grain boundaries and also by coalescence (i.e. like water droplets)
Grain growth involves competition between ordered coalescence and the movement of grain boundaries
Boundary movement may be discontinuous and the direction of motion may change suddenly during abnormal grain growth.
One grain may grow into another grain whilst being consumed from the other side
The rate of consumption often increases when the grain is nearly consumed
A curved boundary typically migrates towards its centre of curvature
Classical driving force
The boundary between one grain and its neighbour (grain boundary) is a defect in the crystal structure and so it is associated with a certain amount of energy. As a result, there is a thermodynamic driving force for the total area of boundary to be reduced. If the grain size increases, accompanied by a reduction in the actual number of grains per volume, then the total area of grain boundary will be reduced.
In the classic theory, the local velocity of a grain boundary at any point is proportional to the local curvature of the grain boundary, i.e.:
v = Mγκ,
where v is the velocity of the grain boundary, M is the grain boundary mobility (which generally depends on the orientation of the two grains), γ is the grain boundary energy and κ is the sum of the two principal surface curvatures. For example, the shrinkage velocity of a spherical grain embedded inside another grain is
v = 2Mγ/R,
where R is the radius of the sphere. This driving pressure is very similar in nature to the Laplace pressure that occurs in foams.
In comparison to phase transformations the energy available to drive grain growth is very low and so it tends to occur at much slower rates and is easily slowed by the presence of second phase particles or solute atoms in the structure.
Recently, in contrast to the classic linear relation between grain boundary velocity and curvature, grain boundary velocity and curvature have been observed to be uncorrelated in Ni polycrystals; this conflicting result has been theoretically interpreted by a general model of grain boundary (GB) migration in the previous literature. According to the general GB migration model, the classical linear relation can only be used in a special case.
A general theory of grain growth
Development of theoretical models describing grain growth is an active field of research. Many models have been proposed for grain growth, but no theory has yet been put forth that has been independently validated to apply across the full range of conditions, and many questions remain open. By no means is the following a comprehensive review. One recent theory of grain growth posits that normal grain growth only occurs in polycrystalline systems with grain boundaries which have undergone roughening transitions, and that abnormal and/or stagnant grain growth can only occur in polycrystalline systems with non-zero grain boundary (GB) step free energy. Other models explaining grain coarsening assert that disconnections are responsible for the motion of grain boundaries, with limited experimental evidence suggesting that they govern grain boundary migration and grain growth behavior. Other models have indicated that triple junctions play an important role in determining grain growth behavior in many systems.
Ideal grain growth
Ideal grain growth is a special case of normal grain growth where boundary motion is driven only by local curvature of the grain boundary. It results in the reduction of the total amount of grain boundary surface area, i.e. total energy of the system. Additional contributions to the driving force by e.g. elastic strains or temperature gradients are neglected. If it holds that the rate of growth is proportional to the driving force and that the driving force is proportional to the total amount of grain boundary energy, then it can be shown that the time t required to reach a given grain size is approximated by the equation
d² − d₀² = kt,
where d0 is the initial grain size, d is the final grain size and k is a temperature dependent constant given by an exponential law:
k = k₀ exp(−Q/RT),
where k0 is a constant, T is the absolute temperature, R is the molar gas constant, and Q is the activation energy for boundary mobility. Theoretically, the activation energy for boundary mobility should equal that for self-diffusion but this is often found not to be the case.
In general these equations are found to hold for ultra-high purity materials but rapidly fail when even tiny concentrations of solute are introduced.
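Combining the two relations gives a simple growth-law calculator. A Python sketch assuming the parabolic form d² = d₀² + kt with the Arrhenius k reconstructed above; k₀ and Q below are placeholder values, not measured constants.

import math

R_GAS = 8.314  # molar gas constant, J/(mol K)

def grain_size(d0, t, T, k0=1.0e-8, Q=2.0e5):
    # Parabolic grain growth d^2 = d0^2 + k t with k = k0 exp(-Q / (R T)).
    # k0 (m^2/s) and Q (J/mol) are placeholders for illustration only.
    k = k0 * math.exp(-Q / (R_GAS * T))
    return math.sqrt(d0 ** 2 + k * t)

# 10-micron grains held for one hour at 1400 K:
print(grain_size(10e-6, 3600.0, 1400.0))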
Self-similarity
A long-standing topic in grain growth is the evolution of the grain size distribution. Inspired by the work of Lifshitz and Slyozov on Ostwald ripening, Hillert suggested that in a normal grain growth process the size distribution function must converge to a self-similar solution, i.e. it becomes invariant when the grain size is scaled by a characteristic length of the system that is proportional to the average grain size.
Several simulation studies, however, have shown that the size distribution deviates from Hillert's self-similar solution. Hence a search for a new possible self-similar solution was initiated, which indeed led to a new class of self-similar distribution functions. Large-scale phase field simulations have shown that self-similar behavior is indeed possible within the new distribution functions. It was shown that the origin of the deviation from Hillert's distribution is the geometry of grains, especially when they are shrinking.
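A rough numerical illustration of this self-similar behavior can be obtained from Hillert's classical mean-field rate $dR/dt = \alpha M\gamma(1/R_{cr} - 1/R)$; in the sketch below the critical radius is taken equal to the current mean radius, and all parameter values are simplifying assumptions for demonstration only.

```python
import random

def hillert_step(radii, dt, m_gamma=1.0, alpha=1.0):
    """One explicit Euler step of Hillert's mean-field rate
    dR/dt = alpha * M * gamma * (1/R_cr - 1/R), with the critical
    radius R_cr approximated by the current mean radius."""
    r_cr = sum(radii) / len(radii)
    new = [r + dt * alpha * m_gamma * (1.0 / r_cr - 1.0 / r) for r in radii]
    return [r for r in new if r > 0.0]  # fully shrunken grains vanish

random.seed(0)
radii = [random.uniform(0.5, 1.5) for _ in range(5000)]
for _ in range(2000):
    radii = hillert_step(radii, dt=1e-4)

mean = sum(radii) / len(radii)
scaled = [r / mean for r in radii]  # the distribution of R/<R> should
print(len(radii), max(scaled))      # become roughly time-invariant
```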
Normal vs abnormal
In common with recovery and recrystallisation, growth phenomena can be separated into continuous and discontinuous mechanisms. In the former the microstructure evolves from state A to B (in this case the grains get larger) in a uniform manner. In the latter, the changes occur heterogeneously and specific transformed and untransformed regions may be identified. Abnormal or discontinuous grain growth is characterised by a subset of grains growing at a high rate and at the expense of their neighbours and tends to result in a microstructure dominated by a few very large grains. In order for this to occur the subset of grains must possess some advantage over their competitors such as a high grain boundary energy, locally high grain boundary mobility, favourable texture or lower local second-phase particle density.
Factors hindering growth
If there are additional factors preventing boundary movement, such as Zener pinning by particles, then the grain size may be restricted to a much lower value than might otherwise be expected. This is an important industrial mechanism in preventing the softening of materials at high temperature.
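The classic Zener estimate makes this quantitative: the limiting grain size scales as $d_{max} \approx 4r/(3f)$ for pinning particles of radius $r$ at volume fraction $f$. A minimal sketch with illustrative numbers:

```python
def zener_limit(r_particle, f_volume):
    """Classic Zener estimate of the limiting grain size imposed by
    second-phase particles: d_max ~ 4r / (3f)."""
    return 4.0 * r_particle / (3.0 * f_volume)

# Example: 50 nm particles at 1% volume fraction pin grains at a few microns
print(zener_limit(50e-9, 0.01))  # ~6.7e-6 m
```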
Inhibition
Certain materials especially refractories which are processed at high temperatures end up with excessively large grain size and poor mechanical properties at room temperature. To mitigate this problem in a common sintering procedure, a variety of dopants are often used to inhibit grain growth.
References
F. J. Humphreys and M. Hatherly (1995); Recrystallization and Related Annealing Phenomena, Elsevier
Materials science
Metallurgy | Grain growth | Physics,Chemistry,Materials_science,Engineering | 1,725 |
63,771,069 | https://en.wikipedia.org/wiki/Arfiviricetes | Arfiviricetes is a class of viruses.
Orders
The following orders are recognized:
Baphyvirales
Cirlivirales
Cremevirales
Mulpavirales
Recrevirales
Rivendellvirales
Rohanvirales
References
External links
Single-stranded DNA viruses | Arfiviricetes | Biology | 58 |
57,186,853 | https://en.wikipedia.org/wiki/T-cell%20depletion | T-cell depletion (TCD) is the process of T cell removal or reduction, which alters the immune system and its responses. Depletion can occur naturally (i.e. in HIV) or be induced for treatment purposes. TCD can reduce the risk of graft-versus-host disease (GVHD), which is a common issue in transplants.
The idea that TCD of the allograft can eliminate GVHD was first introduced in 1958. In humans the first TCD was performed in severe combined immunodeficiency patients.
Depletion methods
T cell depletion methods can be broadly categorized as either physical or immunological. Examples of physical separation include counterflow centrifugal elutriation, fractionation on density gradients, and differential agglutination with lectins followed by rosetting with sheep red blood cells. Immunological methods utilize antibodies directed against the T cells, either alone or in conjunction with homologous, heterologous, or rabbit complement factors. In addition, these techniques can be used in combination.
These techniques can be performed in vivo, ex vivo, or in vitro. Ex vivo techniques enable a more accurate count of the T cells in a graft and also offer the option to 'add back' a set number of T cells if necessary. Currently, ex vivo techniques most commonly employ positive or negative selection methods using immunomagnetic separation. In contrast, in vivo TCD is performed using anti-T cell antibodies or, most recently, post-HSCT cyclophosphamide.
The method by which depletion occurs can heavily affect the results. Ex vivo TCD is predominantly used in GVHD prevention, where it offers the best results. However, complete ex vivo TCD, especially in acute myeloid leukemia (AML) patients, usually does not improve survival. In vivo depletion often uses monoclonal antibodies (e.g., alemtuzumab) or heteroantisera. In haploidentical hematopoietic stem cell transplantation, in vivo TCD suppresses lymphocytes early on, but the incidence of cytomegalovirus (CMV) reactivation is elevated. These problems can be overcome by combining a TCD haploidentical graft with post-HSCT cyclophosphamide. Meanwhile, in vivo TCD with alemtuzumab and in vitro TCD with CD34+ selection have performed comparably.
Although TCD is beneficial in preventing GVHD, it can cause problems: a delay in recovery of the transplanted individual's immune system and a decreased graft-versus-tumor effect. This is partially addressed by more selective depletion, such as depletion of CD3+ or αβ T cells and CD19+ B cells, which preserves other important cells of the immune system. Another method is the addition of cells back into the graft after a comprehensive TCD procedure; examples are the re-introduction of natural killer (NK) cells, γδ T cells and regulatory T cells (Tregs).
Early on it was apparent that TCD was effective at preventing GVHD but also led to increased graft rejection; this problem can be solved by transplanting more hematopoietic stem cells. This procedure is called 'megadose transplantation', and it prevents rejection because the stem cells have an ability (i.e. veto cell killing) to protect themselves from the host's immune system. Experiments show that transplantation of other types of veto cells along with megadose haploidentical HSCT makes it possible to reduce the toxicity of the conditioning regimen, which makes this treatment much safer and applicable to many more diseases. These veto cells can also exert a graft-versus-tumor effect.
Role in disease
In HIV
HIV has been confirmed to target CD4+ T cells and destroy them, making T cell depletion an important hallmark of HIV. In comparison to HIV- individuals, CD4+ T cells proliferate at a higher rate in those who are HIV+. Apoptosis also occurs more frequently in HIV+ patients.
Depletion of regulatory T cells increases immune activation. Glut1 regulation is associated with the activation of CD4+ T cells, so its expression can be used to track the loss of CD4+ T cells during HIV infection.
Antiretroviral therapy, the most common treatment for patients with HIV, has been shown to restore CD4+ T cell counts.
The body responds to T cell depletion by producing replacement T cells at a compensating rate. However, over time, an individual's immune system can no longer continue to replace the lost CD4+ T cells. This is called the "tap and drain" hypothesis.
In cancer
TCD's role in cancer is growing with the rise of immunotherapies under investigation, specifically those that target self-antigens. One example is antigen-specific CD4+ T cell tolerance, which serves as the primary mechanism restricting immunotherapeutic responses to the endogenous self-antigen guanylyl cyclase c (GUCY2C) in colorectal cancer. However, in some cases, selective CD4+ T cell tolerance provides a unique therapeutic opportunity to maximize self-antigen-targeted immune and antitumor responses without inducing autoimmunity, by incorporating self-antigen-independent CD4+ T cell epitopes into cancer vaccines.
In a mammary carcinoma model, depletion of CD25+ regulatory T cells increases the number of CD8+CD11c+PD110 cells, which target and kill the tumors.
In lupus
Phenotypic and functional characteristics of regulatory T cells in lupus patients do not differ from those of healthy individuals. However, depletion of regulatory T cells results in more intense flares of systemic lupus erythematosus. The in vivo depletion of regulatory T cells is hypothesized to occur via early induction of apoptosis, which follows exposure to the self-antigens that arise during a flare.
In murine cytomegalovirus (MCMV) infection
MCMV is a herpesvirus that can cause disseminated and fatal disease in immunodeficient animals, similar to the disease caused by human cytomegalovirus in immunodeficient humans. Depletion of CD8+ T cells prior to an MCMV infection effectively upregulates the antiviral activity of natural killer cells. Depletion after infection has no effect on the NK cells.
In arthritis
A preliminary study of the effect of TCD in arthritis in mouse models has shown that regulatory T cells play an important role in delayed-type hypersensitivity arthritis (DTHA) inflammation, with TCD inducing increased numbers of neutrophils and increased IL-17 and RANKL activity.
Treatment use
Haploidentical stem cell transplantation
TCD is heavily used in haploidentical stem cell transplantation (HSCT), a process in which cancer patients receive an infusion of healthy stem cells from a compatible donor to replenish their blood-forming elements.
In patients with acute myeloid leukemia (AML) in their first remission, ex vivo TCD greatly reduced the incidence of GVHD, though survival was comparable to conventional transplants.
Bone marrow transplantation
In allogeneic bone marrow transplants (BMT), the transplanted stem cells derive from the bone marrow. In cases where the donors are genetically similar, but not identical, risk of GVHD is increased. The first ex vivo TCD trials used monoclonal antibodies, but still had high incidence rates of GVHD. Additional treatment using complement or immunotoxins (along with anti-T-cell antibody) improved the depletion, thus increasing the prevention of GVHD. Depleting αβ T cells from the infused graft spares γδ T cells and NK cells and promotes their homeostatic reconstitution, thus reducing the risk of GVHD.
Selective in vitro TCD with an anti-T12 monoclonal antibody lowers the rate of acute and chronic GVHD after allogeneic BMT. Furthermore, immune-suppressive medications are usually unnecessary if CD6+ T cells are removed from the donor marrow.
Patients can relapse even after a TCD allogeneic bone marrow transplant, though patients with chronic myelogenous leukemia (CML) who receive a donor lymphocyte infusion (DLI) can restore complete remission.
References
T cells
Immune system | T-cell depletion | Biology | 1,806 |
32,851 | https://en.wikipedia.org/wiki/Wiki | A wiki ( ) is a form of hypertext publication on the internet which is collaboratively edited and managed by its audience directly through a web browser. A typical wiki contains multiple pages that can either be edited by the public or limited to use within an organization for maintaining its internal knowledge base.
Wikis are powered by wiki software, also known as wiki engines. Being a form of content management system, these differ from other web-based systems such as blog software or static site generators in that the content is created without any defined owner or leader. Wikis have little inherent structure, allowing one to emerge according to the needs of the users. Wiki engines usually allow content to be written using a lightweight markup language and sometimes edited with the help of a rich-text editor. There are dozens of different wiki engines in use, both standalone and part of other software, such as bug tracking systems. Some wiki engines are free and open-source, whereas others are proprietary. Some permit control over different functions (levels of access); for example, editing rights may permit changing, adding, or removing material. Others may permit access without enforcing access control. Further rules may be imposed to organize content. In addition to hosting user-authored content, wikis allow those users to interact, hold discussions, and collaborate.
There are hundreds of thousands of wikis in use, both public and private, including wikis functioning as knowledge management resources, note-taking tools, community websites, and intranets. Ward Cunningham, the developer of the first wiki software, WikiWikiWeb, originally described wiki as "the simplest online database that could possibly work". "Wiki" (pronounced ) is a Hawaiian word meaning "quick".
The online encyclopedia project Wikipedia is the most popular wiki-based website, as well as being one of the internet's most popular websites, having been ranked consistently as such since at least 2007. Wikipedia is not a single wiki but rather a collection of hundreds of wikis, with each one pertaining to a specific language. The English-language Wikipedia has the largest collection of articles.
Characteristics
In their 2001 book The Wiki Way: Quick Collaboration on the Web, Cunningham and co-author Bo Leuf described the essence of the wiki concept:
"A wiki invites all users—not just experts—to edit any page or to create new pages within the wiki website, using only a standard 'plain-vanilla' Web browser without any extra add-ons."
"Wiki promotes meaningful topic associations between different pages by making page link creation intuitively easy and showing whether an intended target page exists or not."
"A wiki is not a carefully crafted site created by experts and professional writers and designed for casual visitors. Instead, it seeks to involve the typical visitor/user in an ongoing process of creation and collaboration that constantly changes the website landscape."
Editing
Source editing
Some wikis will present users with an edit button or link directly on the page being viewed. This will open an interface for writing, formatting, and structuring page content. The interface may be a source editor, which is text-based and employs a lightweight markup language (also known as wikitext, wiki markup, or wikicode), or a visual editor. For example, in a source editor, starting lines of text with asterisks could create a bulleted list.
The syntax and features of wiki markup languages for denoting style and structure can vary greatly among implementations. Some allow the use of HTML and CSS, while others prevent the use of these to foster uniformity in appearance.
Example of syntax
A short section of Alice's Adventures in Wonderland rendered in wiki markup:
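For illustration, the opening of the novel might be marked up in MediaWiki-style wikitext as follows (an illustrative sketch, not the exact excerpt; syntax varies between wiki engines):

```wikitext
== Down the Rabbit-Hole ==
Alice was beginning to get ''very'' tired of sitting by her sister
on the bank, and of having nothing to do: once or twice she had
peeped into the book her sister was reading, but it had no pictures
or conversations in it.

* Two apostrophes render ''italics''
* Three apostrophes render '''bold'''
* Double square brackets make a link: [[White Rabbit]]
```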
Visual editing
While wiki engines have traditionally offered source editing to users, in recent years some implementations have added a rich text editing mode. This is usually implemented, using JavaScript, as an interface which translates formatting instructions chosen from a toolbar into the corresponding wiki markup or HTML. This is generated and submitted to the server transparently, shielding users from the technical detail of markup editing and making it easier for them to change the content of pages. An example of such an interface is the VisualEditor in MediaWiki, the wiki engine used by Wikipedia. WYSIWYG editors may not provide all the features available in wiki markup, and some users prefer not to use them, so a source editor will often be available simultaneously.
Version history
Some wiki implementations keep a record of changes made to wiki pages, and may store every version of the page permanently. This allows authors to revert a page to an older version to rectify a mistake, or counteract a malicious or inappropriate edit to its content.
These stores are typically presented for each page in a list, called a "log" or "edit history", available from the page via a link in the interface. The list displays metadata for each revision to the page, such as the time and date of when it was stored, and the name of the person who created it, alongside a link to view that specific revision. A diff (short for "difference") feature may be available, which highlights the changes between any two revisions.
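The idea behind such a diff can be sketched with Python's standard difflib module; the two revision texts here are invented examples:

```python
import difflib

rev1 = ["A wiki is a website.", "Anyone can edit it."]
rev2 = ["A wiki is a collaboratively edited website.", "Anyone can edit it."]

# unified_diff marks removed lines with '-' and added lines with '+'
for line in difflib.unified_diff(rev1, rev2,
                                 fromfile="revision 1",
                                 tofile="revision 2",
                                 lineterm=""):
    print(line)
```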
Edit summaries
The edit history view in many wiki implementations will include edit summaries written by users when submitting changes to a page. Similar to the function of a log message in a revision control system, an edit summary is a short piece of text which summarizes and perhaps explains the change, for example "Corrected grammar" or "Fixed table formatting to not extend past page width". It is not inserted into the article's main text.
Navigation
Traditionally, wikis offer free navigation between their pages via hypertext links in page text, rather than requiring users to follow a formal or structured navigation scheme. Users may also create indexes or table of contents pages, hierarchical categorization via a taxonomy, or other forms of ad hoc content organization. Wiki implementations can provide one or more ways to categorize or tag pages to support the maintenance of such index pages, such as a backlink feature which displays all pages that link to a given page. Adding categories or tags to a page makes it easier for other users to find it.
Most wikis allow the titles of pages to be searched amongst, and some offer full text search of all stored content.
Navigation between wikis
Some wiki communities have established navigational networks between each other using a system called WikiNodes. A WikiNode is a page on a wiki which describes and links to other, related wikis. Some wikis operate a structure of neighbors and delegates, wherein a neighbor wiki is one which discusses similar content or is otherwise of interest, and a delegate wiki is one which has agreed to have certain content delegated to it. WikiNode networks act as webrings which may be navigated from one node to another to find a wiki which addresses a specific subject.
Linking to and naming pages
The syntax used to create internal hyperlinks varies between wiki implementations. Beginning with the WikiWikiWeb in 1995, most wikis used camel case to name pages, which is when words in a phrase are capitalized and the spaces between them removed. In this system, the phrase "camel case" would be rendered as "CamelCase". In early wiki engines, when a page was displayed, any instance of a camel case phrase would be transformed into a link to another page named with the same phrase.
While this system made it easy to link to pages, it had the downside of requiring pages to be named in a form deviating from standard spelling, and titles of a single word required abnormally capitalizing one of the letters (e.g. "WiKi" instead of "Wiki"). Some wiki implementations attempt to improve the display of camel case page titles and links by reinserting spaces and possibly also reverting to lower case, but this simplistic method is not able to correctly present titles of mixed capitalization. For example, "Kingdom of France" as a page title would be written as "KingdomOfFrance", and displayed as "Kingdom Of France".
To avoid this problem, the syntax of wiki markup gained free links, wherein a term in natural language could be wrapped in special characters to turn it into a link without modifying it. The concept was given the name in its first implementation, in UseModWiki in February 2001. In that implementation, link terms were wrapped in a double set of square brackets, for example [[Kingdom of France]]. This syntax was adopted by a number of later wiki engines.
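The two linking schemes can be contrasted with a toy sketch (hypothetical code, not taken from any real wiki engine): it turns [[free links]] and bare CamelCase words into HTML anchors, and shows why naively re-spacing a camel-case title mangles capitalization.

```python
import re

def render_links(text):
    """Replace [[free links]] first, then bare CamelCase words,
    with HTML anchors (illustrative only)."""
    text = re.sub(r"\[\[([^\]]+)\]\]", r'<a href="/\1">\1</a>', text)
    return re.sub(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b",
                  r'<a href="/\1">\1</a>', text)

def despace_camel(title):
    """Naive display fix for camel-case titles: reinsert spaces
    before interior capitals."""
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", title)

print(render_links("See KingdomOfFrance or [[Kingdom of France]]."))
print(despace_camel("KingdomOfFrance"))  # "Kingdom Of France", not "Kingdom of France"
```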
It is typically possible for users of a wiki to create links to pages that do not yet exist, as a way to invite the creation of those pages. Such links are usually differentiated visually in some fashion, such as being colored red instead of the default blue, which was the case in the original WikiWikiWeb, or by appearing as a question mark next to the linked words.
History
WikiWikiWeb was the first wiki. Ward Cunningham started developing it in 1994, and installed it on the Internet domain c2.com on March 25, 1995. Cunningham gave it the name after remembering a Honolulu International Airport counter employee telling him to take the "Wiki Wiki Shuttle" bus that runs between the airport's terminals, later observing that "I chose wiki-wiki as an alliterative substitute for 'quick' and thereby avoided naming this stuff quick-web."
Cunningham's system was inspired by his having used Apple's hypertext software HyperCard, which allowed users to create interlinked "stacks" of virtual cards. HyperCard, however, was single-user, and Cunningham was inspired to build upon the ideas of Vannevar Bush, the inventor of hypertext, by allowing users to "comment on and change one another's text." Cunningham says his goals were to link together people's experiences to create a new literature to document programming patterns, and to harness people's natural desire to talk and tell stories with a technology that would feel comfortable to those not used to "authoring".
Wikipedia became the most famous wiki site, launched in January 2001 and entering the top ten most popular websites in 2007. In the early 2000s, wikis were increasingly adopted in enterprise as collaborative software. Common uses included project communication, intranets, and documentation, initially for technical users. Some companies use wikis as their collaborative software and as a replacement for static intranets, and some schools and universities use wikis to enhance group learning. On March 15, 2007, the word wiki was listed in the online Oxford English Dictionary.
Alternative definitions
In the late 1990s and early 2000s, the word "wiki" was used to refer to both user-editable websites and the software that powers them, and the latter definition is still occasionally in use.
By 2014, Ward Cunningham's thinking on the nature of wikis had evolved, leading him to write that the word "wiki" should not be used to refer to a single website, but rather to a mass of user-editable pages or sites so that a single website is not "a wiki" but "an instance of wiki". In this concept of wiki federation, in which the same content can be hosted and edited in more than one location in a manner similar to distributed version control, the idea of a single discrete "wiki" no longer made sense.
Implementations
The software which powers a wiki may be implemented as a series of scripts which operate an existing web server, a standalone application server that runs on one or more web servers, or in the case of personal wikis, run as a standalone application on a single computer. Some wikis use flat file databases to store page content, while others use a relational database, as indexed database access is faster on large wikis, particularly for searching.
Hosting
Wikis can also be created on wiki hosting services (also known as wiki farms), where the server-side software is implemented by the wiki farm owner, and may do so at no charge in exchange for advertisements being displayed on the wiki's pages. Some hosting services offer private, password-protected wikis requiring authentication to access. Free wiki farms generally contain advertising on every page.
Trust and security
Access control
The four basic types of users who participate in wikis are readers, authors, wiki administrators and system administrators. System administrators are responsible for the installation and maintenance of the wiki engine and the container web server. Wiki administrators maintain content and, through having elevated privileges, are granted additional functions (including, for example, preventing edits to pages, deleting pages, changing users' access rights, or blocking them from editing).
Controlling changes
Wikis are generally designed with a soft security philosophy in which it is easy to correct mistakes or harmful changes, rather than attempting to prevent them from happening in the first place. This allows them to be very open while providing a means to verify the validity of recent additions to the body of pages. Most wikis offer a recent changes page which shows recent edits, or a list of edits made within a given time frame. Some wikis can filter the list to remove edits flagged by users as "minor" and automated edits. The version history feature allows harmful changes to be reverted quickly and easily.
Some wiki engines provide additional content control, allowing remote monitoring and management of a page or set of pages to maintain quality. A person willing to maintain pages will be alerted of modifications to them, allowing them to verify the validity of new editions quickly. Such a feature is often called a watchlist.
Some wikis also implement patrolled revisions, in which editors with the requisite credentials can mark edits as being legitimate. A flagged revisions system can prevent edits from going live until they have been reviewed.
Wikis may allow any person on the web to edit their content without having to register an account on the site first (anonymous editing), or require registration as a condition of participation. On implementations where an administrator is able to restrict editing of a page or group of pages to a specific group of users, they may have the option to prevent anonymous editing while allowing it for registered users.
Trustworthiness and reliability of content
Critics of publicly editable wikis argue that they could be easily tampered with by malicious individuals, or even by well-meaning but unskilled users who introduce errors into the content. Proponents maintain that these issues will be caught and rectified by a wiki's community of users. High editorial standards in medicine and health sciences articles, in which users typically use peer-reviewed journals or university textbooks as sources, have led to the idea of expert-moderated wikis. Wiki implementations retaining and allowing access to specific versions of articles has been useful to the scientific community, by allowing expert peer reviewers to provide links to trusted version of articles which they have analyzed.
Security
Trolling and cybervandalism on wikis, where content is changed to something deliberately incorrect or a hoax, offensive material or nonsense is added, or content is maliciously removed, can be a major problem. On larger wiki sites it is possible for such changes to go unnoticed for a long period.
In addition to using the approach of soft security for protecting themselves, larger wikis may employ sophisticated methods, such as bots that automatically identify and revert vandalism. For example, on Wikipedia, the bot ClueBot NG uses machine learning to identify likely harmful changes, and reverts these changes within minutes or even seconds.
Disagreements between users over the content or appearance of pages may cause edit wars, where competing users repetitively change a page back to a version that they favor. Some wiki software allows administrators to prevent pages from being editable until a decision has been made on what version of the page would be most appropriate.
Some wikis may be subject to external structures of governance which address the behavior of persons with access to the system, for example in academic contexts.
Harmful external links
As most wikis allow the creation of hyperlinks to other sites and services, the addition of malicious hyperlinks, such as sites infected with malware, can also be a problem. For example, in 2006 a German Wikipedia article about the Blaster Worm was edited to include a hyperlink to a malicious website, and users of vulnerable Microsoft Windows systems who followed the link had their systems infected with the worm. Some wiki engines offer a blacklist feature which prevents users from adding hyperlinks to specific sites that have been placed on the list by the wiki's administrators.
Communities
Applications
The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all Web sites in terms of traffic. Other large wikis include the WikiWikiWeb, Memory Alpha, Wikivoyage, and previously Susning.nu, a Swedish-language knowledge base. Medical and health-related wiki examples include Ganfyd, an online collaborative medical reference that is edited by medical professionals and invited non-medical experts. Many wiki communities are private, particularly within enterprises. They are often used as internal documentation for in-house systems and applications. Some companies use wikis to allow customers to help produce software documentation. A study of corporate wiki users found that they could be divided into "synthesizers" and "adders" of content. Synthesizers' frequency of contribution was affected more by their impact on other wiki users, while adders' contribution frequency was affected more by being able to accomplish their immediate work. From a study of thousands of wiki deployments, Jonathan Grudin concluded careful stakeholder analysis and education are crucial to successful wiki deployment.
In 2005, the Gartner Group, noting the increasing popularity of wikis, estimated that they would become mainstream collaboration tools in at least 50% of companies by 2009. Wikis can be used for project management. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. In the mid-2000s, the increasing trend among industries toward collaboration placed a heavier impetus upon educators to make students proficient in collaborative work, inspiring even greater interest in wikis being used in the classroom.
Wikis have found some use within the legal profession and within the government. Examples include the Central Intelligence Agency's Intellipedia, designed to share and collect intelligence assessments, DKosopedia, which was used by the American Civil Liberties Union to assist with review of documents about the internment of detainees in Guantánamo Bay; and the wiki of the United States Court of Appeals for the Seventh Circuit, used to post court rules and allow practitioners to comment and ask questions. The United States Patent and Trademark Office operates Peer-to-Patent, a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. Cornell Law School founded a wiki-based legal dictionary called Wex, whose growth has been hampered by restrictions on who can edit.
In academic contexts, wikis have also been used as project collaboration and research support systems.
City wikis
A city wiki or local wiki is a wiki used as a knowledge base and social network for a specific geographical locale. The term city wiki is sometimes also used for wikis that cover not just a city, but a small town or an entire region. Such a wiki contains information about specific instances of things, ideas, people and places. Such highly localized information might be appropriate for a wiki targeted at local viewers, and could include:
Details of public establishments such as public houses, bars, accommodation or social centers
Owner name, opening hours and statistics for a specific shop
Statistical information about a specific road in a city
Flavors of ice cream served at a local ice cream parlor
A biography of a local mayor and other persons
Growth factors
A study of several hundred wikis in 2008 showed that a relatively high number of administrators for a given content size is likely to reduce growth; access controls restricting editing to registered users tends to reduce growth; a lack of such access controls tends to fuel new user registration; and that a higher ratio of administrators to regular users has no significant effect on content or population growth.
Legal environment
Joint authorship of articles, in which different users participate in correcting, editing, and compiling the finished product, can also cause editors to become tenants in common of the copyright, making it impossible to republish without permission of all co-owners, some of whose identities may be unknown due to pseudonymous or anonymous editing. Some copyright issues can be alleviated through the use of an open content license. Version 2 of the GNU Free Documentation License includes a specific provision for wiki relicensing, and Creative Commons licenses are also popular. When no license is specified, an implied license to read and add content to a wiki may be deemed to exist on the grounds of business necessity and the inherent nature of a wiki.
Wikis and their users can be held liable for certain activities that occur on the wiki. If a wiki owner displays indifference and forgoes controls (such as banning copyright infringers) that they could have exercised to stop copyright infringement, they may be deemed to have authorized infringement, especially if the wiki is primarily used to infringe copyrights or obtains a direct financial benefit, such as advertising revenue, from infringing activities. In the United States, wikis may benefit from Section 230 of the Communications Decency Act, which protects sites that engage in "Good Samaritan" policing of harmful material, with no requirement on the quality or quantity of such self-policing. It has also been argued that a wiki's enforcement of certain rules, such as anti-bias, verifiability, reliable sourcing, and no-original-research policies, could pose legal risks. When defamation occurs on a wiki, theoretically, all users of the wiki can be held liable, because any of them had the ability to remove or amend the defamatory material from the "publication". It remains to be seen whether wikis will be regarded as more akin to an internet service provider, which is generally not held liable due to its lack of control over publications' contents, than a publisher. It has been recommended that trademark owners monitor what information is presented about their trademarks on wikis, since courts may use such content as evidence pertaining to public perceptions, and they can edit entries to rectify misinformation.
Conferences
Active conferences and meetings about wiki-related topics include:
Atlassian Summit, an annual conference for users of Atlassian software, including Confluence.
OpenSym (called WikiSym until 2014), an academic conference dedicated to research about wikis and open collaboration.
SMWCon, a bi-annual conference for users and developers of Semantic MediaWiki.
TikiFest, a frequently held meeting for users and developers of Tiki Wiki CMS Groupware.
Wikimania, an annual conference dedicated to the research and practice of Wikimedia Foundation projects like Wikipedia.
Former wiki-related events include:
RecentChangesCamp (2006–2012), an unconference on wiki-related topics.
RegioWikiCamp (2009–2013), a semi-annual unconference on "regiowikis", or wikis on cities and other geographic areas.
See also
Comparison of wiki software
Content management system
CURIE
Dispersed knowledge
List of wikis
Mass collaboration
Universal Edit Button
Wikis and education
Notes
References
Sources
Further reading
External links
Exploring with Wiki, an interview with Ward Cunningham
Murphy, Paula (April 2006). Topsy-turvy World of Wiki. University of California.
Ward Cunningham's correspondence with etymologists
WikiIndex and WikiApiary, directories of wikis
WikiMatrix, a website for comparing wiki software and hosts
Articles containing video clips
Hawaiian words and phrases
Self-organization
Social information processing | Wiki | Mathematics | 5,001 |
31,157,948 | https://en.wikipedia.org/wiki/Serial%20module | In abstract algebra, a uniserial module M is a module over a ring R, whose submodules are totally ordered by inclusion. This means simply that for any two submodules N1 and N2 of M, either or . A module is called a serial module if it is a direct sum of uniserial modules. A ring R is called a right uniserial ring if it is uniserial as a right module over itself, and likewise called a right serial ring if it is a right serial module over itself. Left uniserial and left serial rings are defined in a similar way, and are in general distinct from their right-sided counterparts.
An easy motivating example is the quotient ring $\mathbb{Z}/n\mathbb{Z}$ for any integer $n > 1$. This ring is always serial, and is uniserial when n is a prime power.
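Since the submodules of $\mathbb{Z}/n\mathbb{Z}$ correspond to the divisors of n ordered by divisibility, uniseriality can be checked by testing whether those divisors form a chain, as in this small Python sketch (elementary number theory only):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_uniserial(n):
    """Z/nZ is uniserial iff its submodules (one per divisor of n)
    form a chain under divisibility, i.e. iff n is a prime power."""
    ds = divisors(n)
    return all(a % b == 0 or b % a == 0 for a in ds for b in ds)

print([n for n in range(2, 30) if is_uniserial(n)])
# [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29]
```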
The term uniserial has been used differently from the above definition: for clarification see below.
A partial alphabetical list of important contributors to the theory of serial rings includes the mathematicians Keizo Asano, I. S. Cohen, P.M. Cohn, Yu. Drozd, D. Eisenbud, A. Facchini, A.W. Goldie, Phillip Griffith, I. Kaplansky, V.V Kirichenko, G. Köthe, H. Kuppisch, I. Murase, T. Nakayama, P. Příhoda, G. Puninski, and R. Warfield.
Following the common ring theoretic convention, if a left/right dependent condition is given without mention of a side (for example, uniserial, serial, Artinian, Noetherian) then it is assumed the condition holds on both the left and right. Unless otherwise specified, each ring in this article is a ring with unity, and each module is unital.
Properties of uniserial and serial rings and modules
It is immediate that in a uniserial R-module M, all submodules except M and 0 are simultaneously essential and superfluous. If M has a maximal submodule, then M is a local module. M is also clearly a uniform module and thus is directly indecomposable. It is also easy to see that every finitely generated submodule of M can be generated by a single element, and so M is a Bézout module.
It is known that the endomorphism ring EndR M is a semilocal ring which is very close to a local ring in the sense that EndR M has at most two maximal right ideals. If M is assumed to be Artinian or Noetherian, then EndR M is a local ring.
Since rings with unity always have a maximal right ideal, a right uniserial ring is necessarily local. As noted before, a finitely generated right ideal can be generated by a single element, and so right uniserial rings are right Bézout rings. A right serial ring R necessarily factors in the form $R = e_1R \oplus \dots \oplus e_nR$, where each $e_i$ is an idempotent element and $e_iR$ is a local, uniserial module. This indicates that R is also a semiperfect ring, which is a stronger condition than being a semilocal ring.
Köthe showed that the modules of Artinian principal ideal rings (which are a special case of serial rings) are direct sums of cyclic submodules. Later, Cohen and Kaplansky determined that a commutative ring R has this property for its modules if and only if R is an Artinian principal ideal ring. Nakayama showed that Artinian serial rings have this property on their modules, and that the converse is not true.
The most general result, perhaps, on the modules of a serial ring is attributed to Drozd and Warfield: it states that every finitely presented module over a serial ring is a direct sum of cyclic uniserial submodules (and hence is serial). If additionally the ring is assumed to be Noetherian, the finitely presented and finitely generated modules coincide, and so all finitely generated modules are serial.
Being right serial is preserved under direct products of rings and modules, and preserved under quotients of rings. Being uniserial is preserved for quotients of rings and modules, but never for products. A direct summand of a serial module is not necessarily serial, as was proved by Puninski, but direct summands of finite direct sums of uniserial modules are serial modules.
It has been verified that Jacobson's conjecture holds in Noetherian serial rings.
Examples
Any simple module is trivially uniserial, and likewise semisimple modules are serial modules.
Many examples of serial rings can be gleaned from the structure sections above. Every valuation ring is a uniserial ring, and all Artinian principal ideal rings are serial rings, as is illustrated by semisimple rings.
More exotic examples include the ring $T_n(D)$ of upper triangular matrices over a division ring D, and the group ring $F[G]$ for a finite field F of prime characteristic p and a group G having a cyclic normal p-Sylow subgroup.
Structure
This section will deal mainly with Noetherian serial rings and their subclass, Artinian serial rings. In general, rings are first broken down into indecomposable rings. Once the structure of these rings is known, the decomposable rings are direct products of the indecomposable ones. Also, for semiperfect rings such as serial rings, the basic ring is Morita equivalent to the original ring. Thus if R is a serial ring with basic ring B, and the structure of B is known, the theory of Morita equivalence gives that $R \cong \mathrm{End}(P_B)$, where P is some finitely generated progenerator of B. This is why the results are phrased in terms of indecomposable, basic rings.
In 1975, Kirichenko and Warfield independently and simultaneously published analyses of the structure of Noetherian, non-Artinian serial rings. The results were the same however the methods they used were very different from each other. The study of hereditary, Noetherian, prime rings, as well as quivers defined on serial rings were important tools. The core result states that a right Noetherian, non-Artinian, basic, indecomposable serial ring can be described as a type of matrix ring over a Noetherian, uniserial domain V, whose Jacobson radical J(V) is nonzero. This matrix ring is a subring of Mn(V) for some n, and consists of matrices with entries from V on and above the diagonal, and entries from J(V) below.
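For instance, in the case n = 3 the ring described above consists of all matrices of the shape
$$\begin{pmatrix} V & V & V \\ J(V) & V & V \\ J(V) & J(V) & V \end{pmatrix},$$
with entries from V on and above the diagonal and entries from the Jacobson radical J(V) below it.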
Artinian serial ring structure is classified in cases depending on the quiver structure. It turns out that the quiver structure for a basic, indecomposable, Artinian serial ring is always a circle or a line. In the case of the line quiver, the ring is isomorphic to the upper triangular matrices over a division ring (note the similarity to the structure of Noetherian serial rings in the preceding paragraph). A complete description of structure in the case of a circle quiver is beyond the scope of this article, but can be found in . To paraphrase the result as it appears there: A basic Artinian serial ring whose quiver is a circle is a homomorphic image of a "blow-up" of a basic, indecomposable, serial quasi-Frobenius ring.
A decomposition uniqueness property
Two modules U and V are said to have the same monogeny class, denoted $[U]_m = [V]_m$, if there exist a monomorphism $U \rightarrow V$ and a monomorphism $V \rightarrow U$. The dual notion can be defined: the modules are said to have the same epigeny class, denoted $[U]_e = [V]_e$, if there exist an epimorphism $U \rightarrow V$ and an epimorphism $V \rightarrow U$.
The following weak form of the Krull-Schmidt theorem holds. Let $U_1, \dots, U_n, V_1, \dots, V_t$ be $n + t$ non-zero uniserial right modules over a ring R. Then the direct sums $U_1 \oplus \dots \oplus U_n$ and $V_1 \oplus \dots \oplus V_t$ are isomorphic R-modules if and only if $n = t$ and there exist two permutations $\sigma$ and $\tau$ of $1, 2, \dots, n$ such that $[U_i]_m = [V_{\sigma(i)}]_m$ and $[U_i]_e = [V_{\tau(i)}]_e$ for every $i = 1, 2, \dots, n$.
This result, due to Facchini, has been extended to infinite direct sums of uniserial modules by Příhoda in 2006. This extension involves the so-called quasismall uniserial modules. These modules were defined by Nguyen Viet Dung and Facchini, and their existence was proved by Puninski. The weak form of the Krull-Schmidt Theorem holds not only for uniserial modules, but also for several other classes of modules (biuniform modules, cyclically presented modules over serial rings, kernels of morphisms between indecomposable injective modules, couniformly presented modules.)
Notes on alternate, similar and related terms
Right uniserial rings can also be referred to as right chain rings or right valuation rings. This latter term alludes to valuation rings, which are by definition commutative, uniserial domains. By the same token, uniserial modules have been called chain modules, and serial modules semichain modules. The notion of a catenary ring has "chain" as its namesake, but it is in general not related to chain rings.
In the 1930s, Gottfried Köthe and Keizo Asano introduced the term Einreihig (literally "one-series") during investigations of rings over which all modules are direct sums of cyclic submodules. For this reason, uniserial was used to mean "Artinian principal ideal ring" even as recently as the 1970s. Köthe's paper also required a uniserial ring to have a unique composition series, which not only forces the right and left ideals to be linearly ordered, but also requires that there be only finitely many ideals in the chains of left and right ideals. Because of this historical precedent, some authors include the Artinian condition or finite composition length condition in their definitions of uniserial modules and rings.
Expanding on Köthe's work, Tadashi Nakayama used the term generalized uniserial ring to refer to an Artinian serial ring. Nakayama showed that all modules over such rings are serial. Artinian serial rings are sometimes called Nakayama algebras, and they have a well-developed module theory.
Warfield used the term homogeneously serial module for a serial module with the additional property that $A/J(A) \cong B/J(B)$ for any two finitely generated submodules A and B, where $J(-)$ denotes the Jacobson radical of the module. In a module with finite composition length, this has the effect of forcing the composition factors to be isomorphic, hence the "homogeneous" adjective. It turns out that a serial ring R is a finite direct sum of homogeneously serial right ideals if and only if R is isomorphic to a full n × n matrix ring over a local serial ring. Such rings are also known as primary decomposable serial rings.
Notes
Textbooks
Primary Sources
Module theory
Ring theory | Serial module | Mathematics | 2,295 |
13,674,069 | https://en.wikipedia.org/wiki/Hippocratic%20Oath%20for%20scientists | A Hippocratic Oath for scientists is an oath similar to the Hippocratic Oath for medical professionals, adapted for scientists. Multiple varieties of such an oath have been proposed. Joseph Rotblat has suggested that an oath would help make new scientists aware of their social and moral responsibilities; opponents, however, have pointed to the "very serious risks for the scientific community" posed by an oath, particularly the possibility that it might be used to shut down certain avenues of research, such as stem cells.
Development
The idea of an oath has been proposed by various prominent members of the scientific community, including Karl Popper, Joseph Rotblat and John Sulston. Research by the American Association for the Advancement of Science (AAAS) identified sixteen different oaths for scientists or engineers proposed during the 20th century, most after 1970.
Popper, Rotblat and Sulston were all primarily concerned with the ethical implications of scientific advances, in particular for Popper and Rotblat the development of the atomic bomb, and believed that scientists, like medics, should have an oath that compelled them to "first do no harm". Popper said: "Formerly the pure scientist or the pure scholar had only one responsibility beyond those which everybody has; that is, to search for the truth. … This happy situation belongs to the past." Rotblat similarly stated: "Scientists can no longer claim that their work has nothing to do with the welfare of the individual or with state policies." He also attacked the attitude that the only obligation of a scientist is to make their results known, the use made of these results being the public's business, saying: "This amoral attitude is in my opinion actually immoral, because it eschews personal responsibility for the likely consequences of one's actions." Sulston was more concerned with rising public distrust of scientists and conflicts of interest brought about by the exploitation of research for profit. The stated intention of his oath was "both to require qualified scientists to cause no harm and to be wholly truthful in their public pronouncements, and also to protect them from discrimination by employers who might prefer them to be economical with the truth."
The concept of an oath, rather than a more detailed code of conduct, has been opposed by Ray Spier, Professor of Science and Engineering Ethics at the University of Surrey, UK, who stated that "Oaths are not the way ahead". Other objections raised at a AAAS meeting on the topic in 2000 included that an oath would simply make scientists look good without changing behaviour, that an oath could be used to suppress research, that some scientists would refuse to swear any oath as a matter of principle, that an oath would be ineffective, that creation of knowledge is separate from how it is used, and that the scientific community could never agree on the content of an oath. The meeting concluded that: "There was a broadly shared consensus that a tolerant (but not patronizing) attitude should be taken towards those developing oaths, but that an oath posed very serious risks for the scientific community which could not be ignored." Nobel laureate Jean-Marie Lehn has said "The first aim of scientific research is to increase knowledge for understanding. Knowledge is then available to mankind for use, namely to progress as well as to help prevent disease and suffering. Any knowledge can be misused. I do not see the need for an oath".
Some of the propositions are outlined below.
Karl Popper
In 1968, the philosopher Karl Popper gave a talk on "The Moral Responsibility of the Scientist" at the International Congress on Philosophy in Vienna, in which he suggested "an undertaking analogous to the Hippocratic oath". In his analysis he noted that the original oath had three sections: the apprentice's obligation to their teacher; the obligation to carry on the high tradition of their art, preserve its high standards, and pass these standards on to their own students; and the obligation to help the suffering and preserve their confidentiality. He also noted that it was an apprentice's oath, as distinct from a graduation oath. Based on this, he proposed a three-section oath for students, rearranged from the Hippocratic oath to give professional responsibility to further the growth of knowledge; the student, who owes respect to others engaged in science and loyalty to teachers; and the overriding loyalty owed to humanity as a whole.
Joseph Rotblat
The idea of a Hippocratic Oath for scientists was raised again by Joseph Rotblat in his acceptance speech for the Nobel Peace Prize in 1995; he later expanded on the idea, endorsing the formulation of the Student Pugwash Group: "I promise to work for a better world, where science and technology are used in socially responsible ways. I will not use my education for any purpose intended to harm human beings or the environment. Throughout my career, I will consider the ethical implications of my work before I take action. While the demands placed upon me may be great, I sign this declaration because I recognize that individual responsibility is the first step on the path to peace."
John Sulston
In 2001, in the scientific journal Biochemical Journal, Nobel laureate John Sulston proposed that "For individual scientists, it may be helpful to have a clear professional code of conduct – a Hippocratic oath as it were". Such an oath would enable scientists to declare their intention "to cause no harm and to be wholly truthful in their public pronouncements", and would also serve to protect them from unethical employers. The concept of an oath was opposed by Ray Spier of the University of Surrey, an expert on scientific ethics who was preparing a 20-point code of conduct at the time.
David King
In 2007, the UK government's chief scientific advisor, David King, presented a "Universal Ethical Code for Scientists" at the British Association's Festival of Science in York. Despite being a code rather than an oath, this was widely reported as a Hippocratic oath for scientists. In contrast to the earlier oaths, King's code was not only intended to meet the public demand that "scientific developments are ethical and serve the wider public good" but also to address public confidence in the integrity of science, which had been shaken by the disgrace of cloning pioneer Hwang Woo-suk and by other research-fraud scandals.
Work on the code started in 2005, following a meeting of G8 science ministers and advisors. It was supported by the Royal Society in its response to a public consultation on the draft code in 2006, where they said it would help whistleblowers and the promotion of science in schools.
The code has seven principles, divided into three sections covering rigour, respect and responsibility.
See also
Code of conduct
Code of ethics
Universal code (ethics)
References
External links
Transcript of a Conversation with Sir David King, 2007
Institute of Medical Science, Toronto, 2008
Ethics of science and technology
Oaths | Hippocratic Oath for scientists | Technology | 1,316 |
27,976,717 | https://en.wikipedia.org/wiki/Otto%20Ambros | Otto Ambros (19 May 1901 – 23 July 1990) was a German chemist and Nazi war criminal. He is known for his wartime work on synthetic rubber (polybutadiene, or "Buna rubber") and nerve agents (sarin and tabun). After the war he was tried at Nuremberg and convicted of crimes against humanity for his use of slave labor from the Auschwitz III–Monowitz concentration camp. In 1948 he was sentenced to 8 years' imprisonment, but released early in 1951 for good behavior.
Early life
The son of a university professor, Ambros attended school and passed his Abitur exam in Munich. In 1920 he went to the University of Munich to study chemistry and agricultural science. In 1925 he gained a doctorate, studying under the 1915 Nobel Prize for Chemistry winner, Richard Willstätter.
IG Farben and Nazi activities
Beginning in 1926, Ambros worked at BASF in Ludwigshafen. In 1930, he spent a year studying in the Far East. From 1934, he worked at IG Farben, becoming head of their Schkopau plant in 1935. His division of IG Farben developed chemical weapons, including the nerve agents sarin (in 1938) and soman (in 1944). In this capacity, Ambros was an advisor to Carl Krauch, a company executive. The name sarin is an acronym of the initials of the discoverers, with Ambros being the "a".
Ambros then managed the IG Farben factories at Dyhernfurth, which produced tabun (a nerve agent similar to sarin), and at Gendorf, which produced mustard gas (a poison gas originally developed and used in World War I). The Dyhernfurth factory included a slave labor concentration camp with about 3000 prisoners who were used for the most hard and dangerous work at the plant, and as human test subjects in nerve gas experiments.
At IG Farben, Ambros also helped research how to produce polybutadiene rubber, which they gave the trade name "Buna rubber" because it is made using butadiene and sodium (Na). This was an important project because the war cut off Germany from raw materials for natural rubber, and in June 1944 Ambros was awarded a prize of one million marks by Adolf Hitler in recognition of this work. In 1941 Ambros selected the site for the Monowitz concentration camp and the Buna Werke factory, which produced Buna rubber using slave labor from the Auschwitz camp, and he then spent the rest of the war serving as plant manager of Buna-Werk IV and managing director of the synthetic fuel production facility at IG Auschwitz.
In 1944, Ambros was awarded the Knight's Cross of the War Merit Cross.
Monowitz
Ambros was arrested by the US Army in 1946. At the IG Farben trial in Nuremberg in 1948, Ambros and 23 other IG Farben executives were charged with waging wars of aggression; plunder and spoliation; and slave labor and mass murder. He was found guilty on the slave-labor count only, for his role overseeing the IG Buna Werke rubber plant at Monowitz, and sentenced to eight years' confinement. Ultimately he was released early from Landsberg Prison in 1951.
Monowitz was built as an Arbeitslager (work camp); it also contained an "Arbeitsausbildungslager" (Labor Education Camp) for non-Jewish prisoners perceived as not meeting German work standards. It held approximately 12,000 prisoners, the great majority of whom were Jewish, in addition to non-Jewish criminals and political prisoners. Prisoners from Monowitz were leased out by the SS to IG Farben to labor at the Buna Werke, a collection of chemical factories including those used to manufacture Buna (synthetic rubber) and synthetic oil. The SS charged IG Farben three Reichsmarks (RM) per day for unskilled workers, four RM per day for skilled workers, and one and one-half RM per day for children. By 1942 the new labour camp complex for IG Farben prisoners occupied about half of its projected area, and the expansion was for the most part finished in the summer of 1943. The last four barracks were built a year later. The labour camp's population grew from 3,500 in December 1942 to over 6,000 by the first half of 1943. By July 1944 the prisoner population was over 11,000, most of whom were Jews. Despite the increasing death rate from slave labour, starvation, executions or other forms of murder, the demand for labour was growing, and more prisoners were brought in. Because the factory management insisted on removing sick and exhausted prisoners from Monowitz, people incapable of continuing their work were murdered at the death camp at Birkenau nearby. The company argued that they had not spent large amounts of money building barracks for prisoners unfit to work. The Buna camp was described in the writings of Primo Levi, the Italian Jewish chemist and Auschwitz survivor.
Release from prison
Otto Ambros was released from prison in 1951 due to good behaviour. He became an adviser to chemical companies such as W. R. Grace and Dow Chemical, as well as to the U.S. Army Chemical Corps and Konrad Adenauer. He also advised Chemie Grünenthal (now Grünenthal GmbH) during the development of thalidomide.
References
External links
Otto Ambros at Toxipedia (toxipedia.org)
1901 births
1990 deaths
20th-century Freikorps personnel
Scientists from Munich
Ludwig Maximilian University of Munich alumni
20th-century German chemists
German organic chemists
Scientists from the Kingdom of Bavaria
IG Farben people
BASF people
German chemical industry people
Weapons scientists and engineers
Nazi Party members
Nazis convicted of war crimes
German people convicted of crimes against humanity
People convicted by the United States Nuremberg Military Tribunals
Recipients of the Knights Cross of the War Merit Cross | Otto Ambros | Chemistry | 1,234 |
77,359,208 | https://en.wikipedia.org/wiki/Asplenium%20aethiopicum | Asplenium aethiopicum is a lithophytic or sometimes epiphytic species of fern found in Southern and tropical Africa, tropical America, Asia and Australia. It is listed as critically endangered in Victoria, Australia. It is considered exotic in New Zealand.
Taxonomy
Asplenium aethiopicum was originally published under the name Trichomanes aethiopicum by Nicolaas Laurens Burman in 1768. In 1935, Alfred Becherer moved this taxon to the genus Asplenium, creating the name Asplenium aethiopicum.
References
aethiopicum
Flora of South Africa
Flora of Zimbabwe
Flora of Western Australia
Flora of Queensland
Flora of New South Wales
Flora of Victoria (state)
Flora of New Guinea
Flora of India
Flora of Saudi Arabia
Flora of Zambia
Flora of the Democratic Republic of the Congo
Flora of Angola
Flora of Burundi
Flora of the Central African Republic
Flora of Eritrea
Flora of Ethiopia
Flora of Fiji
Plants described in 1935
Lithophytes
Epiphytes | Asplenium aethiopicum | Biology | 200 |
39,326,898 | https://en.wikipedia.org/wiki/Cold-water%20geyser | Cold-water geysers are geysers that have eruptions whose water spurts are propelled by -bubbles, instead of the hot steam which drives the more familiar hot-water geysers: The gush of a cold-water geyser is identical to the spurt from a freshly-opened bottle of soda pop.
Cold-water geysers look quite similar to their steam-driven counterparts; however, their CO2-laden water often appears whiter and more frothy.
Mechanism
In cold-water geysers, the supply of CO2-laden water lies confined in an aquifer, in which water and CO2 are trapped by less permeable overlying strata. The more familiar hot-water geysers derive the energy for their eruptions from proximity to (relatively) near-surface magma. In contrast, whereas cold-water geysers might also derive their supply of CO2 from magmatic sources, by definition of "cold-water", they do not also obtain sufficient heat to provide steam pressure, and their eruptions are propelled only by the pressure of dissolved CO2. The magnitude and frequency of such eruptions depend on various factors such as plumbing depth, CO2 concentrations and refresh rate, aquifer water yield, etc.
The water and its load of CO2 powering a cold-water geyser can escape the rock strata overlying its aquifer only through weak segments of rock, like faults, joints, or drilled wells. A borehole drilled for a well, for example, can unexpectedly provide an escape route for the pressurized water and CO2 to reach the surface.
The column of water rising through the rock exerts enough pressure on the gaseous CO2 that it remains in the water as dissolved gas or small bubbles. When the pressure decreases due to the widening of a fissure, the bubbles expand, and that expansion displaces the water above and causes the eruption.
Examples
Herľany, Slovakia
Andernach Geyser (a.k.a. Namedyer Sprudel), Germany
Wallender Born (a.k.a. Brubbel), Germany
Crystal Geyser, near Green River, Utah
Caxambu, Brazil
Sivá Brada, Slovakia
Wehr Geyser, Germany
References
Springs (hydrology)
Bodies of water | Cold-water geyser | Environmental_science | 465 |
463,574 | https://en.wikipedia.org/wiki/TSMC | Taiwan Semiconductor Manufacturing Company Limited (TSMC or Taiwan Semiconductor) is a Taiwanese multinational semiconductor contract manufacturing and design company. It is the world's most valuable semiconductor company, the world's largest dedicated independent ("pure-play") semiconductor foundry, and Taiwan's largest company, with headquarters and main operations located in the Hsinchu Science Park in Hsinchu, Taiwan. Although the central government of Taiwan is the largest individual shareholder, the majority of TSMC is owned by foreign investors. In 2023, the company was ranked 44th in the Forbes Global 2000. Taiwan's exports of integrated circuits amounted to $184 billion in 2022, accounted for nearly 25 percent of Taiwan's GDP. TSMC constitutes about 30 percent of the Taiwan Stock Exchange's main index.
TSMC was founded in Taiwan in 1987 by Morris Chang as the world's first dedicated semiconductor foundry. It has long been the leading company in its field. When Chang retired in 2018, after 31 years of TSMC leadership, Mark Liu became chairman and C. C. Wei became Chief Executive. It has been listed on the Taiwan Stock Exchange since 1993; in 1997 it became the first Taiwanese company to be listed on the New York Stock Exchange. Since 1994, TSMC has had a compound annual growth rate (CAGR) of 17.4% in revenue and a CAGR of 16.1% in earnings.
Most fabless semiconductor companies such as AMD, Apple, ARM, Broadcom, Marvell, MediaTek, Qualcomm, and Nvidia are customers of TSMC, as are emerging companies such as Allwinner Technology, HiSilicon, Spectra7, and UNISOC. Programmable logic device companies Xilinx and previously Altera also make or made use of TSMC's foundry services. Some integrated device manufacturers that have their own fabrication facilities, such as Intel, NXP, STMicroelectronics, and Texas Instruments, outsource some of their production to TSMC. At least one semiconductor company, LSI, re-sells TSMC wafers through its ASIC design services and design IP portfolio.
TSMC has a global capacity of about thirteen million 300 mm-equivalent wafers per year as of 2020 and produces chips for customers with process nodes from 2 microns to 3 nanometres. TSMC was the first foundry to market 7-nanometre and 5-nanometre (used by the 2020 Apple A14 and M1 SoCs, the MediaTek Dimensity 8100, and AMD Ryzen 7000 series processors) production capabilities, and the first to commercialize ASML's extreme ultraviolet (EUV) lithography technology in high volume.
History
In 1986, Li Kwoh-ting, representing the Executive Yuan, invited Morris Chang to serve as the president of the Industrial Technology Research Institute (ITRI) and offered him a blank check to build Taiwan's chip industry. At that time, the Taiwanese government wanted to develop its semiconductor industry, but its high investment and high risk nature made it difficult to find investors. Texas Instruments and Intel turned down Chang. Only Philips was willing to sign a joint venture contract with Taiwan to put up $58 million, transfer its production technology, and license intellectual property in exchange for a 27.5 percent stake in TSMC. Alongside generous tax benefits, the Taiwanese government, through the National Development Fund, Executive Yuan, provided another 48 percent of the startup capital for TSMC, and the rest of the capital was raised from several of the island's wealthiest families, who owned firms that specialized in plastics, textiles, and chemicals. These wealthy Taiwanese were directly "asked" by the government to invest. "What generally happened was that one of the ministers in the government would call a businessman in Taiwan," Chang explained, "to get him to invest." From day one, TSMC was not really a private business: it was a project of the Taiwanese state. Its first CEO was James E. Dykes, who left after a year and Morris Chang became the CEO.
Since then, the company has continued to grow, albeit subject to the cycles of demand. In 2011, the company planned to increase research and development expenditures by almost 39% to NT$50 billion to fend off growing competition. The company also planned to expand capacity by 30% in 2011 to meet strong market demand. In May 2014, TSMC's board of directors approved capital appropriations of US$568 million to increase and improve manufacturing capabilities after the company forecast higher than expected demand. In August 2014, TSMC's board of directors approved additional capital appropriations of US$3.05 billion.
In 2011, it was reported that TSMC had begun trial production of the A5 SoC and A6 SoCs for Apple's iPad and iPhone devices. According to reports, in May 2014 Apple sourced its A8 and A8X SoCs from TSMC. Apple then sourced the A9 SoC with both TSMC and Samsung (to increase volume for iPhone 6S launch) and the A9X exclusively with TSMC, thus resolving the issue of sourcing a chip in two different microarchitecture sizes. As of 2014, Apple was TSMC's most important customer. In October 2014, ARM and TSMC announced a new multi-year agreement for the development of ARM based 10 nm FinFET processors.
Over the objection of the Tsai Ing-wen administration, in March 2017, TSMC invested US$3 billion in Nanjing to develop a manufacturing subsidiary there.
In 2020, TSMC became the first semiconductor company in the world to sign up for the RE100 initiative, pledging to use 100% renewable energy by 2050. TSMC accounts for roughly 5% of the energy consumption in Taiwan, even exceeding that of the capital city Taipei. This initiative was thus expected to accelerate the transformation to renewable energy in the country. For 2020, TSMC had a net income of US$17.60 billion on a consolidated revenue of US$45.51 billion, an increase of 57.5% and 31.4% respectively from the 2019 level of US$11.18 billion net income and US$34.63 billion consolidated revenue. TSMC's market capitalization reached NT$1.9 trillion (US$63.4 billion) in December 2010. It was ranked 70th in the FT Global 500 2013 list of the world's most highly valued companies with a capitalization of US$86.7 billion, reaching US$110 billion in May 2014. In March 2017, TSMC's market capitalization surpassed that of semiconductor giant Intel for the first time, hitting NT$5.14 trillion (US$168.4 billion), against Intel's US$165.7 billion. TSMC's revenue in the first quarter of 2020 reached US$10 billion, while its market capitalization was US$254 billion. On 27 June 2020, TSMC briefly became the world's 10th most valuable company, with a market capitalization of US$410 billion, and by April 2021 its market capitalization exceeded US$550 billion.
To mitigate business risks in the event of war between Taiwan and the People's Republic of China, since the beginning of the 2020s, TSMC has expanded its geographic operations, opening new fabs in Japan and the United States, with further plans for expansion into Germany. In July 2020, TSMC confirmed it would halt the shipment of silicon wafers to Chinese telecommunications equipment manufacturer Huawei and its subsidiary HiSilicon by 14 September. In November 2020, officials in Phoenix, Arizona in the United States approved TSMC's plan to build a $12 billion chip plant in the city. The decision to locate a plant in the US came after the Trump administration warned about the issues concerning the world's electronics made outside of the U.S. In 2021, news reports claimed that the facility might be tripled to roughly a $35 billion investment with six factories.
In June 2021, following nearly a year of public controversy surrounding its COVID-19 vaccine shortage, with only about 10% of its 23.5 million population vaccinated, Taiwan agreed to allow TSMC and Foxconn to jointly negotiate purchasing COVID-19 vaccines on its behalf. In July 2021, BioNTech's Chinese sales agent Fosun Pharma announced that the two technology manufacturers had reached an agreement to purchase 10 million BioNTech COVID-19 vaccines from Germany. TSMC and Foxconn pledged to each buy five million doses for up to $175 million, for donation to Taiwan's vaccination program.
Due to the 2020–2023 global semiconductor shortage, Taiwanese competitor United Microelectronics raised prices approximately 7–9 percent, and prices for TSMC's more mature processes were raised by about 20 percent. In November 2021, TSMC and Sony announced that TSMC would be establishing a new subsidiary, Japan Advanced Semiconductor Manufacturing (JASM), in Kumamoto, Japan. The new subsidiary will manufacture 22- and 28-nanometer processes. The initial investment will be approximately $7 billion, with Sony investing approximately $500 million for a less than 20% stake. Construction of the fabrication plant is expected to start in 2022, with production targeted to begin two years later in 2024.
In February 2022, TSMC, Sony Semiconductor Solutions, and Denso announced that Denso would take a more than 10% equity stake in JASM with a US$0.35 billion investment, amid a scarcity of chips for automobiles. TSMC will also enhance JASM's capabilities with 12/16 nanometer FinFET process technology in addition to the previously announced 22/28 nanometer process and increase monthly production capacity from 45,000 to 55,000 12-inch wafers. The total capital expenditure for JASM's Kumamoto fab is estimated to be approximately US$8.6 billion. The Japanese government wants JASM to supply essential chips to Japan's electronic device makers and auto companies as trade friction between the United States and China threatens to disrupt supply chains. The fab is expected to directly create about 1,700 high-tech professional jobs.
In July 2022, TSMC announced the company had posted a record profit in the second quarter, with net income up 76.4 percent year-over-year. The company saw steady growth in the automotive and data center sectors with some weakness in the consumer market. Some of the capital expenditures are projected to be pushed up to 2023. In the third quarter of 2022, Berkshire Hathaway disclosed purchase of 60 million shares in TSMC, acquiring a $4.1 billion stake, making it one of its largest holdings in a technology company. However, Berkshire sold off 86.2% of its stake by the next quarter citing geopolitical tensions as a factor. In February 2024, TSMC shares hit a record high, with the high on the trading day reaching NT$709 and closing at NT$697 (+8%). This was influenced by the increase in the price target on chip designer Nvidia. TSMC currently manufactures 3-nanometer chips and plans to start 2-nanometer mass production in 2025. It is included in the FTSE4Good Index, being the only Asian company in the top ten.
In October 2024, TSMC informed the United States Department of Commerce about a potential breach of export controls in which one of its most advanced chips was sent to Huawei via another company with ties to the Chinese government.
Patent dispute with GlobalFoundries
On 26 August 2019, GlobalFoundries filed several patent infringement lawsuits against TSMC in the US and Germany claiming that TSMC's 7 nm, 10 nm, 12 nm, 16 nm, and 28 nm nodes infringed 16 of their patents. GlobalFoundries named twenty defendants. TSMC said that they were confident that the allegations were baseless. On 1 October 2019, TSMC filed patent infringement lawsuits against GlobalFoundries in the US, Germany and Singapore, claiming that GlobalFoundries' 12 nm, 14 nm, 22 nm, 28 nm and 40 nm nodes infringed 25 of their patents. On 29 October 2019, TSMC and GlobalFoundries announced a resolution to the dispute, agreeing to a life-of-patents cross-license for all of their existing semiconductor patents and new patents for the next 10 years.
Corporate affairs
Senior leadership
Chairman: C. C. Wei (since June 2024)
Chief Executive: C. C. Wei (since June 2018)
List of former chairmen
Morris Chang (1987–2018)
Mark Liu (2018–2024)
List of former chief executives
Morris Chang (1987–2005)
Rick Tsai (2005–2009)
Morris Chang (2009–2013); second term
C. C. Wei and Mark Liu (2013–2018); co-CEOs
Business trends
TSMC and the rest of the foundry industry are exposed to the cyclical industrial dynamics of the semiconductor industry. TSMC must ensure its production capacity to meet strong customer demand during upturns; however, during downturns, it must contend with excess capacity because of weak demand and the high fixed costs associated with its manufacturing facilities. As a result, the company's financial results tend to fluctuate with a cycle time of a few years. This is more apparent in earnings than revenues because of the general trend of revenue and capacity growth. TSMC's business has generally also been seasonal, with a peak in Q3 and a low in Q1.
In 2014, TSMC was at the forefront of the foundry industry for high-performance, low-power applications, leading major smartphone chip companies, such as Qualcomm, Mediatek, and Apple, to place an increasing amount of orders. While the competitors in the foundry industry (primarily GlobalFoundries and United Microelectronics Corporation) have encountered difficulties ramping leading-edge 28 nm capacity, the leading Integrated Device Manufacturers such as Samsung and Intel that seek to offer foundry capacity to third parties were also unable to match the requirements for advanced mobile applications.
For most of 2014, TSMC saw a continuing increase in revenues due to increased demand, primarily for chips for smartphone applications. TSMC raised its financial guidance in March 2014 and posted 'unseasonably strong' first-quarter results. For Q2 2014, revenues came in at NT$183 billion, with 28 nm technology business growing more than 30% from the previous quarter. Lead times for chip orders at TSMC increased due to a tight capacity situation, putting fabless chip companies at risk of not meeting their sales expectations or shipment schedules, and in August 2014 it was reported that TSMC's production capacity for the fourth quarter of 2014 was already almost fully booked, a scenario that had not occurred for many years and that was attributed to a ripple effect from TSMC landing CPU orders from Apple.
However, monthly sales for 2014 peaked in October, decreasing by 10% in November due to cautious inventory adjustment actions taken by some of its customers. TSMC's revenue for 2014 saw growth of 28% over the previous year, while TSMC forecasted that revenue for 2015 would grow by 15 to 20 percent from 2014, thanks to strong demand for its 20 nm process, new 16 nm FinFET process technology as well as continuing demand for 28 nm, and demand for less advanced chip fabrication in its 200mm fabs.
In 2019, TSMC was ranked fourth in the MEMS field, behind leader Silex Microsystems. In 2021, TSMC was ranked third in the MEMS field.
Ownership
Around 56% of TSMC shares are held by the general public and around 38% are held by institutions. The largest shareholders in early 2024 were:
National Development Fund, Executive Yuan (6.38%)
BlackRock (5.09%)
Capital Research and Management Company (3.61%)
Government of Singapore Investment Corporation (3.32%)
Norges Bank (1.59%)
Fidelity Investments (1.37%)
New Labor Pension Scheme (1.28%)
The Vanguard Group (1.26%)
Yuanta Securities Investment (1.02%)
JPMorgan Chase (0.83%)
Fidelity International (0.8%)
Baillie Gifford (0.76%)
Fubon Life Insurance (0.75%)
Invesco (0.63%)
Technologies
TSMC's N7+ is the first commercially available extreme-ultraviolet lithographic process in the semiconductor industry. It uses extreme-ultraviolet patterning, which enables finer circuit features to be implemented on the silicon. N7+ offers a 15–20% higher transistor density and a 10% reduction in power consumption compared with the previous technology. The N7 achieved the fastest-ever volume time to market, faster than 10 nm and 16 nm. The N5 iteration doubles transistor density and improves performance by an additional 15%.
Production capabilities
On 300 mm wafers, TSMC has silicon lithography on node sizes:
0.13 μm (130 nm, options: general-purpose (G), low-power (LP), high-performance low-voltage (LV))
90 nm (based upon 80GC from Q4/2006)
65 nm (options: general-purpose (GP), low-power (LP), ultra-low power (ULP, LPG))
55 nm (options: general-purpose (GP), low-power (LP))
40 nm (options: general-purpose (GP), low-power (LP), ultra-low power (ULP))
28 nm (options: high-performance (HP), high-performance mobile (HPM), high-performance computing (HPC), high-performance low-power (HPL), low-power (LP), high-performance computing Plus (HPC+), ultra-low power (ULP) with HKMG)
22 nm (options: ultra-low power (ULP), ultra-low leakage (ULL))
20 nm
16 nm (options: FinFET (FF), FinFET Plus (FF+), FinFET Compact (FFC))
12 nm (options: FinFET Compact (FFC), FinFET Nvidia (FFN)), enhanced version of 16 nm process
10 nm (options: FinFET (FF))
7 nm (options: FinFET (FF), FinFET Plus (FF+), FinFET Pro (FFP), high-performance computing (HPC))
6 nm (options: FinFET (FF), risk production started in Q1 2020, enhanced version of 7 nm process)
5 nm (options: FinFET (FF))
4 nm (options: FinFET (FF), risk production started in 2021, enhanced version of 5 nm process)
3 nm (options: FinFET (FF), volume production started in Q4 2022)
It also offers "design for manufacturing" (DFM) customer services. In press publications, these processes will often be referenced, for example, for the mobile variant, simply by 7nmFinFET or even more briefly by 7FF. At the beginning of 2019, TSMC was advertising N7+, N7, and N6 as its leading edge technologies. As of June 2020, TSMC is the manufacturer selected for production of Apple's 5 nanometer ARM processors, as "the company plans to eventually transition the entire Mac lineup to its Arm-based processors, including the priciest desktop computers". In July 2020, TSMC signed a 20-year deal with Ørsted to buy the entire production of two offshore wind farms under development off Taiwan's west coast. At the time of its signing, it was the world's largest corporate green energy order ever made. In July 2021, both Apple and Intel were reported to be testing their proprietary chip designs with TSMC's 3 nm production.
Facilities
Arizona
In 2020, TSMC announced a planned fab in Phoenix, Arizona, intended to begin production by 2024 at a rate of 20,000 wafers per month. TSMC also announced that it would bring its newest 5 nm process to the Arizona facility, a significant break from its prior practice of limiting US fabs to older technologies. The Arizona plant was not estimated to be fully operational until 2024, by which time the 5 nm process was projected to be replaced by TSMC's 3 nm process as the latest technology. At launch it would nevertheless be the most advanced fab in the United States. TSMC plans to spend $12 billion on the project over eight years, beginning in 2021. TSMC claimed the plant would create 1,900 full-time jobs.
In December 2022, TSMC announced its plans to triple its investment in the Arizona plants in response to the growing tensions between the US and China and the supply chain disruption that has led to chip shortages. In that same month, TSMC stated that it was running into major cost issues, because the cost of construction of buildings and facilities in the US is four to five times what an identical plant would cost in Taiwan (due to higher costs of labor, red tape, and training), as well as difficulty finding qualified personnel (for which it has hired US workers and sent them for training in Taiwan for 12–18 months). These additional production costs will increase the cost of TSMC's chips made in the US to at least 50% more than the cost of chips made in Taiwan. In July 2023 TSMC warned that US talent was insufficient, so Taiwanese workers would need to be brought in for a limited time, and that the chip factory would not be operational until 2025. In September 2023, an analyst said the chips will still need to be sent back to Taiwan for packaging. In January 2024, TSMC chairman Liu again warned that Arizona lacked workers with the specialized skills to hire and that TSMC's second Arizona plant likely would not start volume production of advanced chips until 2027 or 2028.
In April 2024, the US Commerce Department agreed to provide $6.6 billion in direct funding and up to $5 billion in loans to TSMC for the purposes of creating semiconductor manufacturing facilities in Arizona. This action falls under the CHIPS and Science Act and is intended to boost domestic chip production for the USA.
Halo Vista
In October 2024 it was revealed that the development around the TSMC plants would be called Halo Vista, which will develop 3,500 acres of property with restaurants, hotels, housing, and other mixed-use development. There will also be a Sonoran Oasis Research and Technology Park that will help set up the supply chain and foster innovative development, much as Hsinchu Science Park does for TSMC in Taiwan. As many as six fabrication plants could be built there, worth a total of around $120 billion.
Central Taiwan Science Park
The investment of US$9.4 billion to build its third 300mm wafer fabrication facility in Central Taiwan Science Park (Fab 15) was originally announced in 2010. The facility was expected to manufacture over 100,000 wafers a month and generate US$5 billion per year of revenue. TSMC has continued to expand advanced 28 nm manufacturing capacity at Fab 15. On 12 January 2011, TSMC announced the acquisition of land from Powerchip Semiconductor for NT$2.9 billion (US$96 million) to build two additional 300mm fabs (Fab 12B) to cope with increasing global demand.
WaferTech subsidiary
WaferTech, a subsidiary of TSMC, is a pure-play semiconductor foundry based in Camas, Washington, outside Portland, Oregon. The main facility on the WaferTech campus is a 200mm wafer fabrication plant. The site is the second-largest pure-play foundry in the United States, employing 1,100 workers. The largest is GlobalFoundries Fab 8 in Malta, New York, which employs over 3,000 workers. As of 2024, the facility supports node sizes of 0.35, 0.30, 0.25, 0.22, 0.18, and 0.16 micrometers, with an emphasis on embedded flash process technology.
History
WaferTech was established in June 1996 as a joint venture with TSMC, Altera, Analog Devices, and ISSI as key partners. The four companies and minor individual investors placed US$1.2 billion into this venture, which was at the time the single largest startup investment in the state of Washington. The company started production in July 1998 in its 200mm semiconductor fabrication plant. Its first product was a 0.35 micrometer part for Altera. TSMC bought out the joint venture partners in 2000 and acquired full control, operating it as a fully owned subsidiary. In 2015, Tsung Kuo was named company president and fab director of WaferTech.
Japan
In November 2021, TSMC and Sony announced that TSMC would be establishing a new subsidiary, Japan Advanced Semiconductor Manufacturing (JASM), in Kumamoto, Japan. Denso and Toyota have also invested in the company and are minor shareholders.
The first factory (Fab 23) in Kikuyo, Kumamoto, began commercial operations in December 2024 and produces 12-, 22-, and 28-nanometer processes. Fab 23 cost US$8.6 billion to build, with 476 billion yen subsidised by the Ministry of Economy, Trade and Industry (METI).
The second factory, currently under construction adjacent to Fab 23 as of January 2025, will produce 6-nanometer and 12-nanometer processes. This factory is estimated to cost US$13.9 billion, with 732 billion yen funded by the METI.
Germany
In August 2023, TSMC committed €3.5 billion to a €10+ billion factory in Dresden, Germany. The plant is subsidised with €5 billion from the German government. Three European companies (Robert Bosch GmbH, Infineon Technologies, and NXP Semiconductors) invested in the plant in return for a 10% share each. The resulting joint venture with TSMC is named European Semiconductor Manufacturing Company (ESMC). The factory is planned to be fully operational in 2029 with a monthly capacity of 40,000 12-inch wafers.
See also
List of companies of Taiwan
List of semiconductor fabrication plants
Moore's law
Quantum tunnelling
Semiconductor device fabrication
Semiconductor industry in Taiwan
Very Large Scale Integration
References
External links
Electronics companies established in 1987
Taiwanese companies established in 1987
1993 initial public offerings
Manufacturing companies based in Hsinchu
Companies listed on the Taiwan Stock Exchange
Electronics companies of Taiwan
Semiconductor companies of Taiwan
Foundry semiconductor companies
Taiwanese brands
Technology companies of Taiwan
Companies listed on the New York Stock Exchange
Computer companies of Taiwan
Computer hardware companies
Companies in the Taiwan Capitalization Weighted Stock Index
Companies in the Dow Jones Global Titans 50
Companies in the S&P Asia 50
MEMS factories | TSMC | Materials_science,Technology | 5,599 |
49,119,569 | https://en.wikipedia.org/wiki/Comparison%20of%20deep%20learning%20software | The following tables compare notable software frameworks, libraries, and computer programs for deep learning applications.
Deep learning software by name
Comparison of machine learning model compatibility
See also
Comparison of numerical-analysis software
Comparison of statistical packages
Comparison of cognitive architectures
List of datasets for machine-learning research
List of numerical-analysis software
References
Deep learning frameworks | Comparison of deep learning software | Mathematics | 70 |
47,124,135 | https://en.wikipedia.org/wiki/Fargam | Fargam (Auspicious in Persian) was a male Rhesus macaque monkey launched into space by Iran. This was Iran's second successful and third overall attempt at launching a monkey into space; their first attempt in 2011 had failed as the animal died in space. The news was released by Iranian state TV which also showed footage of Fargam strapped inside the rocket. Iranian president Hassan Rouhani congratulated Iranian scientists afterwards, touting it as a "long step in getting the Islamic Republic of Iran closer to sending a man into space".
Technology
Fargam was launched inside a Pishgam capsule aboard a Kavoshgar booster, both of which were developed and produced domestically by Iranian scientists and engineers. The rocket was reported to have reached an altitude of 120 km (75 miles) before the capsule was parachuted down. The whole mission was reported to have lasted 15 minutes.
Skepticism
The launch was only announced after the completion of the mission. This, combined with a lack of immediate visual confirmation, resulted in skeptics claiming that Fargam had died during the flight. When Iran finally released footage of the launch, observers noted that the monkey in the capsule was different from Fargam, with darker hair and a prominent red mole over its left eye. The monkey displayed in the video of the launch was actually the monkey that had died in the earlier 2011 mission. Mohammad Ebrahimi, a spokesman for the Iranian Space Agency, claimed that the team in charge of assembling promotional material accidentally used one of Fargam's backups for all the promotional material. Iran has always denied that the 2011 launch and the death of its rhesus monkey pilot ever took place. Jonathan McDowell, a Canadian astronomer at Harvard who tracks rocket launches, confirmed that the monkey seen in the promotional material was the animal that had died in 2011, and that there was no reason to believe Fargam's flight was unsuccessful.
See also
Monkeys and apes in space
Animals in space
Iranian Space Agency
References
Animals in space
Space program of Iran
Animal testing in Iran
Individual monkeys | Fargam | Chemistry,Biology | 421 |
1,945,347 | https://en.wikipedia.org/wiki/Link%20budget | A link budget is an accounting of all of the power gains and losses that a communication signal experiences in a telecommunication system; from a transmitter, through a communication medium such as radio waves, cable, waveguide, or optical fiber, to the receiver. It is an equation giving the received power from the transmitter power, after the attenuation of the transmitted signal due to propagation, as well as the antenna gains and feedline and other losses, and amplification of the signal in the receiver or any repeaters it passes through. A link budget is a design aid, calculated during the design of a communication system to determine the received power, to ensure that the information is received intelligibly with an adequate signal-to-noise ratio. Randomly varying channel gains such as fading are taken into account by adding some margin depending on the anticipated severity of its effects. The amount of margin required can be reduced by the use of mitigating techniques such as antenna diversity or multiple-input and multiple-output (MIMO).
A simple link budget equation looks like this:
Received power (dBm) = transmitted power (dBm) + gains (dB) − losses (dB)
Power levels are expressed in decibel-milliwatts (dBm), and power gains and losses are expressed in decibels (dB), which is a logarithmic measurement, so adding decibels is equivalent to multiplying the actual power ratios.
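As an illustration, the simple equation above can be evaluated directly; the following is a minimal sketch (the function name and the numbers are hypothetical, chosen only to show the arithmetic):

```python
def received_power_dbm(tx_power_dbm, gains_db, losses_db):
    """Simple link budget: received power = transmitted power + gains - losses.

    Because dBm and dB are logarithmic, summing the gains and losses is
    equivalent to multiplying the underlying power ratios.
    """
    return tx_power_dbm + sum(gains_db) - sum(losses_db)

# Hypothetical link: 20 dBm transmitter, 12 dBi and 9 dBi antenna gains,
# 110 dB path loss plus 2 dB of cable loss.
print(received_power_dbm(20.0, [12.0, 9.0], [110.0, 2.0]))  # -71.0 dBm
```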
In radio systems
For a line-of-sight radio system, the primary source of loss is the decrease of the signal power as it spreads over an increasing area while it propagates, proportional to the square of the distance (geometric spreading).
Transmitting antennas can be omnidirectional, directional, or sectorial, depending on the way in which the antenna power is oriented. An omnidirectional antenna distributes the power equally in every direction of a plane, so the radiation pattern has the shape of a sphere squeezed between two parallel flat surfaces. They are widely used in many applications, for instance in WiFi access points. Directional antennas concentrate the power in a specific direction, called the boresight, and are widely used in point-to-point applications, like wireless bridges and satellite communications. Sectorial antennas concentrate the power in a wider region, typically covering 45°, 60°, 90° or 120°. They are routinely deployed on cellular towers.
Simplifications needed
The free space loss is easily calculated using the Friis transmission equation, which states that the loss is proportional to the square of the distance and the square of the frequency. Additional losses are incurred in most radio links, including atmospheric attenuation by gases, rain, fog and clouds, fading due to variations of the channel, multipath losses, and antenna misalignment. In non-line-of-sight links, diffraction and reflection losses are the most important, since the direct path is not available.
Transmission line and polarization loss
In practical situations (deep space telecommunications, weak signal DXing, etc.) other sources of signal loss must also be accounted for:
The transmitting and receiving antennas may be partially cross-polarized.
The cabling between the radios and antennas may introduce significant additional loss.
Fresnel zone losses due to a partially obstructed line of sight path.
Doppler shift induced signal power losses in the receiver.
Endgame
If the estimated received power is sufficiently large (typically relative to the receiver sensitivity), which may be dependent on the communications protocol in use, the link will be useful for sending data. The amount by which the received power exceeds receiver sensitivity is called the link margin.
Equation
A link budget equation including all these effects, expressed logarithmically, might look like this:

P_RX = P_TX + G_TX − L_TX − L_FS − L_M + G_RX − L_RX

where:
P_RX, received power (dBm)
P_TX, transmitter output power (dBm)
G_TX, transmitter antenna gain (dBi)
L_TX, transmitter losses (coax, connectors, ...) (dB)
L_FS, path loss, usually free space loss (dB)
L_M, miscellaneous losses (fading margin, body loss, polarization mismatch, other losses, ...) (dB)
G_RX, receiver antenna gain (dBi)
L_RX, receiver losses (coax, connectors, ...) (dB)
The loss due to propagation between the transmitting and receiving antennas, often called the path loss, can be written in dimensionless form by normalizing the distance to the wavelength:

L_FS (dB) = 20 log10(4π × distance / wavelength)   (where distance and wavelength are in the same units)
When substituted into the link budget equation above, the result is the logarithmic form of the Friis transmission equation.
In some cases, it is convenient to consider the loss due to distance and wavelength separately, but in that case, it is important to keep track of which units are being used, as each choice involves a differing constant offset. Some examples are provided below.
L_FS (dB) ≈ 32.45 dB + 20 log10[frequency (MHz)] + 20 log10[distance (km)]
L_FS (dB) ≈ −27.55 dB + 20 log10[frequency (MHz)] + 20 log10[distance (m)]
L_FS (dB) ≈ 36.6 dB + 20 log10[frequency (MHz)] + 20 log10[distance (miles)]
These alternative forms can be derived by substituting wavelength with the ratio of propagation velocity (c, approximately 3×10^8 m/s) divided by frequency, and by inserting the proper conversion factors between km or miles and meters, and between MHz and (1/s).
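A short sketch showing that the unit-specific constants agree with the dimensionless form (the helper name is my own; c is taken at its exact defined value):

```python
import math

def fspl_db(distance_m, frequency_hz):
    """Free-space path loss from the dimensionless form 20*log10(4*pi*d/lambda)."""
    c = 299792458.0  # propagation velocity in m/s
    wavelength_m = c / frequency_hz
    return 20.0 * math.log10(4.0 * math.pi * distance_m / wavelength_m)

# Example: 2400 MHz over 1 km, compared with the km/MHz constant form.
general = fspl_db(1000.0, 2400e6)
km_form = 32.45 + 20 * math.log10(2400) + 20 * math.log10(1)
print(round(general, 2), round(km_form, 2))  # both approximately 100.05 dB
```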
Non-line-of-sight radio
Because of building obstructions such as walls and ceilings, propagation losses indoors can be significantly higher. This occurs because of a combination of attenuation by walls and ceilings, and blockage due to equipment, furniture, and even people.
For example, a "2 by 4" wood stud wall with drywall on both sides results in about 6 dB loss per wall at 2.4 GHz.
Older buildings may have even greater internal losses than new buildings due to materials and line of sight issues.
Experience has shown that line-of-sight propagation holds only for about the first 3 meters. Beyond 3 meters, propagation losses indoors can increase at up to 30 dB per 30 meters in dense office environments. This is a good rule of thumb, in that it is conservative (it overstates path loss in most cases). Actual propagation losses may vary significantly depending on building construction and layout.
The attenuation of the signal is highly dependent on the frequency of the signal.
In waveguides and cables
Guided media such as coaxial and twisted pair electrical cable, radio frequency waveguide and optical fiber have losses that are exponential with distance.
The path loss will be in terms of dB per unit distance.
This means that there is always a crossover distance beyond which the loss in a guided medium will exceed that of a line-of-sight path of the same length.
Long distance fiber-optic communication became practical only with the development of ultra-transparent glass fibers. A typical path loss for single-mode fiber is 0.2 dB/km, far lower than any other guided medium.
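To make the crossover concrete, the sketch below compares a guided medium's linear loss with free-space loss at a few distances (a minimal illustration; the 0.2 dB/km fiber figure is from the text above, while the 1000 MHz radio link is an arbitrary choice):

```python
import math

def guided_loss_db(distance_km, loss_db_per_km=0.2):
    """Guided-medium loss grows linearly with distance (e.g. 0.2 dB/km fiber)."""
    return loss_db_per_km * distance_km

def free_space_loss_db(distance_km, frequency_mhz):
    """Line-of-sight loss grows only logarithmically with distance."""
    return 32.45 + 20 * math.log10(frequency_mhz) + 20 * math.log10(distance_km)

# Beyond some crossover distance the guided loss exceeds the free-space loss.
for d_km in (10, 100, 1000, 10000):
    print(d_km, guided_loss_db(d_km), round(free_space_loss_db(d_km, 1000.0), 1))
```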
Earth–Moon–Earth communications
Link budgets are important in Earth–Moon–Earth communications. As the albedo of the Moon is very low (maximally 12% but usually closer to 7%), and the path loss over the 770,000 kilometre return distance is extreme (around 250 to 310 dB, depending on the VHF–UHF band used, modulation format and Doppler shift effects), high power (more than 100 watts) and high-gain antennas (more than 20 dB) must be used.
In practice, this limits the use of this technique to the spectrum at VHF and above.
The Moon must be above the horizon in order for EME communications to be possible.
Voyager program
The Voyager program spacecraft have the highest known path loss (308 dB as of 2002) and lowest link budgets of any telecommunications circuit. The Deep Space Network has been able to maintain the link at a higher than expected bitrate through a series of improvements, such as increasing the antenna size from 64 m to 70 m for a 1.2 dB gain, and upgrading to low noise electronics for a 0.5 dB gain in 2000–2001. During the Neptune flyby, in addition to the 70-m antenna, two 34-m antennas and twenty-seven 25-m antennas were used to increase the gain by 5.6 dB, providing additional link margin to be used for a 4× increase in bitrate.
See also
Antenna gain-to-noise-temperature
Friis transmission equation
Isotropic radiator
Multipath propagation
Optical power budget
Radiation pattern
RF planning
References
External links
Link budget calculator for wireless LAN
Point-to-point link budget calculator
MUOS Link budget calculator/planner
Example LTE, GSM and UMTS Link Budgets
Python link budget calculator for satellites
Small satellites link budget (with python examples)
Budgets
Telecommunications engineering
Radio frequency propagation | Link budget | Physics,Engineering | 1,822 |
2,887,610 | https://en.wikipedia.org/wiki/Celgene | Celgene Corporation is a pharmaceutical company that makes cancer and immunology drugs. Its major product is Revlimid (lenalidomide), which is used in the treatment of multiple myeloma, and also in certain anemias. The company is incorporated in Delaware, headquartered in Summit, New Jersey, and a subsidiary of Bristol Myers Squibb (BMS).
History
Celgene was originally a unit of Celanese. In 1986, Celanese completed the corporate spin-off of Celgene following the merger of Celanese with American Hoechst.
In August 2000, Celgene acquired Signal Pharmaceuticals, Inc., a privately held company that developed pharmaceuticals to regulate disease-related genes. Signal Pharmaceuticals was rebranded as Celgene Research San Diego.
In December 2002, Celgene acquired Anthrogenesis, a privately held New Jersey–based biotherapeutics company and cord blood banking business, which is developing technology for the recovery of stem cells from placental tissues following the completion of full-term successful pregnancies. Anthrogenesis was rebranded as Celgene Cellular Therapeutics.
In 2006, Celgene certified McKesson Specialty, a specialty pharmacy, as one of a group of pharmacies contracted to launch lenalidomide (Revlimid). As a specialty drug, lenalidomide is only available through a distribution network consisting of specialty pharmacies contracted by the company.
In March 2008, Celgene acquired Pharmion Corporation for $2.9 billion.
In January 2010, Celgene acquired Gloucester Pharmaceuticals.
In June 2010, Celgene agreed to acquire Abraxis BioScience. It purchased the biotechnology company for $2.9 billion in its expansion into drugs that attack solid tumors. Abraxis produced Abraxane, the cancer-fighting drug that can be given in high doses.
In November 2011, Celgene relocated its United Kingdom headquarters from Windsor, Berkshire, to Stockley Park, near Heathrow airport which is also the home of GlaxoSmithKline's UK operations.
In January 2012, Celgene agreed to acquire Avila Therapeutics, Inc., a privately held biotechnology company for $925 million, with $350 million in cash.
Citing a market capitalization of US$67 billion, and stock appreciation of 107%, Celgene was Forbes Magazine's number 2 ranked drug company of 2013.
In 2014, Celgene and OncoMed Pharmaceuticals entered into a cancer stem cell therapeutic development agreement covering demcizumab and five other biologics from OncoMed's pipeline. That same year, Sutro Biopharma entered into an agreement with Celgene Corporation to discover and develop multispecific antibodies and antibody drug conjugates (ADCs). This followed the December 2012 collaboration between the two companies and focused on the field of immuno-oncology.
In April 2015, Celgene announced a collaboration with AstraZeneca, worth $450 million, to study their Phase III immuno-oncology drug candidate MEDI4736.
That same month, Celgene announced it would acquire Quanticel for up to $485 million in order to enhance its cancer drug pipeline. Celgene had invested in Quanticel in April 2011.
In June 2015, Celgene announced it had licensed Lycera's RORgamma agonist portfolio for up to $105 million to develop its Phase I lead compound LYC-30937 for the treatment of inflammatory bowel disease. The licensing deal gave Celgene the option to acquire Lycera.
In July 2015, the company announced it would acquire Receptos for $7.2 billion in a move to strengthen the company's inflammation and immunology areas.
In May 2016, the company announced it would launch a partnership with Agios Pharmaceuticals, developing metabolic immuno-oncology therapies.
In October 2016, the company acquired EngMab AG for $600 million.
In January 2017, the company announced it would acquire Delinia for $775 million, increasing the company's autoimmune disease therapy offerings.
In January 2018, Celgene announced it would acquire Impact Biomedicines for $7 billion, adding fedratinib, a kinase inhibitor with potential to treat myelofibrosis.
Also in January 2018, the company announced it would acquire Juno Therapeutics for $9 billion.
US headquarters in Summit, New Jersey
The company's Summit headquarters are located along the 7.3-mile main line of the abandoned Rahway Valley Railroad. Some have advocated for the railbed's conversion to a pedestrian and cyclist linear park and rail trail.
Acquisition by Bristol-Myers Squibb
In January 2019, the company announced it would be acquired by Bristol-Myers Squibb for $74 billion ($95 billion including debt), a deal that would become the largest pharmaceutical company acquisition ever. Celgene shareholders would receive one BMY share as well as $50 in cash for each Celgene share held, valuing Celgene at $102.43 a share, representing a 54% premium to the previous day's closing price. The activist investor Starboard Value LP opposed the deal, nominating five alternative potential directors to the Bristol-Myers board. The deal was approved by shareholders in April 2019.
In August 2019, Amgen announced it would acquire the Otezla drug programme from Celgene for $13.4 billion, as part of Celgene and Bristol-Myers Squibb's merger deal. The Bristol-Myers acquisition closed on November 20, 2019.
In November 2019, Bristol-Myers Squibb (BMS) announced that it had completed its acquisition of Celgene following the receipt of regulatory approval from all government authorities required by the merger agreement and, as announced on April 12, 2019, approval by Bristol-Myers Squibb and Celgene stockholders.
Company origin and acquisition history
The following is an illustration of the company's major mergers and acquisitions and historical predecessors (this is not a comprehensive list):
Celgene (Spun off from Celanese in 1986, acquired by Bristol-Myers Squibb in 2019)
Signal Pharmaceuticals, Inc (Acq 2000)
Anthrogenesis (Acq 2002)
Pharmion Corporation (Acq 2008)
Gloucester Pharmaceuticals (Acq 2009)
Abraxis BioScience Inc (Acq 2010)
Avila Therapeutics, Inc (Acq 2012)
Quanticel (Acq 2015)
Receptos (Acq 2015)
EngMab AG (Acq 2016)
Delinia (Acq 2017)
Impact Biomedicines (Acq 2018)
Juno Therapeutics (Acq 2018)
AbVitro (Acq 2016)
RedoxTherapies (Acq 2016)
Executive history
In March 2016, Bob Hugin, the company's long-serving CEO, retired from his position and took the role of executive chairman. He was succeeded as CEO by Mark Alles. At the same time, Jacqualyn Fouse was named as the company's president and COO; Fouse had joined the company in 2010 as the CFO. Effective June 30, 2017, Fouse stepped down and was succeeded by Scott Smith, president of the company's Global Inflammation & Immunology Franchise, who had joined the company in 2008. Fouse was voted out by the board of directors on 2 April 2018.
Finances
For the fiscal year 2017, Celgene reported earnings of US$2.539 billion, with an annual revenue of US$13.003 billion, an increase of 15.8% over the previous fiscal cycle. Celgene's shares traded at over $74 per share, and its market capitalization was valued at over US$51.8 billion in November 2018.
Products
As of 2019, Celgene focused on oncology and immunology. Its cancer drugs Revlimid (lenalidomide) and Pomalyst (pomalidomide), together with the immunology drug Otezla (apremilast), accounted for around 90% of the company's total revenue as of 2019.
Product-related history
In July 1998, Celgene received approval from the FDA to market Thalomid for the acute treatment of the cutaneous manifestations of moderate to severe ENL.
In April 2000, Celgene reached an agreement with Novartis Pharma AG to license d-MPH, Celgene's chirally pure version of RITALIN. The FDA subsequently granted approval to market d-MPH, or Focalin, in November 2001.
In December 2005, Celgene received approval from the FDA to market Revlimid for the treatment of patients with transfusion-dependent anemia due to Low- or Intermediate-1-risk MDS associated with a deletion 5q cytogenetic abnormality with or without additional cytogenetic abnormalities. Focalin XR was later launched by Celgene and Novartis in 2005.
In May 2006, Celgene received approval for Thalomid in combination with dexamethasone for the treatment of patients with newly diagnosed multiple myeloma.
In June 2007, Celgene received full marketing authorization for Revlimid in combination with dexamethasone as a treatment for patients with multiple myeloma who have received at least one prior therapy by the European Commission.
Pipeline
Ozanimod is an oral, sphingosine 1-phosphate (S1P) receptor modulator that binds with high affinity selectively to S1P subtypes 1 (S1P1) and 5 (S1P5). Ozanimod causes lymphocyte retention in lymphoid tissues. The mechanism by which ozanimod exerts therapeutic effects in multiple sclerosis is unknown, but may involve the reduction of lymphocyte migration into the central nervous system. Ozanimod is in development for immune-inflammatory indications including ulcerative colitis and Crohn's disease.
Celgene develops several products within several areas of research (MM, MDS, AML, lymphoma, CLL, beta-thalassemia, myelofibrosis, solid tumors, and inflammation & immunology).
Litigation
Antitrust allegations
In 2009, Dr. Reddy's Laboratories requested, and Celgene refused to provide, samples of Celgene's anticancer drug THALOMID (thalidomide). Dr. Reddy's Laboratories sought the material for the bioequivalency studies required to bring its own, generic, version of thalidomide to market. In response to the refusal, Dr. Reddy's Laboratories filed a Citizen's Petition with the FDA asking the Agency to adopt procedures that would ensure generic applicants the right to buy sufficient samples to perform bioequivalence testing of drugs that were subject to REMS distribution restrictions.
Celgene denied that it had behaved anti-competitively, arguing that the legislative history strongly suggested that Congress considered and rejected a proposed guaranteed access procedure like the one proposed by Dr. Reddy's. Celgene further argued that requiring innovator companies to sell their products to potential generic competitors would violate its intellectual property rights and subject it to liability risks in the event that patients were harmed in Dr. Reddy's studies.
In 2018, Celgene was at the top of a list of companies that the FDA identified as refusing to release samples to competitors to create generics.
Generic manufacturer Lannett Company initiated antitrust litigation that accused Celgene of using its REMS for THALOMID (thalidomide) to violate the anti-monopolization provisions of the Sherman Act. In early 2011, the district court denied Celgene's motion to dismiss. The case was set for trial beginning in February 2012, but the parties settled before the trial began, thereby postponing further judicial review of antitrust claims premised on alleged abuse of REMS distribution restrictions.
Fraud allegations
In July 2017, Celgene agreed to pay $280 million to government agencies to settle allegations that it caused the submission of false claims or fraudulent claims for non-reimbursable uses of its drugs Revlimid and Thalomid to Medicare and state Medicaid programs. In its July 2017 10-Q, Celgene disclosed that it resolved the matter in full for $315 million, including fees and expenses. The case was brought under the False Claims Act by Beverly Brown, a former Celgene sales representative.
See also
Biotech and pharmaceutical companies in the New York metropolitan area
Pharmaceutical industry in Switzerland
Sutro Biopharma
References
External links
Bristol Myers Squibb
Pharmaceutical companies established in 1986
Biotechnology companies of the United States
Companies based in Union County, New Jersey
Summit, New Jersey
1986 establishments in New Jersey
Pharmaceutical companies based in New Jersey
Life sciences industry
Companies formerly listed on the Nasdaq
American companies established in 1986
Biotechnology companies established in 1986
1980s initial public offerings
2019 mergers and acquisitions | Celgene | Biology | 2,702 |
16,991,711 | https://en.wikipedia.org/wiki/Temocapril | Temocapril (also known as temocaprilum [Latin]; brand name Acecol) is an ACE inhibitor. It was not approved for use in the US.
It is administered as an inactive prodrug, which is then converted to its active metabolite, temocaprilat.
It was patented in 1984 and approved for medical use in 1994.
References
ACE inhibitors
Acetic acids
Enantiopure drugs
Ethyl esters
Lactams
Prodrugs
Thiophenes
Carboxylate esters
Thiazepines | Temocapril | Chemistry | 114 |
44,211,647 | https://en.wikipedia.org/wiki/Glass-filled%20polymer | Glass-filled polymer (or glass-filled plastic), is a mouldable composite material. It comprises short glass fibers in a matrix of a polymer material. It is used to manufacture a wide range of structural components by injection or compression moulding. It is an ideal glass alternative that offers flexibility in the part, chemical resistance, shatter resistance and overall better durability.
Materials
Either thermoplastic or thermosetting polymers may be used. One of the most widely used thermoplastics is the polyamide polymer nylon.
The first mouldable composite was Bakelite. This used wood flour fibres in phenolic resin as the thermoset polymer matrix. As the fibres were only short, this material had relatively low bulk strength, but it still offered improved surface hardness and good mouldability.
A wide range of polymers are now produced in glass-filled varieties, including polyamide (nylon), acetal homopolymers and copolymers, polyester, polyphenylene oxide (PPO / Noryl), polycarbonate, and polyethersulphone.
Bulk moulding compound is a pre-mixed material of resin and fibres supplied for moulding. Some are thermoplastic or thermosetting, others are chemically cured and are mixed with a catalyst (polyester) or hardener (epoxy) before moulding.
Applications
Compared to the native polymer, glass-filled materials have improved mechanical properties of rigidity, strength and may also have improved surface hardness.
Compared to sheet materials
Bulk glass filled materials are considered distinct from fibreglass or fibre-reinforced plastic materials. These use a substrate of fabric sheets made from long fibres, draped to shape in a mould and then impregnated with resin. They are usually moulded into shapes made of large but thin sheets. Filled materials, in contrast, are used for applications that are thicker or of varying section and not usually as large as sheet materials.
References
Composite materials
Polymers
Fibre-reinforced polymers | Glass-filled polymer | Physics,Chemistry,Materials_science,Engineering | 420 |
3,203,826 | https://en.wikipedia.org/wiki/Amagat | An amagat (denoted amg or Am) is a practical unit of volumetric number density. Although it can be applied to any substance at any conditions, it is defined as the number of ideal gas molecules per unit volume at 1 atm (101.325 kPa) and 0 °C (273.15 K). It is named after Émile Amagat, who also has Amagat's law named after him.
SI conversion
The amg unit for number density can be converted to the SI unit of moles per cubic meter (mol/m3) by the formula

1 amg ≘ n0/NA ≈ 44.615 mol/m3

where
≘ indicates correspondence, since the SI unit is of molar concentration and not number density;
n0 is the Loschmidt number;
NA is the Avogadro constant.
The number density of an ideal gas at absolute pressure p and absolute temperature T can be calculated as

n (amg) = (p/p0) × (T0/T)

where T0 = 273.15 K and p0 = 101.325 kPa (STP before 1982).
Example
The number density of an ideal gas (such as air) at room temperature (20 °C) and 1 atm (101.325 kPa) is

n = (101.325 kPa / 101.325 kPa) × (273.15 K / 293.15 K) ≈ 0.932 amg.
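A minimal sketch of this conversion (the function name is my own):

```python
def amagats(pressure_kpa, temperature_k):
    """Number density in amagats: n = (p/p0) * (T0/T), with STP reference values."""
    p0_kpa, t0_k = 101.325, 273.15
    return (pressure_kpa / p0_kpa) * (t0_k / temperature_k)

# Air at 20 °C and 1 atm, as in the example above.
print(round(amagats(101.325, 293.15), 3))  # 0.932
```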
References
Amount of substance
Units of density
Physical chemistry | Amagat | Physics,Chemistry,Mathematics | 229 |
13,784,158 | https://en.wikipedia.org/wiki/Biotechnology%20and%20Applied%20Biochemistry | Biotechnology and Applied Biochemistry is a bimonthly peer-reviewed scientific journal covering biotechnology applied to medicine, veterinary medicine, and diagnostics. Topics covered include the expression, extraction, purification, formulation, stability, and characterization of both natural and recombinant biological molecules. It is published by Wiley-Blackwell on behalf of the International Union of Biochemistry and Molecular Biology. The editors-in-chief are Gianfranco Gilardi (University of Torino) and Jian-Jiang Zhong (Shanghai Jiao Tong University).
History
The journal was established in 1979 under the title Journal of Applied Biochemistry by Academic Press, obtaining its present title in 1986.
Former editors-in-chief include Peter Campbell (University College London; before 1996), Roger Lundblad (formerly of Baxter Biotech, Duarte, California; 1996–2002), and Parviz A. Shamlou (Eli Lilly; 2003–2012).
Abstracting and indexing
The journal is abstracted and indexed by:
According to the Journal Citation Reports, the journal has a 2019 impact factor of 1.638.
References
External links
Academic journals established in 1979
Biochemistry journals
Biotechnology journals
English-language journals
Wiley-Blackwell academic journals
Bimonthly journals | Biotechnology and Applied Biochemistry | Chemistry,Biology | 248 |
44,636,079 | https://en.wikipedia.org/wiki/Constructive%20cooperative%20coevolution | The constructive cooperative coevolutionary algorithm (also called C3) is a global optimisation algorithm in artificial intelligence based on the multi-start architecture of the greedy randomized adaptive search procedure (GRASP). It incorporates the existing cooperative coevolutionary algorithm (CC). The considered problem is decomposed into subproblems. These subproblems are optimised separately while exchanging information in order to solve the complete problem. An optimisation algorithm, usually but not necessarily an evolutionary algorithm, is embedded in C3 for optimising those subproblems. The nature of the embedded optimisation algorithm determines whether C3's behaviour is deterministic or stochastic.
The C3 optimisation algorithm was originally designed for simulation-based optimisation but it can be used for global optimisation problems in general. Its strength over other optimisation algorithms, specifically cooperative coevolution, is that it is better able to handle non-separable optimisation problems.
An improved version was proposed later, called the Improved Constructive Cooperative Coevolutionary Differential Evolution (C3iDE), which removes several limitations of the previous version. A novel element of C3iDE is the advanced initialisation of the subpopulations. C3iDE initially optimises the subpopulations in a partially co-adaptive fashion. During the initial optimisation of a subpopulation, only a subset of the other subcomponents is considered for the co-adaptation. This subset grows stepwise until all subcomponents are considered. This makes C3iDE very effective on large-scale global optimisation problems (up to 1000 dimensions) compared to the cooperative coevolutionary algorithm (CC) and differential evolution.
The improved algorithm was later adapted for multi-objective optimisation.
Algorithm
As shown in the pseudocode below, an iteration of C3 consists of two phases. In Phase I, the constructive phase, a feasible solution for the entire problem is constructed in a stepwise manner, considering a different subproblem in each step. After the final step, all subproblems have been considered and a solution for the complete problem has been constructed. This constructed solution is then used as the initial solution in Phase II, the local improvement phase. The CC algorithm is employed to further optimise the constructed solution. A cycle of Phase II consists of optimising the subproblems separately while keeping the parameters of the other subproblems fixed to a central blackboard solution. When this has been done for each subproblem, the found solutions are combined during a "collaboration" step, and the best of the produced combinations becomes the blackboard solution for the next cycle, in which the same is repeated. Phase II, and thereby the current iteration, terminates when the search of the CC algorithm stagnates and no significantly better solutions are being found. Then the next iteration is started. At the start of the next iteration, a new feasible solution is constructed, utilising solutions that were found during Phase I of the previous iteration. This constructed solution is then used as the initial solution in Phase II in the same way as in the first iteration. This is repeated until one of the termination criteria of the optimisation is reached, e.g. a maximum number of evaluations.
{Sphase1} ← ∅
while termination criteria not satisfied do
    if {Sphase1} = ∅ then
        {Sphase1} ← SubOpt(∅, 1)
    end if
    while pphase1 not completely constructed do
        pphase1 ← GetBest({Sphase1})
        {Sphase1} ← SubOpt(pphase1, inext subproblem)
    end while
    pphase2 ← GetBest({Sphase1})
    while not stagnate do
        {Sphase2} ← ∅
        for each subproblem i do
            {Sphase2} ← SubOpt(pphase2, i)
        end for
        {Sphase2} ← Collab({Sphase2})
        pphase2 ← GetBest({Sphase2})
    end while
end while
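The following Python sketch illustrates this control flow on a toy objective. It is a loose illustration, not the published implementation: plain random search stands in for the embedded evolutionary algorithm, the constructive phase is simplified, and the collaboration step simply keeps the best per-subproblem result; all names are invented.

    import random

    def sphere(x):
        # Toy objective to minimise: non-negative, 0 at the origin.
        return sum(v * v for v in x)

    def sub_opt(base, group, f, trials=50):
        # Optimise the variables in `group` while the rest of the solution
        # is held fixed; return the best full solution found.
        best, best_f = list(base), f(base)
        for _ in range(trials):
            cand = list(best)
            for i in group:
                cand[i] += random.uniform(-1.0, 1.0)
            cand_f = f(cand)
            if cand_f < best_f:
                best, best_f = cand, cand_f
        return best

    def c3(f, dim=6, n_groups=3, iterations=5, seed=0):
        random.seed(seed)
        # Decompose the problem: each group of indices is one subproblem.
        groups = [list(range(g, dim, n_groups)) for g in range(n_groups)]
        overall = None
        for _ in range(iterations):                 # multi-start outer loop
            # Phase I (simplified): construct a solution stepwise,
            # optimising one subproblem after another.
            sol = [random.uniform(-5.0, 5.0) for _ in range(dim)]
            for group in groups:
                sol = sub_opt(sol, group, f)
            # Phase II: CC-style improvement of the blackboard solution
            # until the search stagnates.
            blackboard, stagnant = sol, 0
            while stagnant < 3:
                candidates = [sub_opt(blackboard, g, f) for g in groups]
                best = min(candidates, key=f)       # simplified collaboration
                if f(best) < f(blackboard) - 1e-9:
                    blackboard, stagnant = best, 0
                else:
                    stagnant += 1
            if overall is None or f(blackboard) < f(overall):
                overall = blackboard
        return overall

    best = c3(sphere)
    print([round(v, 3) for v in best], round(sphere(best), 6))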
Multi-objective optimisation
The multi-objective version of the C3 algorithm is a Pareto-based algorithm which uses the same divide-and-conquer strategy as the single-objective C3 optimisation algorithm. The algorithm again starts with the advanced constructive initial optimisations of the subpopulations, considering an increasing subset of subproblems. The subset increases until the entire set of all subproblems is included. During these initial optimisations, the subpopulation of the most recently included subproblem is evolved by a multi-objective evolutionary algorithm. For the fitness calculations, the members of the subpopulation are combined with a collaborator solution from each of the previously optimised subpopulations. Once all subproblems' subpopulations have been initially optimised, the multi-objective C3 optimisation algorithm continues to optimise each subproblem in a round-robin fashion, but now collaborator solutions from all other subproblems' subpopulations are combined with the member of the subpopulation that is being evaluated. The collaborator solution is selected randomly from the solutions that make up the Pareto-optimal front of the subpopulation. The fitness assignment to the collaborator solutions is done in an optimistic fashion (i.e. an "old" fitness value is replaced when the new one is better).
Applications
The constructive cooperative coevolution algorithm has been applied to different types of problems, e.g. a set of standard benchmark functions, optimisation of sheet metal press lines and interacting production stations. The C3 algorithm has been embedded with, amongst others, the differential evolution algorithm and the particle swarm optimiser for the subproblem optimisations.
See also
Cooperative coevolution
Metaheuristic
Stochastic search
Differential evolution
Swarm intelligence
Genetic algorithms
Hyper-heuristics
References
Evolutionary algorithms
Evolutionary computation | Constructive cooperative coevolution | Biology | 1,279 |
27,138,990 | https://en.wikipedia.org/wiki/Fabasoft%20Folio%20Cloud | Fabasoft Folio Cloud is a cloud computing service developed by Fabasoft in Linz, Austria, announced in April 2010. It focuses on enabling secure collaboration and is web-based, with iOS and Android apps for use on mobile devices. The software is object-oriented and offers a wide range of functionality for document management and global collaboration, which can be extended by specialist cloud applications. Fabasoft places particular focus on usability and accessibility.
Security
Folio Cloud is certified and tested according to the following security standards: ISO 27001:2005, ISO 20000, ISO 9001, and SAS 70 Type II. Fabasoft was also the first software manufacturer to receive MoReq2 certification – the European standard for records management.
All Folio Cloud data is saved in data centers in Europe, where European standards for security, reliability and data protection apply. Cloud data is kept permanently synchronized in two mirrored data centers in Austria so that failover is possible at any time. A backup of the data is constantly maintained in a third data center. Further data center locations are being integrated in Germany and Switzerland, and in the future users will be able to decide at which data center location their data is stored.
Folio Cloud is based on open source and does not contain any US-owned software. This prevents access to European cloud data by US authorities under the “US Patriot Act”.
All communication and transfer of data within Folio Cloud is encrypted via SSL/TLS. Cloud access is protected by secure forms of authentication, including two-factor authentication with Motoky or SMS and login via digital ID. Folio Cloud has integrated the new German digital ID card, the Austrian Citizen Card with mobile signature and the SuisseID as forms of digital authentication. Fabasoft actively supports the advancement of European cloud infrastructure.
Mobile cloud
Folio Cloud supports all common web browsers, different operating systems and end user devices. Folio Cloud apps are also available on Google Play and the Apple App Store for use on Android and iOS devices. Folio Cloud supports open standards such as WebDAV, CalDAV and CMIS.
Apps
Apps are online applications that extend the functionality of Folio Cloud to fulfill concrete use cases and needs. All Folio Cloud Apps are available in the Fabasoft Cloud App Store.
Fabasoft held its first Cloud Developer Conference (CDC) from December 15–17, 2010 as a free event for Cloud developers. Since then the event has taken place twice a year, once in the summer and once in the winter.
References
External links
Official Folio Cloud Website
Cloud platforms
Centralized computing
Technology companies of Austria | Fabasoft Folio Cloud | Technology | 535 |
26,313,115 | https://en.wikipedia.org/wiki/C18H12O6 | {{DISPLAYTITLE:C18H12O6}}
The molecular formula C18H12O6 (molar mass: 324.28 g/mol, exact mass: 324.0634 u) may refer to:
Atromentin
Hexahydroxytriphenylene (HHTP)
Sterigmatocystin
References
Molecular formulas | C18H12O6 | Physics,Chemistry | 79 |
4,224,990 | https://en.wikipedia.org/wiki/Marjorie%20Clarke | Marjorie J. "Maggie" Clarke is an American environmental scientist who specializes in recycling participation, waste prevention methods, waste-to-energy/incinerator emissions controls, environmental impacts of the World Trade Center fires and collapse, and community botanical gardening. Since the September 11, 2001 attacks she has focused on increasing participation in New York City's waste prevention and recycling programs.
Early life and education
She was born on July 14, 1953, in Miami, Florida. She graduated in 1975 with a B.A. in geology from Smith College. She received an M.S. in environmental science from Johns Hopkins University in 1978 and one in energy technology from New York University in 1982. She completed a Ph.D. in environmental sciences in 2000.
Career and research
Clarke was the Department of Sanitation's specialist on emissions from incinerators from 1984 to 1988 and served on a National Academy of Sciences committee on Health Effects of Waste Incineration.
From 2002 to 2004, she was a scientist-in-residence and adjunct assistant professor at Lehman College, and an adjunct professor at Hunter College, City University of New York from 1996 to 2005.
Clarke is a persistent questioner of United States Environmental Protection Agency's claims about the safety of the World Trade Center site.
She also conceived and garnered support for a New York City local law to eliminate 2200 apartment building incinerators which was signed into law in 1989.
NGO participation
Clarke has been chair or vice chair of the Manhattan Citizens' Solid Waste Advisory Board for 8 of the years since its inception in 1990. She co-founded and has been president of the Riverside-Inwood Neighborhood Garden (RING), a volunteer botanical garden in Upper Manhattan, since 1984.
See also
Health effects arising from the September 11, 2001 attacks
References
External links
www.maggieclarkeenvironmental.com - Papers and testimony
1953 births
Living people
American environmentalists
American women environmentalists
American environmental scientists
People from Miami
Smith College alumni
Johns Hopkins University alumni
New York University alumni
CUNY Graduate Center alumni
Lehman College faculty
Hunter College faculty
American scientists
21st-century American women | Marjorie Clarke | Environmental_science | 422 |
59,645,501 | https://en.wikipedia.org/wiki/Polyploviricotina | Polyploviricotina is a subphylum of viruses in the phylum Negarnaviricota. It is one of only two virus subphyla, the other being Haploviricotina, which is also in Negarnaviricota. The name derives from the Ancient Greek word for 'complex', combined with the suffix for a virus subphylum, '-viricotina'.
References
External links
Invasion of the Body Snatchers: Viruses Can Steal Our Genetic Code to Create New Human-Virus Genes; on: SciTechDaily, Source: Mount Sinai School of Medicine; August 9, 2020
Jessica Sook Yuin Ho, Matthew Angel, Yixuan Ma, Jonathan W. Yewdell, Edward Hutchinson, Ivan Marazzi, et al.: Hybrid Gene Origination Creates Human-Virus Chimeric Proteins during Infection; in: Cell Volume 181, Issue 7; June 18, 2020; doi:10.1016/j.cell.2020.05.035
Yixuan Ma, Matthew Angel, Guojun Wang, Jessica Sook Yuin Ho, Nan Zhao, Justine Noel, Natasha Moshkina, et al.: Discovery of UFO Proteins: Human-Virus Chimeric Proteins Generated During Influenza Virus Infection; on: bioRxiv; April 8, 2019; doi:10.1101/597617
Negarnaviricota
Virus subphyla | Polyploviricotina | Biology | 295 |
243,904 | https://en.wikipedia.org/wiki/Galton%20board | The Galton board, also known as the Galton box or quincunx or bean machine (or incorrectly Dalton board), is a device invented by Francis Galton to demonstrate the central limit theorem, in particular that with sufficient sample size the binomial distribution approximates a normal distribution.
Galton designed it to illustrate his idea of regression to the mean, which he called "reversion to mediocrity" and made part of his eugenist ideology.
Description
The Galton board consists of a vertical board with interleaved rows of pegs. Beads are dropped from the top and, when the device is level, bounce either left or right as they hit the pegs. Eventually they are collected into bins at the bottom, where the height of bead columns accumulated in the bins approximate a bell curve. Overlaying Pascal's triangle onto the pins shows the number of different paths that can be taken to get to each bin.
Large-scale working models of this device created by Charles and Ray Eames can be seen in the Mathematica: A World of Numbers... and Beyond exhibits permanently on view at the Boston Museum of Science, the New York Hall of Science, and the Henry Ford Museum. The Ford Museum machine was displayed at the IBM Pavilion during the 1964–65 New York World's Fair, later appearing at the Pacific Science Center in Seattle. Another large-scale version is displayed in the lobby of Index Fund Advisors in Irvine, California.
Boards can be constructed for other distributions by changing the shape of the pins or biasing them towards one direction; even bimodal boards are possible. A board for the log-normal distribution (common in many natural processes, particularly biological ones), which uses isosceles triangles of varying widths to 'multiply' the distance the bead travels instead of fixed-size steps which would 'sum', was constructed by Jacobus Kapteyn while studying and popularizing the statistics of the log-normal distribution in order to help visualize it and demonstrate its plausibility. As of 1963, it was preserved in the University of Groningen. There is also an improved log-normal machine that uses skewed triangles whose right sides are longer, thus avoiding a shift of the median of the beads to the left.
Distribution of the beads
If a bead bounces to the right k times on its way down (and to the left on the remaining pegs) it ends up in the kth bin counting from the left. Denoting the number of rows of pegs in a Galton board by n, the number of paths to the kth bin on the bottom is given by the binomial coefficient $\binom{n}{k}$. Note that the leftmost bin is the 0-bin, next to it is the 1-bin, and the furthest one to the right is the n-bin, making the total number of bins equal to n+1 (each row needs no more pegs than the number that identifies the row itself, e.g. the first row has 1 peg and the second 2 pegs, up to the nth row with n pegs, which correspond to the n+1 bins). If the probability of bouncing right on a peg is p (which equals 0.5 on an unbiased level machine), the probability that the ball ends up in the kth bin equals $\binom{n}{k} p^k (1-p)^{n-k}$. This is the probability mass function of a binomial distribution. The number of rows corresponds to the number of trials of the binomial distribution, while the probability p of each pin is the binomial's p.
According to the central limit theorem (more specifically, the de Moivre–Laplace theorem), the binomial distribution approximates the normal distribution provided that the number of rows and the number of balls are both large. Varying the rows will result in different standard deviations or widths of the bell-shaped curve or the normal distribution in the bins.
Another interpretation, more accurate from the physical point of view, is given by entropy: the energy carried by each falling bead is finite, and its collisions at the pegs are chaotic, so there is no way to predict in advance to which side it will fall; yet the mean and variance of each bead's final position are constrained to be finite (the beads never bounce out of the box). The Gaussian shape arises because the normal distribution is the maximum entropy probability distribution for a continuous process with defined mean and variance. The emergence of the normal distribution can thus be interpreted as meaning that all the information each bead carried about the path it travelled has been completely lost through its downhill collisions.
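The binomial picture is easy to verify in simulation. The following Python sketch (illustrative, not from any cited source) drops beads through n rows, counting right bounces, so the bin counts follow a binomial(n, p) distribution and, for large n, approximate a bell curve:

    import random
    from collections import Counter

    def galton(n=10, beads=10000, p=0.5, seed=1):
        # Each bead makes n independent left/right choices; its bin index is
        # the number of right bounces, so counts follow binomial(n, p).
        random.seed(seed)
        bins = Counter(sum(random.random() < p for _ in range(n))
                       for _ in range(beads))
        return [bins.get(k, 0) for k in range(n + 1)]

    counts = galton()
    for k, c in enumerate(counts):
        # Crude text histogram: one '#' per 100 beads in the bin.
        print(f"{k:2d} {'#' * (c // 100)}")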
History
Francis Galton designed his board as part of a presentation for the Royal Institution Discourses on February 27, 1874. His goal was to promote the use of ranking instead of measurement in statistics, so that qualities such as intelligence could be assigned numbers without requiring experimental data. The piling of the balls into a normal distribution was supposed to illustrate how a mean value would emerge from multiple tests. In his 1889 book Natural Inheritance, under the heading "Order in Apparent Chaos", Galton wrote: "I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the Law of Frequency of Error. The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along." However, Galton also wished to demonstrate that extreme values of intelligence result from heredity, in apparent contradiction with his experiment, since it produces extreme values from the dispersion of randomness alone. Aware of this issue, he tried to address it in 1875 by arguing that his box did not reflect situations where bias would be introduced by what he called a main influence factor.
In an 1877 letter to George Darwin, Galton described a second version of the board with two stages, where the compartments at the bottom of the upper stage had small trapdoors that would allow the balls of one chosen compartment to fall through to the second stage. His goal was to illustrate his concept of "reversion to mediocrity", i.e. that without marriage control the "best" parts of the population would mix with the "mediocre", so that their offspring would gradually revert towards an average value. This version, however, was never built.
Games
Several games have been developed using the idea of pins changing the route of balls or other objects:
Bagatelle
Pachinko
Payazzo
Peggle
Pinball
Plinko
The Wall
It is suggested that these games provided inspiration for Galton's device.
External links
Galton Board informational website with resource links
Sir Francis: the Probability Machine - From Chaos to Order - Randomness in Stock Prices from Index Fund Advisors IFA.com
Quincunx and its relationship to normal distribution from Math Is Fun
A multi-stage bean machine simulation (JS)
Pascal's Marble Run: a deterministic Galton board
Log-normal Galton board (animation)
A music video featuring a Galton board by Carl McTague
References
Central limit theorem
Normal distribution
Data and information visualization | Galton board | Mathematics | 1,552 |
7,857,807 | https://en.wikipedia.org/wiki/Helio%20Kickflip | The Kickflip, produced by VK Mobile, was one of Helio's two launch devices and was marketed heavily to MySpace users. The Kickflip is a swiveling cell phone, white in color and with a flat (screen-only) front. Its features included a 2-megapixel camera, 90 minutes of video recording, side buttons, a QVGA screen, and battery life of 8 days of standby or 3 hours of talk time. Reviewers at PC Magazine and Infosync lauded the phone's design, but noted the lack of Bluetooth capability and a wide range of bugs in the phone's applications, which affected its basic functionality.
References
Helio (wireless carrier)
Personal digital assistants | Helio Kickflip | Technology | 148 |
4,698,763 | https://en.wikipedia.org/wiki/Plus-size%20clothing | Plus-size clothing is clothing proportioned specifically for people above the average clothing size. The application of the term varies from country to country, and according to which industry the person is involved in.
According to PLUS Model magazine, "In the fashion industry, plus size is identified as sizes 18 and over, or sizes 1X-6X and extended size as 7X and up". The article continues "Susan Barone [...] shared, 'Plus sizes are sizes 14W – 24W. Super sizes and extended sizes are used interchangeably for sizes 26W and above. Sometimes the size 26W is included in plus size'."
Such clothing has also been called outsize in Britain, a term that has been losing favor. One example of this is the renaming of "Evans Outsize" to simply "Evans", as well as losing their advertising slogan "Evans – The Outsize Shop", which also featured on their clothing labels. A related term for men's plus-size clothing is big and tall (a phrase also used as a trademark in some countries).
History
Lane Bryant began trading in the early 1900s as a producer of clothing for "Expectant Mothers and Newborn". By the early 1920s, Lane Bryant had started selling clothing under the category 'For the Stout Women', covering bustlines from 38 to 56 inches. Evans, a UK-based plus-size retailer, was founded in 1930. In the 1920s, Brody's, a small boys' clothing store in Oak Park, Michigan (now Bloomfield), introduced "Husky" size clothing.
The large-size fashion revolution of 1977–1998 in the US began after the Fashion Group of NYC released a study predicting the demise of the Baby Boomer Junior Market, as the Boomers were coming of age. Mary Duffy's Big Beauties was the first model agency to work with hundreds of new plus-size clothing lines and advertisers. For two decades, this plus-size category produced the largest per annum percentage increases in ready-to-wear retailing.
Max Mara started Marina Rinaldi, one of the first high-end clothing lines, for plus-size women in 1980.
The first plus-size fashion line to show at Mercedes Benz Fashion Week was Cabiria, featured in the Fashion Law Institute fashion show in the tents at Lincoln Center on September 6, 2013.
On February 6, 2019, luxury e-tailer 11 Honoré, which sells designer clothing in sizes 12 to 24, opened New York Fashion Week with a fashion show focused on size inclusivity. The runway show featured looks from Christian Siriano, Prabal Gurung, Cushnie and Brandon Maxwell. Actress Laverne Cox closed the show wearing a custom dress by designer Zac Posen.
In June 2024, a Fashion Nova campaign promoted as body-positive faced significant backlash for its lack of body diversity. Critics on Instagram highlighted that the campaign predominantly featured models with flat stomachs and hourglass figures, neglecting to represent a wider range of body types, such as those with stretch marks and larger bellies. This controversy underscored the ongoing debate about true inclusivity in fashion marketing.
Consumer reports
Plus-size clothing patterns have traditionally been graded up from a smaller construction pattern. However, many retailers use statistical data collected from their own measuring projects, and from specialized Body Scan Data collection projects to modernize the fit and construction of their garments. U.S. companies Lane Bryant and Catherines teamed up over a three-year period to source data to modernize the companies' garment construction. 14,000 women were measured in what was the most extensive female sizing study in the U.S. in more than 60 years.
Market
Australia
The Australian plus-size clothing market has been growing since at least 1994, with major department stores such as David Jones, Myer, and Target producing their own brand ranges, and an increase in the number of individual boutiques and national chain store outlets across the country. Sizing in Australia is not synchronous with the US; plus-size garments are considered to be size 16 and upward, which is the equivalent of a US size 12. A recent study conducted by IBISWorld reports that "65.2% of the population aged 18 and over are expected to be overweight or obese in 2017-18." This is resulting in more interest and competition in the wider fashion industry and, as such, in more department stores stocking plus-size clothing.
Notable Australian chain store brands for plus-size clothing include Maggie T, Autograph (formerly 1626), Johnny Bigg, Free People and City Chic (formerly Big City Chic). There is also a boom in Australian designer independent plus size labels such as Camilla Jayne, Curvy Chic Sports, Hope & Harvest, 17 Sundays, Sonsee, Lowanna Australia, and Harlow.
United Kingdom
In the UK there are more than 60 brands for plus-size women's clothing; however, only a small number of these brands are manufactured or owned by UK-based companies. High-street stores such as Yours Clothing, Elvi, Evans, Ann Harvey, Dea London and BeigePlus sell only plus-sized garments, while many other brands and department stores carry extended sizes on their shelves, such as Debenhams, River Island, ASOS, Fenwicks and New Look. More recently, stores specifically supplying plus-size sportswear, fitness wear and bras have opened, such as State of Mind, Charlotte Jackson, Eve Activewear, and We Fit In. Notable online sites also include ASOS.com, Dearcurves.com and Style908. Anna Scholz has been creating clothes for the high-end market since 1995.
Specialist plus-size brands (found in independent plus-size shops) known to be active in the UK (2010) include: Hebbeding (the Netherlands), Dearcurves(UK), Escaladya (Germany), Martine Samoun (Belgium), Marina Rinaldi (Italy), Persona (Italy), Elena Grunert (Germany), Elena Miro (Italy), Verpass (Germany), Chalou (Germany), Kirsten Krog (Denmark), Wille (Germany), Jomhoy (Spain), Yoek (Netherlands), Be The Queen (France), Alain Weiz (France), Tummy Tuck Not Your Daughters Jeans NYDJ (USA), Anathea by Didier Parakian (France), Fred Sabatier (France), Tia (Denmark), Rofa (Germany), Jorli (Denmark), NP (Finland), OpenEnd (Germany), Sumissura (Switzerland), A Big Attitude (USA), Terry Precision Cycling (USA), and Carmakoma (Denmark).
In November 2013, the Debenhams department store chain indicated that it plans to add Size 16 plus-size mannequins in all 170 UK stores.
United States
Notable women's specialty plus-size clothing retail market include Lane Bryant (Ascena Retail Group), Avenue (Avenue Stores, LLC), Torrid, and Ashley Stewart (Ashley Stewart, Inc.).
Walmart also offers a limited but inexpensive plus-size apparel line. The department stores J. C. Penney, Kohl's and Macy's also offer plus-size apparel. Torrid is a retailer geared toward plus-size young adults. International online retailers, such as Simply Be (N Brown) from the UK have started marketing in the United States. Part & Parcel, a social commerce company focused exclusively on clothing for plus-size women, launched in May 2019.
On the men's side, Destination XL Group, Inc. is a major specialty retailer of men's big and tall apparel, with over 300 retail stores throughout the United States, Canada and London, England.
See also
Plus-size model
Fit model
Inclusive sizing
Notes
The purpose of the study is to determine the current average clothing size of adult American women. Secondary data of average body measurements from the most recently published National Health and Nutritional Examination Surveys were compared to ASTM International industry clothing size standards.
References
Sizes in clothing
Fashion articles needing expert attention | Plus-size clothing | Physics,Mathematics | 1,697 |
28,357,868 | https://en.wikipedia.org/wiki/Remarks%20on%20the%20Foundations%20of%20Mathematics | Remarks on the Foundations of Mathematics (Bemerkungen über die Grundlagen der Mathematik) is a book of Ludwig Wittgenstein's notes on the philosophy of mathematics. It was translated from German to English by G.E.M. Anscombe, edited by G.H. von Wright and Rush Rhees, and first published in 1956. The text has been produced from passages in various sources by selection and editing. The notes were written during the years 1937–1944, and a few passages are incorporated in the Philosophical Investigations, which was composed later.
When the book appeared it received many negative reviews mostly from working logicians and mathematicians, among them Michael Dummett, Paul Bernays, and Georg Kreisel. Kreisel's scathing review received particular attention although he later distanced himself from it.
In later years however it received more positive reviews. Today Remarks on the Foundations of Mathematics is read mostly by philosophers sympathetic to Wittgenstein and they tend to adopt a more positive stance.
Wittgenstein's philosophy of mathematics is expounded chiefly through simple examples, on which further skeptical comments are made. The text offers an extended analysis of the concept of mathematical proof and an exploration of Wittgenstein's contention that philosophical considerations introduce false problems in mathematics. Wittgenstein in the Remarks adopts an attitude of doubt in opposition to much orthodoxy in the philosophy of mathematics.
Particularly controversial in the Remarks was Wittgenstein's "notorious paragraph", which contained an unusual commentary on Gödel's incompleteness theorems. Multiple commentators read Wittgenstein as misunderstanding Gödel. In 2000 Juliet Floyd and Hilary Putnam suggested that the majority of commentary misunderstands Wittgenstein, but their interpretation has not met with approval.
The debate has centred on the so-called Key Claim: if one assumes that P is provable in PM, then one should give up the "translation" of P by the English sentence "P is not provable".
Wittgenstein does not mention the name of Kurt Gödel, who was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and Tractatus Logico-Philosophicus dominated the circle's thinking; multiple writings of Gödel in his Nachlass express his antipathy toward Wittgenstein and his belief that Wittgenstein wilfully misread the theorems. Some commentators, such as Rebecca Goldstein, have hypothesized that Gödel developed his logical theorems in opposition to Wittgenstein.
References
External links
Sorin Bangu, Ludwig Wittgenstein: Later Philosophy of Mathematics, IEP
Victor Rodych, Wittgenstein's Philosophy of Mathematics, The Stanford Encyclopedia of Philosophy
1953 non-fiction books
Books by Ludwig Wittgenstein
Philosophy of mathematics literature
Logic literature | Remarks on the Foundations of Mathematics | Mathematics | 586 |
1,696,611 | https://en.wikipedia.org/wiki/Archy%20%28software%29 | Archy is a software system whose user interface introduced a different approach to interacting with computers compared with traditional graphical user interfaces. Designed by human-computer interface expert Jef Raskin, it embodies his ideas and established results about human-centered design described in his book The Humane Interface. These ideas include content persistence, modelessness, a nucleus with commands instead of applications, navigation using incremental text search, and a zooming user interface (ZUI). The system was being implemented at the Raskin Center for Humane Interfaces under Raskin's leadership. Since his death in February 2005 the project has been continued by his team, which later shifted focus to the Ubiquity extension for the Firefox browser.
Archy in large part builds on Raskin's earlier work with the Apple Macintosh, Canon Cat, SwyftWare, and Ken Perlin's Pad ZUI system. It can be described as a combination of Canon Cat's text processing functions with a modern ZUI. Archy is more radically different from established systems than are Sun Microsystems' Project Looking Glass and Microsoft Research's "Task Gallery" prototype. While these systems build upon the WIMP desktop paradigm, Archy has been compared as similar to the Emacs text editor, although its design begins from a clean slate.
Archy used to be called The Humane Environment ("THE"). On January 1, 2005, Raskin announced the new name, and that Archy would be further developed by the non-profit Raskin Center for Humane Interfaces. The name "Archy" is a play on the Center's acronym, R-CHI. It is also an allusion to Don Marquis' archy and mehitabel poetry. Jef Raskin jokingly stated: "Yes, we named our software after a bug" (a cockroach), further playing with the meaning of bugs in software.
Basic concept
The stated goal of Archy is to design a software system starting from an understanding of human cognition and the needs of the user, rather than from a software, hardware, or marketing viewpoint. It aims to be usable by disabled persons, the technology-averse, as well as computer specialists. This ambitious plan to build a general purpose environment that is easy to use for anyone is based on designing for the common cognitive capabilities of all humans.
The plan includes making the interface as "modeless" as possible, to avoid mode errors and encourage habituation. In order to achieve this, modal features of current graphical user interfaces, like windows and separate software applications, are removed.
Features
Persistence
All content in Archy is persistent. This eliminates the need for, and the concept of, saving a document after editing it. The system state is preserved and safe from crashes and power outages: if the system crashes or power goes off, one simply restarts the system and takes up working where one left off when the problem occurred.
Universal undo
A detailed history of the user's interaction allows all actions to be undone, back to the user's very first action performed within Archy, and redone again up to the most recent action. Universal and unlimited undo is one key element of the design goals stated in The Humane Interface, since it allows all the user's work to be recovered in any case.
Leaping
A main feature of the interface is Leaping, a means of moving on-screen via incremental text-search. The system provides two commands, Leap-forward and Leap-backward, invoked through dedicated keys (meant to be pressed with the thumbs), that move the cursor to the next and prior position that contains the search string. Leaping is performed as a quasimode operation: press the Leap key and, while holding it, type the text that you want to search; finally release the Leap key. This process is intended to habituate the user and turn cursor positioning into a reflex.
Leaping to document landmarks such as next or previous word, line, page, section, and document amounts to leaping to Space, New line, Page, and Document characters, which are inserted using the Spacebar, Enter, Page and Document keys respectively. On a standard computer keyboard, Archy uses the Alt keys as Leap keys, Backquote (`) as a Document character and Tilde (~) as a Page character.
The cursor can still be moved forward and back by one character using the Left and Right arrow keys, and the text can be scrolled up and down by one line using the Up and Down arrow keys. This is known as Creeping.
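The leap mechanism can be sketched as incremental search over a text buffer. The helper names in the following Python sketch are hypothetical, not taken from the Archy source; the pattern grows by one character per keystroke while the Leap key is held:

    def leap_forward(buffer, cursor, pattern):
        # Position of the next occurrence of `pattern` after `cursor`,
        # wrapping around; cursor is unchanged if the pattern is absent.
        pos = buffer.find(pattern, cursor + 1)
        if pos == -1:
            pos = buffer.find(pattern)          # wrap to the start
        return cursor if pos == -1 else pos

    def leap_backward(buffer, cursor, pattern):
        # Position of the previous occurrence before `cursor`, wrapping.
        pos = buffer.rfind(pattern, 0, cursor)
        if pos == -1:
            pos = buffer.rfind(pattern)         # wrap to the end
        return cursor if pos == -1 else pos

    text = "the quick brown fox jumps over the lazy dog"
    cursor, pattern = 0, ""
    for ch in "la":                 # keys typed while the Leap key is held
        pattern += ch
        cursor = leap_forward(text, cursor, pattern)
    print(cursor, text[cursor:cursor + 4])      # lands on "lazy"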
Commands
Another feature is intended to provide the power of a command line interface in a graphical user interface (GUI). Command names can be inserted and executed at any place in the interface. This reduces the need to move a mouse pointer to a menu bar or toolbox to execute commands, and allows for quickly composing the results of several commands in sequence.
To use a command the user types the command name while holding down the command key (the caps-lock key). Most command names are filled in automatically, so the user needs to type only until the full name appears.
Since a command can be used anywhere, applications are obsolete as the core of the interface's design. Installing a new package of commands provides a functionality related to their common task. In this way, the user is not restricted to the closed environment of a single application in order to use these functions. Rather, the API is exposed to the user so that these functions can be used system-wide and combined in ways unforeseen by the designer. Ideally, commands could be installed in the system one by one, so that users can acquire and install only what they need.
Many commands operate on selected areas of text. Selections are displayed by using a background color. Several selections can be active at once, and the color of a given old selection changes as newer selections are made. For example, to send an e-mail message, you might type and select the text of the message, type and select the address of the recipient, and invoke the SEND MAIL command.
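A minimal sketch of this modeless command dispatch (the command registry and helper names below are invented for illustration) resolves a typed prefix against a global set of commands and applies the chosen command to the current selection:

    # Hypothetical global command registry; real Archy commands differ.
    COMMANDS = {
        "CALC": lambda sel: str(eval(sel)),     # toy: evaluate selected text
        "UPPERCASE": lambda sel: sel.upper(),
        "SEND MAIL": lambda sel: f"(pretend: mailed {sel!r})",
    }

    def complete(prefix):
        # Return the unique command matching `prefix`, or None if ambiguous.
        matches = [name for name in COMMANDS if name.startswith(prefix.upper())]
        return matches[0] if len(matches) == 1 else None

    def run(prefix, selection):
        name = complete(prefix)
        if name is None:
            return f"'{prefix}' is ambiguous or unknown"
        return COMMANDS[name](selection)

    print(run("UP", "hello"))       # -> HELLO  ("UP" completes to UPPERCASE)
    print(run("C", "2+3"))          # -> 5      ("C" completes to CALC)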
Zoomworld
Archy's zooming user interface (ZUI) element is called Zoomworld. It is a spatial, non-windowing interface: an infinite plane expanding in all directions and zoomable to infinite detail. Extra information on an item is provided by "flying" closer to inspect it, and the destinations of hyperlinks are inserted in-place instead of being represented by textual reference. Browsing in this Zoomworld can be done with a mouse; leap functions are used as a search facility.
Archy's project developed some guidelines for Zoomworld and a working proof of concept, but the built prototype did not include code for zooming.
Project members claim that a similar, but limited, zooming interface was tested in real world applications with remarkable success. With a single minute of training, novices were competent and comfortable with the system. Computer experts reportedly took longer, since they had more preconceived expectations to unlearn. The zooming hospital information system is described in The Humane Interface, including some screen shots.
License
Archy was initially licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 2.0 License. This simply stated that "you must give the original author credit, you may not use this work for commercial purposes, and if you alter, transform, or build upon this work, you may distribute the resulting work only under a license identical to this one."
Given the "non-commercial" clause, it is not free software. In November 2017, Aza Raskin changed the license to the MIT License
Commentary
The interface and functionality of The Humane Environment were compared to, and found similar to, those of the Emacs editor, for its text-based interface without dialog boxes and its reliance on incremental search and a modifier key for issuing commands. Archy puts an increased focus on learnability and an emphasis on removing modes, which are common in Emacs. The requirement for the LEAP key to be pressed while searching, as a quasimode, has been criticized as uncomfortable. Note, however, that the LEAP keys on the original Canon Cat are the two large red keys below the space bar; Archy uses the two ALT keys on either side of the space bar, found on most standard keyboards, as a compromise for using it on commonly available hardware.
See also
Ubiquity, a Firefox extension based on the same principles as Archy created by Mozilla Labs with Aza Raskin in the design team.
References
Notes
Interview with Aza Raskin about The Humane Environment project.
Raskin's notes for a film by director Jennie Bourne
External links
Archy project at Archive.org
Last? available copy of Archy
Aza Raskin explains what happened to the Archy project
Raskin's summary of the principles and design rules in "The Humane Interface"
Enso A humane interface project
Apparent Git archive of the Archy source code
User interfaces
Jef Raskin | Archy (software) | Technology | 1,857 |
4,206,155 | https://en.wikipedia.org/wiki/The%20World%20Economy%3A%20Historical%20Statistics | The World Economy: Historical Statistics is a landmark book by Angus Maddison. Published in 2004 by the OECD Development Centre, it studies the growth of populations and economies across the centuries: not just the world economy as it is now, but how it was in the past.
Among other things, it showed that Europe's gross domestic product (GDP) per capita progressed faster than that of the leading Asian economies from 1000 AD onwards, again reaching a higher level than elsewhere from the 15th century, while Asian GDP per capita remained static until 1800, when it even began to shrink in absolute terms, as Maddison demonstrated in a subsequent book. At the same time, he showed the Asian economies recovering lost ground from the 1950s, and documented the much faster rise of Japan and East Asia and the economic shrinkage of Russia in the 1990s.
The book is a mass of statistical tables, mostly on a decade-by-decade basis, along with notes explaining the methods employed in arriving at particular figures. It is available both as a paperback book and in electronic format. Some tables are available on the official website.
See also
List of regions by past GDP (PPP) per capita
Angus Maddison statistics of the ten largest economies by GDP (PPP)
Maddison Project, a project started in March 2010 to continue Maddison's work after his death
References
External links
Angus Maddison's Homepage at the Groningen Growth and Development Centre
Official website of The World Economy
2004 non-fiction books
Demography
Economic growth
Books about economic history | The World Economy: Historical Statistics | Environmental_science | 313 |
49,977,940 | https://en.wikipedia.org/wiki/Dp-1%20holin%20family | The Bacteriophage Dp-1 Holin (Dp-1 Holin) Family (TC# 1.E.24) is a family of proteins present in several Gram-positive bacteria (i.e., Enterococcus faecalis) and their phages. The genes coding for the lytic system of the pneumococcal phage Dp-1 have been cloned and characterized. The holin of phage Dp-1 is 74 amino acyl residues (aas) long with two putative transmembrane segments (TMSs) (residues 12-32 and 39-57). The lytic enzyme of Dp-1 (Pal), an N-acetyl-muramoyl-L-alanine amidase, shows a modular organization similar to that described for the lytic enzymes of Streptococcus pneumoniae and its bacteriophages, in which a change in the order of the functional domains changes the enzyme specificity. A representative list of proteins belonging to the Dp-1 family can be found in the Transporter Classification Database.
See also
Holin
Lysin
Transporter Classification Database
References
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins
Holins | Dp-1 holin family | Biology | 272 |
22,305,311 | https://en.wikipedia.org/wiki/Culture%20assimilators%20%28programs%29 | Culture Assimilators are culture training programs first developed at the University of Illinois in the 1960s. A team from the psychology department of that university was asked by the Office of Naval Research to develop a training method that would “make every sailor an ambassador of the United States.” The team consisted of Fred Fiedler, whose major research was the study of leadership, Charles Osgood, whose major research was on interpersonal communication, Larry Stolurow, whose major research was on the use of computers for training, and Harry Triandis, whose major research was the study of the relationship between culture and social behavior.
Culture assimilators
The team developed methods for the study of culture and social behavior (Triandis, 1972), and the information was formatted in such a way that computers could be used in the training.
The procedure started with interviews with people who had experience in two cultures, e.g., A and B. The questions asked for “episodes” or “critical incidents” that surprised and confused members of culture B when they interacted with people from culture A. Student samples from the two cultures were asked to explain why the problem or confusion occurs. When the explanations given by the students from culture A were different from the explanations given by members from culture B, there was something to teach. For example, teachers from the U.S. are annoyed that pupils from Latin America do not look at them when they talk. It was explained that in the U.S. people usually pay attention to the speaker by looking at the speaker, but in Latin America it is “insolent” to look at a high status person in the eyes: one is supposed to look down.
The format of assimilators is as follows. An episode is described (page 1) followed by 4 or 5 explanations of why there is a problem or difficulty. For example, why do people from culture A behave that way? The trainee selects the explanation that s/he thinks is best. The explanations are selected such that when people from culture B are learning about culture A most of the explanations are frequently given by people in culture B and one explanation comes from culture A. After the trainee selects an explanation s/he is asked to turn to a page (pages 2, 3, 4, 5) that gives feedback about each explanation. If the explanation selected by the trainee is incorrect, the trainee is told that this is not the best explanation, and to try another explanation. When the trainee picks the correct explanation, the feedback is extensive, describing cultural similarities and differences between cultures A and B. Assimilators that use feedback that includes culture theory, such as the differences between collectivist and individualist cultures, are especially effective. Gradually, the trainee from culture B starts thinking like the people from culture A. In a way, s/he learns to get “into the shoes” of the people from the other culture.
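Because the episodes were formatted so that computers could deliver the training, the branching logic of a single item can be sketched in a few lines of Python (the episode content condenses the eye-contact example above; all names are illustrative):

    # Minimal sketch of an assimilator item: an episode, candidate
    # explanations, and feedback that loops until the culturally
    # correct explanation is chosen.
    EPISODE = {
        "story": "A Latin American pupil looks down while the U.S. teacher speaks.",
        "choices": {
            "a": ("The pupil is not paying attention.", False),
            "b": ("The pupil is hiding something.", False),
            "c": ("Looking down shows respect for a high-status person.", True),
        },
        "feedback": {
            False: "This is not the best explanation; please try another.",
            True: ("Correct: in many Latin American cultures, looking a "
                   "high-status person in the eyes is considered insolent."),
        },
    }

    def run_item(episode, answers):
        # `answers` stands in for the trainee's successive choices.
        print(episode["story"])
        for key in answers:
            text, correct = episode["choices"][key]
            print(f"Chose {key}: {text}\n -> {episode['feedback'][correct]}")
            if correct:
                break

    run_item(EPISODE, answers=["a", "c"])   # trainee tries 'a', then 'c'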
The result is a training program that makes people more comfortable in working in the other culture. This was tested by assigning trainees randomly to two groups. One group gets the assimilator training, and the other gets geography training, such as what are the physical features of culture A. After the training the trainees go to the other culture and the effectiveness of their interactions in that culture and the comfort they experience while they live in the other culture is measured. The results show that the assimilator training is helpful. Training does not make the trainee an ambassador, but the trained individuals have a better experience in the other culture than those who did not receive the culture assimilator training.
Further reading
Albert, R., & Triandis, H. C. (1979). Cross-Cultural training: A theoretical framework and some observations. In H. T. Trueba & C. Barnett-Mizrahi (Eds.), Bilingual multicultural education and the professional: The theory to practice. Rowley, MA: Newbury House.
Bhawuk, D. P. S., Podsiadlowski, A., Graf, J., & Triandis, H. C. (2002). Corporate Strategies for Managing Diversity in the Global Workplace. In G. R. Ferris & M. R. Buckley, & D. B. Fedor, (Eds.), Human resource management: Perspectives, context, functions, and outcomes (pp. 112–145). Englewood Cliffs, NJ: Prentice-Hall.
Brislin, R., Cushner, K., Cherrie, C. & Yong, M. (1986). Intercultural interactions: A practical guide. Beverley Hills, CA. Sage.
Fiedler, F. E., Mitchell, T., & Triandis, H. C. (1971). The culture assimilator: An approach to cross-cultural training. Journal of Applied Psychology, 55, 95-102.
Landis, D. and Bhagat, R. (1996) (Eds.) Handbook of Cross-Cultural Training. Second Edition. Thousand Oaks, CA: Sage.
Mitchell, T. R., Dossett, D. I., Fiedler, F. E., & Triandis, H. C. (1972). Culture training: Validation evidence for the culture assimilator. International Journal of Psychology, 7, 97-104.
Triandis, H. C. and Albert, R. (1987). Cross-cultural perspectives on organizational communication. In F. M. Jablin, L. L. Putnam, K. H. Roberts and L. W. Porter (Eds.), Handbook of organizational communication. Beverly Hills, Sage. pp. 264–295.
Triandis, H. C., with Vassiliou, V., Vassiliou, G., Tanaka, Y., and Shanmugam, A. V. (1972). The Analysis of Subjective Culture. New York: Wiley.
Weldon, D. E., Carlston, D. E., Rissman, A. K., Slobodin, L., & Triandis, H. C. (1975). A laboratory test of effects of culture assimilator training. Journal of Personality and Social Psychology, 32, 300-310.
University of Illinois Urbana-Champaign
Interpersonal relationships
Cross-cultural studies | Culture assimilators (programs) | Biology | 1,319 |
25,852,537 | https://en.wikipedia.org/wiki/Computer%20science%20in%20sport | Computer science in sport is an interdisciplinary discipline whose goal is to combine the theoretical and practical aspects and methods of informatics and sport science. The main emphasis of this interdisciplinarity is placed on the application and use of computer-based, but also mathematical, techniques in sport science, aiming at the support and advancement of theory and practice in sports. The reason computer science has become an important partner for sport science is mainly connected with "the fact that the use of data and media, the design of models, the analysis of systems etc. increasingly requires the support of suitable tools and concepts which are developed and available in computer science".
Historical background
Going back in history, computers were first used in sports in the 1960s, when the main purpose was to accumulate sports information. Databases were created and expanded in order to launch documentation and dissemination of publications such as articles and books containing any kind of knowledge related to sports science. By the mid-1970s the first organization in this area, IASI (the International Association for Sports Information), had been formally established. Congresses and meetings were organized more often, with the aim of standardizing and rationalizing sports documentation. Since the area was less computer-oriented at that time, specialists speak of sports information rather than sports informatics when referring to the beginnings of this field of science.
With the progress of computer science and the arrival of more powerful computer hardware in the 1970s, the history of computer science in sport proper began. This was also the first time the term was officially used, and it marked the start of an important evolution in sports science.
In the early stages of the field, statistics on biomechanical data, such as different kinds of forces and rates, played a major role. Scientists started to analyze sports games by collecting and examining such values and features in order to interpret them. Later on, with the continuous improvement of computer hardware (in particular microprocessor speed), many new scientific and computing paradigms were introduced and integrated into computer science in sport, for example modeling, simulation, pattern recognition, and design.
As another result of this development, the term 'computer science in sport' was added to the encyclopedia of sports science in 2004.
Areas of research
The importance and strong influence of computer science as an interdisciplinary partner for sport and sport science is mainly proven by the research activities in computer science in sport. The following IT concepts are thereby of particular interest:
Data acquisition and data processing
Databases and expert systems
Modelling (mathematical, IT based, biomechanical, physiological)
Simulation (interactive, animation etc.)
Presentation
Based on the fields from above, the main areas of research in computer science in sport include amongst others:
Training and coaching
Biomechanics
Sports equipment and technology
Computer-aided applications (software, hardware) in sports
Ubiquitous computing in sports
Multimedia and Internet
Documentation
Education
Research communities
A clear demonstration of the evolution and spread of computer science in sport is the fact that people now conduct research in this area all over the world. Since the 1990s, many new national and international organizations on the topic of computer science in sport have been established. These associations regularly organize congresses and workshops with the aim of disseminating and exchanging scientific knowledge and information on all sorts of topics regarding the interdisciplinary discipline.
Historical survey
As a first example, in Australia and New Zealand scientists have built up the MathSport group of ANZIAM (Australia and New Zealand Industrial and Applied Mathematics), which since 1992 organizes biennial meetings, initially under the name "Mathematics and Computers in Sport Conferences", and now "MathSport". Main topics are mathematical models and computer applications in sports, as well as coaching and teaching methods based on informatics.
The European community was also among the leading driving forces in the emergence of the field. Workshops on the topic had been successfully organized in Germany since the late 1980s. In 1997 the first international meeting on computer science in sport was held in Cologne. The main aim was to disseminate and share applications, ideas and concepts for the use of computers in sports, which would also contribute to internationalization and thus boost research work in this area.
Since then, such international symposia have taken place every two years all over Europe. As the first conferences were a resounding success, it was decided to go further, and the foundation of an organization was the logical consequence. This step was accomplished in 2003, when the International Association of Computer Science in Sport (IACSS) was established during the 4th international symposium in Barcelona, with Prof. Jürgen Perl chosen as the first president. A few years earlier, the first international e-journal on this topic (the International Journal of Computer Science in Sport) had already been launched. The internationalization is further confirmed by the fact that three conferences have taken place outside Europe - in Calgary (Canada) in 2007, Canberra (Australia) in 2009 and Shanghai (China) in 2011. During the symposium in Calgary the presidency also changed: it was assigned to Prof. Arnold Baca, who was re-elected in 2009 and 2011. The following Symposia on Computer Science in Sport took place in Europe again, in Istanbul (Turkey) in 2013 and in Loughborough (UK) in 2015. In 2017 the 11th Symposium on Computer Science in Sport took place in Constance (Germany). During the conference in Istanbul Prof. Martin Lames was elected as president of the IACSS. He was re-elected in 2015, 2017 and 2019.
The 12th International Symposium of Computer Science in Sports was held in Moscow (Russia) from 8 to 10 July 2019: https://iacss2019.ru/
National organizations
In addition to the international associations from above, currently the following national associations on computer science in sport exist (if available, the web addresses are also given):
Austrian Association of Computer Science in Sport - http://www.sportinformatik.at
British Association of Computer Science in Sport and Exercise
Chinese Association of Computer Science in Sport
Croatian Association of Computer Science in Sport
Section Computer Science in Sport of the German Association of Sport Science - http://www.dvs-sportinformatik.de (in German)
Swiss Association of Computer Science in Sport SACSS - http://sacss.org
Indian Federation of Computer Science in Sport - http://www.ifcss.in
Portuguese Association of Computer Science in Sport
Turkish Association of Computer Science in Sport
Russian Association of Computer Science in Sport - https://www.racss.ru/
References
Further reading
Baca, A. (2015). Computer Science in Sport - Research and practice, Routledge.
External links
MathSport - ANZIAM (Australia and New Zealand Industrial and Applied Mathematics)
ECSS (European College of Sport Science)
ISEA (International Sports Engineering Association)
IACSS (International Association of Computer Science in Sport)
Computer science
Sports science | Computer science in sport | Technology | 1,427 |
556,228 | https://en.wikipedia.org/wiki/ArsDigita%20Community%20System | The ArsDigita Community System (ACS) was an open source toolkit for developing community web applications developed primarily by developers associated with ArsDigita Corporation. It was licensed under the terms of the GNU GPL, and is one of the most famous products to be based completely on AOLserver. Although there were several forks of the project, the only one that is still actively maintained is OpenACS.
Features of ACS included a core set of APIs, datamodels, and database routines for coordinating information common to all community web applications, as well as modules such as workflow management, CMS, messaging, bug/issue tracking, project tracking, e-commerce, and bboards.
History
ACS was built in the mid-1990s to support the photo.net online community as well as a variety of Internet services from Hearst Corporation.
Its creator, ArsDigita, was founded in 1997 by developers such as Philip Greenspun. The initial developers included Tracy Adams, Ben Adida, Eve Andersson, Jin S. Choi, Philip Greenspun, Aurelius Prochazka, and Brian Tivol.
The ACS was originally written using the Oracle database and the AOLserver threaded web server, and thus was a combination of SQL, HTML templates, and Tcl code to merge database results with templates. ACS 3.4, however, was also available with JavaServer Pages to run with Apache and Tomcat. In 2001, the code tree was forked, with the Tcl code base being maintained and refactored by one group of developers, while the product line was re-written in Java EE.
In 2002, Red Hat acquired ArsDigita and all of its assets. As a result of this, the Java version was renamed "Red Hat CCM", and official support for the Tcl version ceased. However, the Tcl version continued to be maintained by the OpenACS community.
OpenACS
The Open Architecture Community System provides:
A set of applications that can be used to deploy web sites that are strong on collaboration. Some of the applications are Workflow, CMS, Messaging, Bug/Issue tracker, e-commerce, blogger, chat and forums.
An application development toolkit that provides an extensive set of APIs and services to enable quick development of new applications. Features include permissioning, full internationalization, Ajax, form builder, object model, automated testing, subsites and a powerful package manager.
OpenACS runs on AOLserver and NaviServer with either Oracle or PostgreSQL as its database.
Projects that were or are based on OpenACS include dotLrn, dotFolio, dotCommunity, dotConsult, Project-Open, and Voice Online Communities.
See also
Web content management
Solution stack
References
External links
ACS may be downloaded from http://www.eveandersson.com/arsdigita/acs-repository/
Official website
The philosophy behind the toolkit is explained at http://philip.greenspun.com/panda/community
Free software programmed in Tcl
Free web development software
Information technology management
Publication management software
Red Hat software | ArsDigita Community System | Technology | 664 |
31,302,683 | https://en.wikipedia.org/wiki/English%20brewery%20cask%20units | Capacities of brewery casks were formerly measured and standardised according to a specific system of English units. The system was originally based on the ale gallon of . In United Kingdom and its colonies, with the adoption of the imperial system in 1824, the units were redefined in terms of the slightly smaller imperial gallon (). The older units continued in use in the United States.
Historically the terms beer and ale referred to distinct brews. From the mid 15th century until 1803 in Britain "ale" casks and "beer" casks differed in the number of gallons they contained.
Units
Tun
The beer tun is equal to double the size of a butt: it is therefore exactly 981.95544 litres, or approximately 982 litres.
Butt (Imperial)
The butt of beer is equal to half a tun or two hogsheads, and is therefore exactly 490.97772 litres, or approximately 491 litres.
Hogshead
The hogshead of beer and ale was equal to a quarter of a tun, half a butt, or three kilderkins. This unit is about 3% larger than the wine hogshead.
hogshead (Ale)
In the mid-15th century the ale hogshead was defined as 48 ale or beer gallons (221.8153 L). In 1688 the ale hogshead was redefined to be 51 ale or beer gallons (235.67875 L). In 1803 the ale hogshead was again redefined to be 54 ale or beer gallons (249.54221 L), equivalent to the beer hogshead.
hogshead (Beer)
From the mid 15th century until 1824 the beer hogshead was defined as 54 ale or beer gallons.
hogshead (Ale) (Imperial), hogshead (Beer) (Imperial)
In the United Kingdom and its colonies, with the 1824 adoption of the imperial system, the ale or beer hogshead was redefined to be 54 imperial gallons. The ale or beer hogshead is therefore exactly 245.48886 litres, or approximately 245 litres.
Barrel
The barrel of beer or ale was equal to two kilderkins or two-thirds of a beer or ale hogshead. This is about 37% larger than the wine barrel.
barrel (Ale)
As with the hogshead, the ale barrel underwent various redefinitions. Initially 32 ale or beer gallons (147.9 L), it was redefined in 1688 as 34 ale or beer gallons (157.1 L), and again in 1803 as 36 ale or beer gallons (166.4 L).
barrel (Beer)
The beer barrel was defined as 36 ale or beer gallons until the adoption of the imperial system.
barrel (Ale) (Imperial), barrel (Beer) (Imperial)
The adoption of the imperial system saw the beer or ale barrel redefined to be 36 imperial gallons, which is exactly 163.65924 litres, or approximately 164 litres.
Kilderkin
The kilderkin (from the Dutch for "small cask") is equal to half a barrel or two firkins.
kilderkin (Ale)
The ale kilderkin likewise underwent various redefinitions. Initially 16 ale or beer gallons (73.94 L), it was redefined in 1688 as 17 ale or beer gallons (78.56 L) and again in 1803 as 18 ale or beer gallons (83.18 L).
kilderkin (Beer)
Until the adoption of the imperial system the beer kilderkin was defined as 18 ale or beer gallons.
kilderkin (Ale) (Imperial), kilderkin (Beer) (Imperial)
With the adoption of the imperial system the kilderkin was redefined to be 18 imperial gallons, which is exactly 81.82962 litres, or approximately 82 litres.
The kilderkin is still in use today. It is the unit of choice of CAMRA, the Campaign for Real Ale, for calculating beer quantities for beer festivals in the UK. Ales are usually delivered in firkins; cider and other drinks are usually in boxes, bottles or other containers measured in gallons or litres; and all (except wine) are sold in pints or parts thereof. For CAMRA internal accounting, all are calculated in kilderkins. A kilderkin is a 144-pint container, but it does not hold 144 pints of consumable cask-conditioned beer (see the Firkin section below for an explanation).
Firkin
The ale or beer firkin (from Middle Dutch meaning "fourth") is a quarter of an ale or beer barrel or half a kilderkin. This unit is much smaller than the wine firkin. Casks in this size (themselves called firkins) are the most common container for cask ale.
firkin (Ale)
From the mid 15th century until 1688 the ale firkin was defined as 8 ale or beer gallons (36.97 litres). In 1688 the ale firkin was redefined to be 8½ ale or beer gallons (39.28 L). In 1803 the ale firkin was again redefined to be 9 ale or beer gallons (41.59 L), equivalent to the beer firkin.
firkin (Beer)
From the mid 15th century until 1824 the beer firkin was defined as 9 ale or beer gallons.
firkin (Ale) (Imperial), firkin (Beer) (Imperial)
The beer or ale firkin was redefined to be 9 imperial gallons in 1824. It is therefore exactly 40.91481 litres, or approximately 41 litres. Most English cask-conditioned beer bought by publicans is delivered in 72-pint containers (i.e. firkins), but the volume of consumable beer in the container is far lower. For example, a 72-pint container of Greene King IPA currently holds only 66 "full" pints of consumable beer that can be sold or drunk; the other 6 pints are sediment, finings, beer stone, hops, proteins, or short of an imperial measure, and therefore not consumable or saleable. HMRC does not charge duty on any portion of beer that cannot be consumed, and brewers should make a declaration to the first customer (i.e. the publican) stating the actual duty-paid contents of the beer, so customers are fully aware of how much is being sold to them.
Pin (Imperial)
A pin is equal to half a firkin, and is therefore exactly 20.457405 litres, or approximately 20.5 litres.
Plastic versions of these casks are known as "polypins" and are popular in homebrewing and the off-trade (deliveries for home consumption). They are also popular at beer festivals where non-standard beers are sold.
Gallon
Originally, a 282 cubic inch ale or beer gallon was used. With the adoption of the imperial system in the United Kingdom and its colonies, the system was redefined in terms of the imperial gallon from 1824.
Chart
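The sizes defined above can be tabulated mechanically. The following minimal Python sketch derives each imperial cask size from its definition in imperial gallons; the exact 4.54609-litre gallon is from this article, while the output format is an arbitrary choice:

GALLON_L = 4.54609  # the imperial gallon, exact by definition

# Cask sizes in imperial gallons, per the definitions above.
casks = {
    "pin": 4.5,
    "firkin": 9,
    "kilderkin": 18,
    "barrel": 36,
    "hogshead": 54,
    "butt": 108,
    "tun": 216,
}

for name, gallons in casks.items():
    print(f"{name:>9}: {gallons:>6} imp gal = {gallons * GALLON_L:9.2f} L")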
See also
English wine cask units
List of unusual units of measurement
Units of measurement
References
Notes
Units of volume
Alcohol measurement | English brewery cask units | Mathematics | 1,384 |
37,281,071 | https://en.wikipedia.org/wiki/Boletus%20miniato-olivaceus | Boletus miniato-olivaceus is a species of bolete fungus in the family Boletaceae. Described as new to science in 1874, it is found in eastern North America, northeast Mexico and southern Brazil.
Taxonomy
The species was first described by American botanist Charles Christopher Frost in 1874, from collections made near Marlboro, Vermont. William Alphonso Murrill transferred the species to the genus Ceriomyces in 1909; this genus has since been folded into Boletus. For many years the species concept of Boletus miniato-olivaceus was unclear, and it was not definitively agreed upon which combination of characteristics separated this species from the similar B. sensibilis, or from other related American boletes such as B. bicoloroides, B. miniatopallescens, and Lanmaoa carminipes. After examining the type specimens of B. miniato-olivaceus as well as several fresh specimens, Roy Halling determined that there was considerable variability in some characters, particularly in the morphology of the cystidia.
Description
The cap is initially convex before flattening out in maturity, and attains a diameter of . The cap surface is dry and smooth. The color in young specimens is red, changing gradually to pale rose-pink or rose-tan with greenish or yellowish tints in maturity. The flesh is white to pale yellow except for directly underneath the cap cuticle, where it is reddish. It has no distinctive taste or odor. When cut or injured, it turns blue, although this reaction may be slow. The pore surface on the underside of the cap is initially yellow but turns dingy olive-green (sometimes with reddish tints) when older. The angular to circular pores number about 1–2 per millimeter. The tubes comprising the hymenophore are deep. The stem is long by thick, and either equal in width through, or tapered at either end. It is solid (i.e., not hollow), dry, and colored yellow with reddish to brownish tinges, especially near the base.
Boletus miniato-olivaceus produces an olive-brown spore print. The spores are somewhat elliptical to spindle-shaped, smooth, and measure 10–15 by 4–6 μm. Fruit bodies are poisonous, and cause gastrointestinal disorders if consumed. They can be used in mushroom dyeing, and produce colors ranging from brown, beige, yellow, or light orange, depending on the mordant used.
Habitat and distribution
Boletus miniato-olivaceus is a mycorrhizal species, and has been shown in the laboratory to form a Hartig net with loblolly pine (Pinus taeda) that is typical of pine mycorrhizae in nature. In nature, the fruit bodies grow singly, scattered, or in groups on the ground. Typically habitats include deciduous or mixed forests. An uncommon species, fruiting occurs from June to October. Its distribution includes eastern Canada south to Florida, extending west to the Great Lakes region.
The bolete was reported from a Mexican beech (Fagus mexicana) forest in Hidalgo, Mexico in 2010.
See also
List of Boletus species
List of North American boletes
References
External links
mineato-olivaceus
Fungi described in 1874
Fungi of North America
Fungus species | Boletus miniato-olivaceus | Biology | 704 |
28,289,092 | https://en.wikipedia.org/wiki/Dynamical%20neuroscience | The dynamical systems approach to neuroscience is a branch of mathematical biology that utilizes nonlinear dynamics to understand and model the nervous system and its functions. In a dynamical system, all possible states are expressed by a phase space. Such systems can experience bifurcation (a qualitative change in behavior) as a function of its bifurcation parameters and often exhibit chaos. Dynamical neuroscience describes the non-linear dynamics at many levels of the brain from single neural cells to cognitive processes, sleep states and the behavior of neurons in large-scale neuronal simulation.
Neurons have been modeled as nonlinear systems for decades, but dynamical systems are not constrained to neurons. Dynamical systems can emerge in other ways in the nervous system. Chemical species models, like the Gray–Scott model, can exhibit rich, chaotic dynamics. Intraneural communication is affected by dynamic interactions between extracellular fluid pathways. Information theory draws on thermodynamics in the development of infodynamics that can involve nonlinear systems, especially with regards to the brain.
History
One of the earliest models of the neuron was based on mathematical and physical modelling: the integrate-and-fire model, which was developed in 1907. Decades later, the discovery of the squid giant axon led Alan Hodgkin and Andrew Huxley (half-brother to Aldous Huxley) to develop the Hodgkin–Huxley model of the neuron in 1952. This model was simplified with the FitzHugh–Nagumo model in 1962. By 1981, the Morris–Lecar model had been developed for the barnacle muscle.
These mathematical models proved useful and are still used in biophysics today, but a late 20th century development propelled the dynamical study of neurons even further: computer technology. The largest issue with physiological equations like the ones above is that they are nonlinear. This made standard analysis impossible, and more advanced kinds of analysis faced a (nearly) endless number of possibilities. Computers opened many doors for all of the hard sciences by making it possible to approximate solutions to nonlinear equations. This is the aspect of computational neuroscience that dynamical systems encompasses.
In 2007, Eugene Izhikevich published the canonical textbook Dynamical Systems in Neuroscience, helping to transform an obscure research topic into an established line of academic study.
Neuron dynamics
Neuron dynamics concern the time evolution of a single neuron's state; the subsections below outline the electrophysiological basis of such models and the excitable behavior they must capture.
Electrophysiology of the neuron
The motivation for a dynamical approach to neuroscience stems from an interest in the physical complexity of neuron behavior. As an example, consider the coupled interaction between a neuron's membrane potential and the activation of ion channels throughout the neuron. As the membrane potential of a neuron increases sufficiently, channels in the membrane open up to allow more ions in or out. The ion flux further alters the membrane potential, which further affects the activation of the ion channels, which affects the membrane potential, and so on. This is often the nature of coupled nonlinear equations. A relatively straightforward example of this is the Morris–Lecar model:

$$C\frac{dV}{dt} = I - g_L(V - V_L) - g_{Ca}\,M_{ss}(V)\,(V - V_{Ca}) - g_K\,N\,(V - V_K)$$

$$\frac{dN}{dt} = \frac{N_{ss}(V) - N}{\tau_N(V)}$$
See the Morris–Lecar paper for an in-depth treatment of the model; a briefer summary of the Morris–Lecar model is given by Scholarpedia.
In this article, the point is to demonstrate the physiological basis of dynamical neuron models, so this discussion will only cover the two variables of the equation:
$V$ represents the membrane's current potential
$N$ is the so-called "recovery variable", which gives us the probability that a particular potassium channel is open to allow ion conduction.
Most importantly, the first equation states that the change of $V$ with respect to time depends on both $V$ and $N$, as does the change in $N$ with respect to time. $M_{ss}$ and $N_{ss}$ are both functions of $V$. So we have two coupled functions, $V(t)$ and $N(t)$.
Different types of neuron models utilize different channels, depending on the physiology of the organism involved. For instance, the simplified two-dimensional Hodgkin–Huxley model considers sodium channels, while the Morris–Lecar model considers calcium channels. Both models consider potassium and leak current. Note, however, that the Hodgkin–Huxley model is canonically four-dimensional.
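To make the coupling concrete, here is a minimal forward-Euler integration of the Morris–Lecar equations in Python. The parameter values are illustrative, textbook-style choices rather than values from the original paper, and the step size is chosen for simplicity, not accuracy:

import math

# Illustrative parameters (assumed, not from the original Morris–Lecar paper).
C, I_app = 20.0, 100.0                  # capacitance, applied current
g_Ca, g_K, g_L = 4.4, 8.0, 2.0          # maximal conductances
V_Ca, V_K, V_L = 120.0, -84.0, -60.0    # reversal potentials
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def m_ss(V):   # steady-state calcium activation, a function of V
    return 0.5 * (1 + math.tanh((V - V1) / V2))

def n_ss(V):   # steady-state potassium activation, a function of V
    return 0.5 * (1 + math.tanh((V - V3) / V4))

def tau_n(V):  # voltage-dependent time scale of the recovery variable
    return 1.0 / (phi * math.cosh((V - V3) / (2 * V4)))

V, N, dt = -60.0, 0.0, 0.05
for _ in range(20000):                  # the two coupled variables evolve together
    dV = (I_app - g_L * (V - V_L) - g_Ca * m_ss(V) * (V - V_Ca)
          - g_K * N * (V - V_K)) / C
    dN = (n_ss(V) - N) / tau_n(V)
    V, N = V + dt * dV, N + dt * dN
print(V, N)    # final state; with these illustrative parameters the trajectory approaches a periodic spiking orbit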
Excitability of neurons
One of the predominant themes in classical neurobiology is the concept of a digital component to neurons. This concept was quickly absorbed by computer scientists where it evolved into the simple weighting function for coupled artificial neural networks. Neurobiologists call the critical voltage at which neurons fire a threshold. The dynamical criticism of this digital concept is that neurons don't truly exhibit all-or-none firing and should instead be thought of as resonators.
In dynamical systems, this kind of property is known as excitability. An excitable system starts at some stable point. Imagine an empty lake at the top of a mountain with a ball in it. The ball is in a stable point. Gravity is pulling it down, so it's fixed at the lake bottom. If we give it a big enough push, it will pop out of the lake and roll down the side of the mountain, gaining momentum and going faster. Let's say we fashioned a loop-de-loop around the base of the mountain so that the ball will shoot up it and return to the lake (no rolling friction or air resistance). Now we have a system that stays in its rest state (the ball in the lake) until a perturbation knocks it out (rolling down the hill) but eventually returns to its rest state (back in the lake). In this example, gravity is the driving force and the spatial dimensions x (horizontal) and y (vertical) are the variables. In the Morris–Lecar neuron, the fundamental force is electromagnetic and $V$ and $N$ form the new phase space, but the dynamical picture is essentially the same. The electromagnetic force acts along $V$ just as gravity acts along $y$. The shape of the mountain and the loop-de-loop act to couple the y and x dimensions to each other. In the neuron, nature has already decided how $V$ and $N$ are coupled, but the relationship is much more complicated than the gravitational example.
This property of excitability is what gives neurons the ability to transmit information to each other, so it is important to dynamical neuron networks; but the Morris–Lecar model can also operate in another parameter regime where it exhibits oscillatory behavior, forever cycling around in phase space. This behavior is comparable to that of pacemaker cells in the heart, which do not rely on excitability but may excite neurons that do.
Global neurodynamics
The global dynamics of a network of neurons depend on at least the first three of the following four attributes:
individual neuron dynamics (primarily, their thresholds or excitability)
information transfer between neurons (generally either synapses or gap junctions)
network topology
external forces (such as thermodynamic gradients)
There are many combinations of neural networks that can be modeled between the choices of these four attributes that can result in a versatile array of global dynamics.
Biological neural network modeling
Biological neural networks can be modeled by choosing an appropriate biological neuron model to describe the physiology of the organism and appropriate coupling terms to describe the physical interactions between neurons (forming the network). Other global considerations must be taken into consideration, such as the initial conditions and parameters of each neuron.
In terms of nonlinear dynamics, this requires evolving the state of the system through its coupled functions. Following from the Morris–Lecar example, the alteration to the voltage equation would be:

$$C\frac{dV_i}{dt} = I - g_L(V_i - V_L) - g_{Ca}\,M_{ss}(V_i)\,(V_i - V_{Ca}) - g_K\,N_i\,(V_i - V_K) + D_i(V_1, \ldots, V_n)$$

where $V$ now has the subscript $i$, indicating that it is the $i$th neuron in the network, and a coupling function $D$ has been added to the first equation. The coupling function, $D$, is chosen based on the particular network being modeled. The two major candidates are synaptic junctions and gap junctions.
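As a minimal illustration of such coupling, the sketch below uses the simpler FitzHugh–Nagumo equations (mentioned above) for each neuron, with a diffusive, gap-junction-style coupling term D·Σ(Vj − Vi) added to each voltage equation; all parameter values and initial states are illustrative assumptions:

def step(Vs, Ws, dt=0.01, I=0.5, a=0.7, b=0.8, eps=0.08, D=0.05):
    """One Euler step for n diffusively coupled FitzHugh–Nagumo neurons."""
    n = len(Vs)
    dV = [Vs[i] - Vs[i] ** 3 / 3 - Ws[i] + I
          + D * sum(Vs[j] - Vs[i] for j in range(n))   # coupling term D_i
          for i in range(n)]
    dW = [eps * (Vs[i] + a - b * Ws[i]) for i in range(n)]
    return ([v + dt * d for v, d in zip(Vs, dV)],
            [w + dt * d for w, d in zip(Ws, dW)])

Vs, Ws = [-1.0, 0.5, 1.2], [0.0, 0.1, -0.2]   # three neurons, arbitrary states
for _ in range(50000):
    Vs, Ws = step(Vs, Ws)
print(Vs)   # the diffusive coupling pulls the three voltages toward synchrony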
Attractor network
Point attractors – memory, pattern completion, categorizing, noise reduction
Line attractors – neural integration: oculomotor control
Ring attractors – neural integration: spatial orientation
Plane attractors – neural integration: (higher dimension of oculomotor control)
Cyclic attractors – central pattern generators
Chaotic attractors – recognition of odors; chaos is often mistaken for random noise.
Please see Scholarpedia's page for a formal review of attractor networks.
Beyond neurons
While neurons play a lead role in brain dynamics, it is becoming more clear to neuroscientists that neuron behavior is highly dependent on their environment. But the environment is not a simple background, and there is a lot happening right outside of the neuron membrane, in the extracellular space. Neurons share this space with glial cells and the extracellular space itself may contain several agents of interaction with the neurons.
Glia
Glia, once considered a mere support system for neurons, have been found to serve a significant role in the brain. The subject of how the interaction between neuron and glia have an influence on neuron excitability is a question of dynamics.
Neurochemistry
Like any other cell, neurons operate on an undoubtedly complex set of molecular reactions. Each cell is a tiny community of molecular machinery (organelles) working in tandem and encased in a lipid membrane. These organelles communicate largely via chemicals like G-proteins and neurotransmitters, consuming ATP for energy. Such chemical complexity is of interest to physiological studies of the neuron.
Neuromodulation
Neurons in the brain live in an extracellular fluid, capable of propagating both chemical and physical energy alike through reaction-diffusion and bond manipulation that leads to thermal gradients. Volume transmission has been associated with thermal gradients caused by biological reactions in the brain. Such complex transmission has been associated with migraines.
Cognitive neuroscience
The computational approaches to theoretical neuroscience often employ artificial neural networks that simplify the dynamics of single neurons in favor of examining more global dynamics. While neural networks are often associated with artificial intelligence, they have also been productive in the cognitive sciences. Artificial neural networks use simple neuron models, but their global dynamics are capable of exhibiting both Hopfield and Attractor-like network dynamics.
Hopfield network
Lyapunov functions provide a nonlinear technique for analyzing the stability of the zero solutions of a system of differential equations. Hopfield networks were specifically designed such that their underlying dynamics could be described by a Lyapunov function. Stability in biological systems is called homeostasis. Of particular interest to the cognitive sciences, Hopfield networks have been implicated in the role of associative memory (memory triggered by cues).
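A toy version of this associative recall fits in a few lines. The sketch below stores a single ±1 pattern with a Hebbian weight rule and recovers it from a corrupted cue by asynchronous updates, which never increase the network's Lyapunov (energy) function; the pattern, corruption, and update count are arbitrary illustrative choices:

import random

pattern = [1, -1, 1, -1, 1, -1, 1, -1]            # the stored "memory"
n = len(pattern)
# Hebbian weights: symmetric with zero diagonal, as Hopfield dynamics require.
W = [[pattern[i] * pattern[j] / n if i != j else 0.0 for j in range(n)]
     for i in range(n)]

state = pattern[:]                                # corrupted cue: flip two bits
state[0], state[3] = -state[0], -state[3]

for _ in range(100):                              # asynchronous updates
    i = random.randrange(n)
    h = sum(W[i][j] * state[j] for j in range(n))
    state[i] = 1 if h >= 0 else -1

print(state == pattern)   # True (with overwhelming probability): the cue recalled the memory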
See also
Computational neuroscience
Dynamicism
Mathematical biology
Nonlinear systems
Randomness
Neural oscillation
References
Branches of neuroscience
Dynamical systems
Mathematical and theoretical biology | Dynamical neuroscience | Physics,Mathematics | 2,176 |
6,782,041 | https://en.wikipedia.org/wiki/Thunder%20Board | The Thunder Board was an 8-bit mono personal computer integrated circuit sound card from Media Vision, that had Sound Blaster compatibility at a reduced price. It was widely advertised as “proudly made in the USA”; possibly a reference to the Sound Blaster, manufactured by the competing Singapore-based Creative Technologies. Emulates SB 1.0 and 1.5
Other features included:
8 Bit mono record and playback of .VOC files
Yamaha YM3812 OPL2 FM Synth
2 Watt output
Joystick Port
8 Bit ISA bus
Volume Control
Powered Output Jack
Microphone Input Jack
Sound cards | Thunder Board | Technology | 119 |
52,104,771 | https://en.wikipedia.org/wiki/Flemish%20Government%20Architect | The role of Flemish Government Architect (Vlaams Bouwmeester in Dutch) was established in 1998 under Minister-President of Flanders Van den Brande to develop Architectural Design Policy in Flanders, Belgium.
Function
The Flemish Government Architect is commissioned to develop a long-term spatial vision, in consultation with the different administrations and with external stakeholders, and to contribute to the preparation and implementation of the architecture policy of the Flemish government. The goal of this independent body within the government is to help create a high-quality architectural living environment in Flanders; the role was inspired by the Chief Government Architect of the Netherlands.
One of the main tasks of the Flemish Government Architect is selecting designers for public contracts. The Open Call is a list of public projects published twice per year to which designers can apply.
Overview of Flemish Government Architects
2020-present Erik Wieërs
2016 - 2020 Leo Van Broeck
2015 - 2016 (acting) Stefan Devoldere
2010 - 2015 Peter Swinnen
2005 - 2010 Marcel Smets
1999 - 2005 Bob Van Reeth
Awards
Prijs Wivina Demeester (previous: Prijs Bouwheer / Prijs Bouwmeester)
External links
Flemish Government Architect
Brussels Bouwmeester
Charleroi Bouwmeester
Antwerpse Stadsbouwmeester
Stadsbouwmeester Gent
Rijksbouwmeester Nederland
1998 establishments in Belgium
Architecture in Belgium
Architecture occupations | Flemish Government Architect | Engineering | 280 |
38,397,766 | https://en.wikipedia.org/wiki/HD%2050235 | HD 50235 is a class K5III (orange giant) star located approximately 811 light years away, in the constellation Puppis. Its apparent magnitude is 4.99. HD 50235 made its closest approach to the Sun 7.8 million years ago, at the distance of 137 light years, during which it had an apparent magnitude of 1.13.
References
Puppis
K-type giants
CD-34 3140
032855
2549
050235 | HD 50235 | Astronomy | 97 |
13,363,621 | https://en.wikipedia.org/wiki/Lundquist%20number | In plasma physics, the Lundquist number (denoted by ) is a dimensionless ratio which compares the timescale of an Alfvén wave crossing to the timescale of resistive diffusion. It is a special case of the magnetic Reynolds number when the Alfvén velocity is the typical velocity scale of the system, and is given by
where is the typical length scale of the system, is the magnetic diffusivity and is the Alfvén velocity of the plasma.
High Lundquist numbers indicate highly conducting plasmas, while low Lundquist numbers indicate more resistive plasmas. Laboratory plasma experiments typically have Lundquist numbers between $10^2$ and $10^8$, while in astrophysical situations the Lundquist number can be greater than $10^{20}$. Considerations of the Lundquist number are especially important in magnetic reconnection.
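As a worked example, the sketch below computes S from illustrative, order-of-magnitude solar-corona values; every input number here is an assumption for demonstration, not a measurement:

import math

mu0 = 4e-7 * math.pi   # vacuum permeability [H/m]
B = 1e-2               # magnetic field [T]           (assumed)
rho = 1e-12            # plasma mass density [kg/m^3] (assumed)
L = 1e7                # length scale [m]             (assumed)
eta = 1.0              # magnetic diffusivity [m^2/s] (assumed)

v_A = B / math.sqrt(mu0 * rho)   # Alfvén velocity
S = L * v_A / eta                # dimensionless Lundquist number
print(f"v_A = {v_A:.3g} m/s, S = {S:.3g}")   # a very large, astrophysical-scale S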
See also
Magnetic Prandtl number
Péclet number
Stuart number
References
Plasma parameters | Lundquist number | Physics | 172 |
4,997,547 | https://en.wikipedia.org/wiki/Cope%27s%20gray%20treefrog | Cope's gray treefrog (Dryophytes chrysoscelis) is a species of treefrog found in the United States and Canada. It is almost indistinguishable from the gray treefrog (Dryophytes versicolor), and shares much of its geographic range. Both species are variable in color, mottled gray to gray-green, resembling the bark of trees. These are treefrogs of woodland habitats, though they will sometimes travel into more open areas to reach a breeding pond. The only readily noticeable difference between the two species is the mating call — Cope's has a faster-paced and slightly higher-pitched call than D. versicolor. In addition, D. chrysoscelis is reported to be slightly smaller, more arboreal, and more tolerant of dry conditions than D. versicolor.
Taxonomy
Edward Drinker Cope described the species in 1880. The specific name, chrysoscelis, is from Greek chrysos, gold, and scelis, leg.
Microscopic inspection of the chromosomes of D. chrysoscelis and D. versicolor reveals differences in chromosome number. D. chrysoscelis is diploid, having two complete sets of chromosomes, the usual condition in vertebrates. D. versicolor is tetraploid, having double the usual number of chromosomes. Generally, D. versicolor is believed to have evolved from D. chrysoscelis in the last major ice age, when areas of extremely low temperatures divided populations. Despite currently sharing habitat, the two species generally do not interbreed.
D. chrysoscelis is known to be largely intersterile with D. versicolor but there may be a limited amount of interfertility in sympatry. To enforce speciation there may be unknown mechanisms of reinforcement deployed between these species and further research may be fruitful.
Description
Both D. chrysoscelis and D. versicolor have black-marked bright orange to yellow patches on their hind legs, which distinguishes them from other treefrogs, such as D. avivoca. The bright-yellow pattern is normally hidden, but exposed when the frog leaps. This "flash pattern" likely serves to startle a predator as the frog makes its escape. The pattern and coloration of the skin vary with the environment in which the frog is found. Similar hidden bright patterns are common in various Lepidoptera, for instance moths of the genus Catocala. Both species of gray treefrogs are slightly sexually dimorphic. Males have black or gray throats in the breeding season, while the throats of the females are lighter. Younger frogs are often greenish through the breeding seasons; as they age, they lose the greenish color and shift toward the distinctive gray.
Skin secretions from this species may be irritating or toxic to the mouth, eyes, and other mucous membranes.
Distribution and habitat
The range of D. chrysoscelis is more southerly; it is apparently the species found in the lower-elevation Piedmont and Coastal Plain of Virginia and the Carolinas. In those areas, D. versicolor may be present only in the Appalachians. While this species is most abundant in the southeast, it can be found as far north as Manitoba. D. chrysoscelis has also been observed to practice freeze tolerance in a lab setting, which could help it survive in cold climates. These frogs are among the very few that can mobilize glycerol as a cryoprotectant. Glycerol production is low when the temperature is warmer, but when it gets colder, glycerol is rapidly produced. In studies of ice content in overwintering frogs, 40–50% of total body water was frozen. Studies suggest that Cope's gray treefrog could be relatively resilient to climate change in the long term, though populations may suffer short-term setbacks, so its distribution is expected to change little. The frogs prefer to perch on pipes located along the edges of wetlands and close to trees, which suggests that the terrestrial habitat surrounding wetlands is an important component of the species' habitat. The bird-voiced treefrog, D. avivoca, is similar to D. chrysoscelis and D. versicolor, but is smaller (25–50 mm in length vs 32–62 mm for the gray treefrog).
Behavior
In the Southeastern United States, Cope's gray treefrog breeds and calls from May to August. Isolated males start calling from woodland areas during warm weather a week or more before migrating to temporary ponds to breed. There they form aggregations (choruses) and call together. Chorusing is most frequent at night, but individuals often call during daytime in response to thunder or other loud noises. These individual calls are produced at high sound pressure levels (SPLs) reaching 85 to 90 dB, and sustained noise levels in choruses commonly range between 70 and 80 dB SPL. Female treefrogs have been found to discriminate between calls that differ by as little as a few decibels. Females prefer calls with average frequencies over calls with frequencies that were 2 or 3 semitones lower than the population mean. Eggs are laid in batches of 10 to 40 on the surfaces of shallow ponds and other small bodies of water. These temporary bodies of water usually lack fish, and females preferentially lay their eggs in water bodies that lack fish or other predatory vertebrates and have lower desiccation risk. Eggs hatch in about five days and metamorphosis takes place at about 45–65 days.
The diet of Cope's gray treefrog primarily consists of invertebrates such as moths, mites, spiders, plant lice, and harvestmen. Snails have also been observed as a food source. Like most frogs, Dryophytes chrysoscelis is an opportunistic feeder and may also eat smaller frogs, including other treefrogs. Once the breeding season is over, Cope's gray treefrogs forage continuously until winter.
Cope's gray treefrog exhibits freeze tolerance. Dryophytes chrysoscelis is capable of surviving temperatures as low as . They can withstand the physiological challenges of corporeal freezing, by accumulating cryoprotective compounds of hepatic origin, including glycerol, urea, and glucose.
References
Further reading
External links
Hyla chrysoscelis. Amphibiaweb. Accessed 2 June 2013.
Hyla chrysoscelis. NatureServe. Accessed 2 June 2013.
Dryophytes
Cryozoa
Tree Frog, Grey
Articles containing video clips
Extant Pleistocene first appearances
Amphibians described in 1880
Taxa named by Edward Drinker Cope | Cope's gray treefrog | Chemistry | 1,438 |
28,541,067 | https://en.wikipedia.org/wiki/Vapochromism | In chemistry, Vapochromism strongly overlaps with solvatochromism since vapochromic systems are ones in which dyes change colour in response to the vapour of an organic compound or gas. Vapochromic devices are the optical branch of electronic noses. The main applications are in sensors for detecting volatile organic compounds (VOCs) in a variety of environments, including industrial, domestic and medical areas.
An example of such a device is an array consisting of a metalloporphyrin (a Lewis acid), a pH indicator dye and a solvatochromic dye. The array is scanned with a flat-bed scanner, and the results are compared with a library of known VOCs. Vapochromic materials are sometimes Pt or Au complexes, which undergo distinct color changes when exposed to VOCs.
References
Chromism
Spectroscopy | Vapochromism | Physics,Chemistry,Materials_science,Astronomy,Engineering | 177 |
57,307,291 | https://en.wikipedia.org/wiki/Benny%20Moldovanu | Benny Moldovanu (born April 11, 1962) is a German economist who currently holds the Chair of Economic Theory II at the University of Bonn. His research focuses on applied game theory, auction theory, mechanism design, contests and matching theory, and voting theory. In 2004, Moldovanu was awarded the Gossen Prize for his contributions to auction theory and mechanism design.
Biography
Benny Moldovanu earned a BSc and MSc in mathematics from the Hebrew University of Jerusalem in 1986 and 1989, respectively, the latter under the supervision of Bezalel Peleg. He then obtained in 1991 a PhD in economics from the University of Bonn, with future Nobel Memorial Prize winner Reinhard Selten as advisor and Avner Shaked as co-advisor, with thesis "Game theory, economics, social and behavioral sciences". He went on to earn his habilitation from the same university in 1995. Having worked as assistant professor of economics at the University of Bonn after his PhD (1991–1995), he then became full professor at the University of Mannheim (1995–2002) before returning to the University of Bonn in 2002, where he has worked ever since. At Bonn, he has been the Co-Director and later Academic Director of the Bonn Graduate School of Economics (2006–2013) as well as Co-Director of the Hausdorff Center for Mathematics (2006–2013), where he today leads the research area on mechanism design and game theory. Moreover, at Bonn, Moldovanu is currently Director of the Institute of Microeconomics (since 2012) as well as of the Reinhard Selten Institute for Research in Economics (since 2017). Throughout his professional career, Moldovanu has held visiting appointments at the University of Michigan, Ann Arbor, Northwestern University, University College London, Yale University, Tel Aviv University, and the Hebrew University of Jerusalem. In terms of professional activities, he has been a member of the Councils of the European Economic Association and Game Theory Society, is a research fellow at the Centre for Economic Policy Research (CEPR), and has chaired the Scientific Committees of the Econometric Society and German Economic Association. Finally, he has performed editorial duties for Econometrica, Journal of the European Economic Association, Games and Economic Behavior, Journal of Economic Theory, and Economic Policy.
Research
Benny Moldovanu's research focuses on applied game theory, auction theory, mechanism design, contests and matching theory, and voting theory. In his research, he has particularly often collaborated with Philippe Jehiel (Paris School of Economics). According to IDEAS/RePEc, he belongs to the top 3% of economists in terms of research output. In particular, his research has been recognized with the Max Planck Research Prize (2001) and Gossen Prize (2004) as well as fellowships of the Econometric Society (2004), European Economic Association (2009), and Game Theory Society (2017).
Research on auctions
One major area of Moldovanu's research concerns auction theory, in particular the optimal design of auctions if participation in it subjects (some) participants to externalities. For example, in a study of economic interactions under identity-dependent, asymmetric negative externalities with Philippe Jehiel, Moldovanu finds that some agents' best strategy is to not participate in the market in order to minimize externalities, which may e.g. explain certain features of preemptive patenting. Similarly, Moldovanu, Jehiel and Ennio Stacchetti find that for such economic transactions, e.g. the sale of nuclear weapon, the outside options and participations constraints in a revenue-maximizing auction are endogenous, surplus can be extracted from non-acquiring participants, and the seller may be better off by not selling at all (while obtaining some payments) if externalities are much larger than valuations. Later, Moldovanu, Jehiel and Stacchetti have provided a general theory for the design of incentive compatible mechanisms in auctions with buyer-specific externalities. Moreover, Moldovanu and Jehiel have shown that multi-object auctions cannot be reduced to one-dimensional models without loss of generality because, in the presence of informational and allocative externalities, Bayes-Nash incentive compatible mechanisms exist only if private and social rates of information substitution are congruent, which in turn depends on whether signals are mono- or multi-dimensional. Finally, together with Jehiel, Moritz Meyer-ter-Vehn and William R. Zame, Moldovanu has explored the limits of ex post implementation, which requires each agents' strategy to be optimal for every possible realization of other agents' types.
Research on contests and matching
Another major area of Moldovanu's research regards the design of contests and assortative matching. Studying the optimal allocation of prizes in contests with multiple, nonidentical prizes, private information about participants' cost of effort and prize allocation based on effort together with Aner Sela, Moldovanu finds that the allocation of the prize sum which maximizes expected total effort depends on participants' cost functions: if they are convex, several positive prizes may be optimal, otherwise allocating the entire prize sum to a single "first" prize is optimal. In another study with Sela on the architecture of contests, Moldovanu shows that the optimal split of contest participants among tournament-style sub-contests depends on the type of effort maximized and (again) on participants' effort cost functions: if they are linear, then expected total effort is maximized through a single static contest and expected highest effort is maximized through a two-stage contest with two sub-contests (assuming sufficient participants); but if they are convex, effort may be maximized through several sub-contests or the award of prizes to all finalists. If, however, contestants care about their relative positioning into status strata, Moldovanu, Sela and Xianwen Shi find that the optimal partition in status categories depends on the distribution of ability among contests, though the top status category always only contains a single winner; in particular, assuming a concave distribution, a partition with only two strata would already be optimal. Finally, together with Sela and Heidrun Hoppe, Moldovanu has explored the assortative matching of a finite number of agents in two-sided markets under incomplete information on the basis of costly signals.
Selected publications
Gershkov, A., Moldovanu, A. (2014). Dynamic Allocation and Pricing: A Mechanism Design Approach. Cambridge, MA: MIT Press.
References
1962 births
Living people
Hebrew University of Jerusalem alumni
University of Bonn alumni
Game theorists
German economists
Academic staff of the University of Mannheim
Academic staff of the University of Bonn
Fellows of the Econometric Society
Fellows of the European Economic Association | Benny Moldovanu | Mathematics | 1,373 |
70,985,868 | https://en.wikipedia.org/wiki/Pacman%20%28security%20vulnerability%29 | Pacman is a side-channel vulnerability in certain ARM CPUs that was made public by Massachusetts Institute of Technology security researchers on June 10, 2021. It affects the pointer authentication (PAC) mechanism in many ARMv8.3 chips, including Apple's M1 CPU. Pacman creates an 'oracle' that lets an attacker guess a pointer's PAC signature without crashing the program if the guess is wrong. PAC signatures are typically less than 16 bits wide, so an attacker can use the oracle to guess the signature in 216 tries or fewer. It is unfixable without hardware changes because it is caused by the inherent design of CPU caches and branch predictors.
Impact and response
Pacman alone is not an exploitable vulnerability. PAC is a 'last line of defense' that detects when software running on the CPU is being exploited by a memory corruption attack and reacts by crashing the software before the attacker completes their exploit. Apple stated that they did not believe the vulnerability posed a serious threat to users because it requires specific conditions to be exploited.
Background
Pacman is similar to Spectre, abusing two key CPU optimizations to create a PAC oracle: branch prediction and memory caching.
PAC (Pointer Authentication Codes)
PAC is a security feature in ARMv8.3-based processors that mitigates return-oriented programming by adding a cryptographic signature to the upper bits of pointers. Compilers emit PAC 'sign' instructions before storing pointers to memory, and 'verify' instructions after loading pointers from memory. If an attacker tampers with the pointer, the signature becomes invalid and the program crashes when the pointer is next accessed. PAC signatures are not cryptographically secure because they need to be small enough to fit into the unused upper bits of pointers. Therefore, if an attacker can reliably test whether a guessed signature is correct without crashing the program, they can brute-force the correct signature.
Branch prediction
Modern CPUs employ branch prediction to reduce the number of pipeline stalls caused by conditional branches. Branch prediction uses heuristics to guess the direction of a conditional branch and begin executing the predicted path – while the condition is still being evaluated. Instructions executed during this period are 'speculative', and the CPU holds their results in the re-order buffer (ROB) without writing them back to memory. Once the CPU finishes evaluating the condition and determines that its initial prediction was correct, it 'retires' the instructions in the ROB by writing their changes back to memory and propagating any exceptions produced. If the speculation was incorrect, the CPU flushes the ROB and resumes execution at the correct location.
Memory caching
CPU caches accelerate memory accesses by caching frequently accessed memory on the CPU die. This lowers the cost of memory accesses from hundreds of cycles to fewer than 10, by reducing the amount of time spent communicating with the physically separate northbridge and RAM chip. When an uncached address is loaded, the CPU immediately stashes the loaded data into the cache, evicting another entry if the cache is full. These changes are not held in the ROB because the presence or absence of an address in the cache is considered 'unobservable', so stashes and evictions that occur during speculative execution persist after the ROB has been flushed, even if that path was not ultimately taken.
Mechanism
Principle
Pacman tricks the CPU into checking the validity of a guessed PAC signature within a mispredicted branch so that exceptions produced by potentially incorrect guesses are discarded during the ROB flush. If the guess was incorrect, the exception thrown during speculative execution forces the CPU to stall, preventing further instructions from being speculatively executed. A Pacman gadget is a sequence of instructions of the following form:
if (condition):                                  # branch the attacker trains to be mispredicted
    ptr = verify(attacker_tamperable_pointer)    # PAC check; faults if the signature is wrong
    load(ptr)                                    # reached only speculatively; leaves a cache trace
Sequences of this form are common and can be found in most compiled programs supporting PAC. When the CPU mispredicts the condition, it begins speculatively executing the PAC verification instruction. If the attacker's guess was correct, the verification instruction succeeds and the CPU proceeds to load the address from memory; if the guess was incorrect, the verification instruction throws an exception. This exception is held in the ROB and then discarded once the CPU finds the condition to be false. The attacker then uses a hardware side-channel to determine whether the load instruction was executed, therefore determining whether their guessed signature was correct.
Attack
Ravichandran et al. demonstrate that the cache-based Prime and Probe technique can be used to determine whether the load instruction executed. The attacker determines if the load instruction in a Pacman gadget was executed by filling the cache with data, calling the gadget, and checking the latency of accessing the previously loaded addresses. If one of the addresses takes longer than before, it was evicted by the gadget and the attacker knows that their guess was correct. The attacker may then use this forged pointer elsewhere in the program to hijack it.
1. Train
The attacker calls the Pacman gadget many times with condition = true. The branch predictor is now trained to guess that the condition is true on subsequent calls. During this period, attacker_tamperable_pointer is its original value with a valid PAC signature.
2. Prime
The attacker fills the L1 cache by loading from addresses they control. The contents of these memory locations do not matter – the attacker just needs to be able to precisely measure their access latency.
3. Evict
The attacker overwrites attacker_tamperable_pointer with their target pointer and a guess for the target pointer's PAC signature. They then call the Pacman gadget with condition = false, causing the branch to be mispredicted. The CPU speculatively executes the contents of the if statement before eventually flushing the pipeline and rolling back.
During this speculative execution, two things can occur:
The speculative execution proceeds to the load() instruction. This means that the verify() instruction did not fault, implying the guessed signature was correct. The load() instruction will then load the target pointer into cache, evicting an address in the attacker's eviction set.
Speculative execution faults on the verify instruction, preventing execution of the load(). This implies the guessed signature was wrong. Since this was speculatively executed within a mispredicted branch, the fault is not propagated to the program.
4. Probe
The attacker measures the access time for each element in their eviction set. If one of the elements was evicted (i.e., the access is slow) then the guess was correct. If none of the elements were evicted (i.e., all accesses are fast) then the guess was wrong. This process can be repeated with different guesses until the correct signature is found.
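Abstracting away the hardware, the attack reduces to a brute-force loop over at most 2^16 candidate signatures, querying the oracle once per guess. The Python sketch below is purely illustrative: the oracle is simulated by a comparison against a hidden value, standing in for the train/prime/evict/probe timing measurement described above.

import random

PAC_BITS = 16                            # illustrative width; real PAC width varies by config
_SECRET = random.getrandbits(PAC_BITS)   # stands in for the victim's true signature

def pacman_oracle(guess):
    """Simulates one train/prime/evict/probe round: True iff the speculative
    load ran, i.e. the guessed signature verified without faulting."""
    return guess == _SECRET              # in the real attack: a cache-latency test

def brute_force_pac():
    for guess in range(2 ** PAC_BITS):   # at most 65,536 oracle queries
        if pacman_oracle(guess):
            return guess
    return None

assert brute_force_pac() == _SECRET      # the forged signature is recovered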
Notes
References
See also
Side-channel attack
External links
Transient execution CPU vulnerabilities
2022 in computing
ARM architecture | Pacman (security vulnerability) | Technology | 1,424 |
3,124,369 | https://en.wikipedia.org/wiki/Simple%20Features | Simple Features (officially Simple Feature Access) is a set of standards that specify a common storage and access model of geographic features made of mostly two-dimensional geometries (point, line, polygon, multi-point, multi-line, etc.) used by geographic databases and geographic information systems.
It is formalized by both the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO).
The ISO 19125 standard comes in two parts. Part 1, ISO 19125-1 (SFA-CA for "common architecture"), defines a model for two-dimensional simple features, with linear interpolation between vertices, defined in a hierarchy of classes; this part also defines representation of geometry in text and binary forms. Part 2 of the standard, ISO 19125-2 (SFA-SQL), defines a "SQL/MM" language binding API for SQL under the prefix "ST_". The openly accessible OGC standards additionally cover APIs for CORBA and OLE/COM, although these have lagged behind the SQL API and are not standardized by ISO. There are also adaptations to other languages covered below.
The ISO/IEC 13249-3 SQL/MM Spatial extends the Simple Features data model, originally based on straight-line segments, adding circular interpolations (e.g. circular arcs) and other features like coordinate transformations and methods for validating geometries, as well as Geography Markup Language support.
Details
Part 1
The geometries are associated with spatial reference systems. The standard also specifies attributes, methods and assertions for the geometries, in the object-oriented style. In general, a 2D geometry is simple if it contains no self-intersection. The specification defines DE-9IM spatial predicates and several spatial operators that can be used to generate new geometries from existing geometries.
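The Part 1 class hierarchy and operations map directly onto implementations such as the third-party Python package Shapely, which realizes the Simple Features geometry model on top of the GEOS library; the sketch below assumes the Shapely 2.x API:

from shapely.geometry import Point, Polygon

square = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])  # simple: no self-intersection
pt = Point(1, 1)

print(pt.is_empty)               # Part 1's isEmpty(): False
print(square.contains(pt))       # a DE-9IM spatial predicate: True
disc = pt.buffer(0.5)            # spatial operator producing a new geometry
print(square.intersection(disc).area)   # ~pi * 0.5**2: another derived geometry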
Part 2
Part 2 is a SQL binding to Part 1, providing a translation of the interface to non-object-oriented environments. For example, instead of a someGeometryObject.isEmpty() as in Part 1, SQL/MM uses a ST_IsEmpty(...) function in SQL.
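The same check through the SQL binding looks as follows, here driven from Python against SQLite with the SpatiaLite extension; the extension module name ("mod_spatialite") and its availability on a given system are assumptions:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)         # requires a Python build that allows extensions
conn.load_extension("mod_spatialite")    # requires SpatiaLite to be installed

row = conn.execute(
    "SELECT ST_IsEmpty(ST_GeomFromText('POINT(1 1)'))"
).fetchone()
print(row[0])   # 0 -- the SQL/MM form of Part 1's isEmpty()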
Spatial
The spatial extension adds the datatypes "Circularstring", "CompoundCurve", "CurvePolygon", "PolyhedralSurface", the last of which is also included into the OGC standard. It also defines the SQL/MM versions of these types and operations on them.
Implementations
Direct implementations of Part 2 (SQL/MM) include:
MySQL Spatial Extensions. Up to MySQL 5.5, all of the functions that calculate relations between geometries were implemented using bounding boxes rather than the actual geometries. Starting from version 5.6, MySQL offers support for precise object shapes.
MonetDB/GIS extension for MonetDB.
PostGIS extension for PostgreSQL, also supporting some of the SQL/MM Spatial features.
SpatiaLite extension for SQLite
Oracle Spatial, which also implements some of the advanced features from SQL/MM Spatial.
IBM Db2 Spatial Extender and IBM Informix Spatial DataBlade.
Microsoft SQL Server since version 2008, with significant additions in the 2012 version.
SAP Sybase IQ.
SAP HANA as of 1.0 SPS6.
Adaptations include:
Implementations of the CORBA and OLE/COM interfaces detailed above are mainly produced by commercial vendors maintaining legacy technology.
R: The sf package implements Simple Features and contains functions that bind to GDAL for reading and writing data, to GEOS for geometrical operations, and to PROJ for projection conversions and datum transformations.
The GDAL library implements the Simple Features data model in its OGR component.
The Java-based deegree framework implements SFA (part 1) and various other OGC standards.
The Rust library geo_types implements geometry primitives that adhere to the simple feature access standards.
GeoSPARQL is an OGC standard that is intended to allow geospatially-linked data representation and querying based on RDF and SPARQL by defining an ontology for geospatial reasoning supporting a small Simple Features (as well as DE-9IM and RCC8) RDFS/OWL vocabulary for GML and WKT literals.
As of 2012, various NoSQL databases had very limited support for "anything more complex than a bounding box or proximity search".
See also
DE-9IM
Well-known text
Well-known binary
References
External links
Simple Features SWG
Standard documents
ISO/IEC:
ISO 19125-1:2004 Geographic information -- Simple feature access -- Part 1: Common architecture
ISO 19125-2:2004 Geographic information -- Simple feature access -- Part 2: SQL option
OpenGIS
OpenGIS Implementation Specification for Geographic information - Simple feature access - Part 1: Common architecture (05-126, 06-103r3, 06-103r4), current version 1.2.1
OpenGIS Simple Feature Access - Part 2: SQL Option (99-054, 05-134, 06-104r3, 06-104r4), current version 1.2.1, formerly OpenGIS Simple Features [Implementation Specification] for SQL
OpenGIS Simple Features Implementation Specification for CORBA (99-054), current version 1.0
OpenGIS Simple Features Implementation Specification for OLE/COM (99-050), current version 1.1
Geographic information systems
Open Geospatial Consortium
ISO/TC 211
Spatial database management systems | Simple Features | Technology | 1,150 |
35,199,827 | https://en.wikipedia.org/wiki/Heegner%27s%20lemma | In mathematics, Heegner's lemma is a lemma used by Kurt Heegner in his paper on the class number problem. His lemma states that if
is a curve over a field with a4 not a square, then it has a solution if it has a solution in an extension of odd degree.
References
Diophantine equations
Lemmas in number theory | Heegner's lemma | Mathematics | 79 |
9,449,652 | https://en.wikipedia.org/wiki/When%20Engineering%20Fails | When Engineering Fails is a 1998 film written and presented by Henry Petroski. It examines the causes of major disasters, including the explosion of the Space Shuttle Challenger, and compares the risks of computer-assisted design with those of traditional engineering methods. The original title of the film was To Engineer Is Human, the title of Petroski's non-fiction book about design failures.
References
1998 films
Documentary films about technology
1998 documentary films
American documentary films
Mechanical failure
1990s American films | When Engineering Fails | Materials_science,Engineering | 97 |
36,633,800 | https://en.wikipedia.org/wiki/Robbins%27%20theorem | In graph theory, Robbins' theorem, named after , states that the graphs that have strong orientations are exactly the 2-edge-connected graphs. That is, it is possible to choose a direction for each edge of an undirected graph , turning it into a directed graph that has a path from every vertex to every other vertex, if and only if is connected and has no bridge.
Orientable graphs
Robbins' characterization of the graphs with strong orientations may be proven using ear decomposition, a tool introduced by Robbins for this task.
If a graph has a bridge, then it cannot be strongly orientable, for no matter which orientation is chosen for the bridge there will be no path from one of the two endpoints of the bridge to the other.
In the other direction, it is necessary to show that every connected bridgeless graph can be strongly oriented. As Robbins proved, every such graph has a partition into a sequence of subgraphs called "ears", in which the first subgraph in the sequence is a cycle and each subsequent subgraph is a path, with the two path endpoints both belonging to earlier ears in the sequence. (The two path endpoints may be equal, in which case the subgraph is a cycle.) Orienting the edges within each ear so that it forms a directed cycle or a directed path leads to a strongly connected orientation of the overall graph.
Related results
An extension of Robbins' theorem to mixed graphs by Boesch and Tindell shows that, if G is a graph in which some edges may be directed and others undirected, and G contains a path respecting the edge orientations from every vertex to every other vertex, then any undirected edge of G that is not a bridge may be made directed without changing the connectivity of G. In particular, a bridgeless undirected graph may be made into a strongly connected directed graph by a greedy algorithm that directs edges one at a time while preserving the existence of paths between every pair of vertices; it is impossible for such an algorithm to get stuck in a situation in which no additional orientation decisions can be made.
Algorithms and complexity
A strong orientation of a given bridgeless undirected graph may be found in linear time by performing a depth-first search of the graph, orienting all edges in the depth-first search tree away from the tree root, and orienting all the remaining edges (which must necessarily connect an ancestor and a descendant in the depth-first search tree) from the descendant to the ancestor. Although this algorithm is not suitable for parallel computers, due to the difficulty of performing depth-first search on them, alternative algorithms are available that solve the problem efficiently in the parallel model. Parallel algorithms are also known for finding strongly connected orientations of mixed graphs.
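The procedure just described is simple enough to sketch directly. Below is a minimal Python sketch, assuming a simple, connected, bridgeless graph given as an adjacency-list dictionary; the function name and graph representation are illustrative choices, not from any standard library.

```python
def strong_orientation(graph):
    """Orient a simple, connected, bridgeless undirected graph.

    graph: dict mapping each vertex to a list of its neighbours.
    Returns a set of directed edges (u, v) giving a strong orientation.
    """
    root = next(iter(graph))
    order = {root: 0}            # discovery order, doubles as visited set
    parent = {root: None}
    edges = set()
    stack = [(root, iter(graph[root]))]
    while stack:
        u, neighbours = stack[-1]
        descended = False
        for v in neighbours:
            if v not in order:                       # tree edge: away from root
                order[v] = len(order)
                parent[v] = u
                edges.add((u, v))
                stack.append((v, iter(graph[v])))
                descended = True
                break
            if order[v] < order[u] and v != parent[u]:
                edges.add((u, v))                    # back edge: descendant to ancestor
        if not descended:
            stack.pop()
    return edges

triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(sorted(strong_orientation(triangle)))   # a directed 3-cycle
```

On the triangle this returns a directed 3-cycle, which is strongly connected as the theorem promises.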
Applications
Robbins originally motivated his work by an application to the design of one-way streets in cities. Another application arises in structural rigidity, in the theory of grid bracing. This theory concerns the problem of making a square grid, constructed from rigid rods attached at flexible joints, rigid by adding more rods or wires as cross bracing on the diagonals of the grid. A set of added rods makes the grid rigid if an associated undirected graph is connected, and is doubly braced (remaining rigid if any edge is removed) if in addition it is bridgeless. Analogously, a set of added wires (which can bend to reduce the distance between the points they connect, but cannot expand) makes the grid rigid if an associated directed graph is strongly connected. Therefore, reinterpreting Robbins' theorem for this application, the doubly braced structures are exactly the structures whose rods can be replaced by wires while remaining rigid.
Notes
References
Graph connectivity
Theorems in graph theory | Robbins' theorem | Mathematics | 758 |
23,714,114 | https://en.wikipedia.org/wiki/C6H9NO | The molecular formula C6H9NO may refer to:
2-Acetyl-1-pyrroline
Carbapenam
N-Vinylpyrrolidone
Molecular formulas | C6H9NO | Physics,Chemistry | 52 |
2,265,316 | https://en.wikipedia.org/wiki/Queen%27s%20metal | Queen's Metal, an alloy of nine parts tin and one each of antimony, lead, and bismuth, is intermediate in hardness between pewter and britannia metal. It was developed by English pewtersmiths in the 16th century; the recipe was initially a secret and was reserved for pieces made for the English royal family.
References
Fusible alloys
Tin alloys
Lead alloys
Antimony alloys
Bismuth alloys | Queen's metal | Chemistry,Materials_science | 85 |
27,675,483 | https://en.wikipedia.org/wiki/Bisphenol%20AF | Bisphenol AF (BPAF) is a fluorinated organic compound that is an analogue of bisphenol A in which the two methyl groups are replaced with trifluoromethyl groups. It exists as a white to light-gray powder.
Biological and Chemical Action
Bisphenol AF is an endocrine disrupting chemical.
Whereas BPA binds with human estrogen-related receptor gamma (ERR-γ), BPAF all but ignores ERR-γ. Instead, BPAF activates ERR-α and binds to and disables ERR-β.
The chemical shifts in 1H, 13C and 19F NMR spectroscopy are given in the literature.
Applications
Bisphenol AF is used as a crosslinking agent for certain fluoroelastomers and as a monomer for polyimides, polyamides, polyesters, polycarbonate copolymers and other specialty polymers. Polymers containing Bisphenol AF are useful in specialties such as high-temperature composites and electronic materials. Industries include cosmetics, chemical manufacturing, production of metals and rubber. It can also be a plastic additive.
See also
Bisphenol A
Bisphenol S
References
2,2-Bis(4-hydroxyphenyl)propanes
Endocrine disruptors
Trifluoromethyl compounds
Plasticizers | Bisphenol AF | Chemistry | 286 |
26,974,086 | https://en.wikipedia.org/wiki/Benzazocine | Benzazocine, also known as benzoazocine, is a chemical compound. It consists of a benzene ring bound to an azocine ring. A related compound is benzomorphan.
See also
Azocine
Benzomorphan | Benzazocine | Chemistry | 53 |
4,894,414 | https://en.wikipedia.org/wiki/Flash%20ADC | A flash ADC (also known as a direct-conversion ADC) is a type of analog-to-digital converter that uses a linear voltage ladder with a comparator at each "rung" of the ladder to compare the input voltage to successive reference voltages. Often these reference ladders are constructed of many resistors; however, modern implementations show that capacitive voltage division is also possible. The output of these comparators is generally fed into a digital encoder, which converts the inputs into a binary value (the collected outputs from the comparators can be thought of as a unary value).
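As a rough illustration of the conversion chain just described (reference taps, comparator bank, thermometer code, encoder), here is a minimal Python sketch of the ideal behaviour; it models none of the analog non-idealities discussed below, and all names are illustrative.

```python
def flash_adc(v_in, v_ref, n_bits):
    """Ideal n-bit flash conversion of v_in against full scale v_ref."""
    levels = 2 ** n_bits
    # One ladder tap, and hence one comparator, per transition level.
    taps = [v_ref * k / levels for k in range(1, levels)]
    # Comparator bank: thermometer (unary) code of the input.
    thermometer = [1 if v_in > t else 0 for t in taps]
    # Encoder: count the comparators that tripped high.
    return sum(thermometer)

print(flash_adc(0.61, 1.0, 3))   # 4, i.e. binary code 100
```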
Benefits and drawbacks
Flash converters are high-speed compared to many other ADCs, which usually narrow in on the correct answer over a series of stages. Compared to these, a flash converter is also quite simple and, apart from the analog comparators, only requires logic for the final conversion to binary.
For best accuracy, a track-and-hold circuit is often inserted in front of an ADC input. This is needed for many ADC types (like successive approximation ADC), but for flash ADCs, there is no real need for this because the comparators are the sampling devices.
A flash converter requires many comparators compared to other ADCs, especially as the precision increases. For example, a flash converter requires 2^n − 1 comparators for an n-bit conversion. The size, power consumption, and cost of all those comparators make flash converters generally impractical for precisions much greater than 8 bits (255 comparators). In place of these comparators, most other ADCs substitute more complex logic and/or analog circuitry that can be scaled more easily for increased precision.
Implementation
Flash ADCs have been implemented in many technologies, varying from silicon-based bipolar (BJT) and complementary metal–oxide–semiconductor (CMOS) technologies to rarely used III-V technologies. An ADC of this type is often among the first medium-sized analog circuits used to verify a new fabrication technology.
The earliest implementations consisted of a reference ladder of well-matched resistors connected to a reference voltage. Each tap at the resistor ladder is used for one comparator, possibly preceded by an amplification stage, and thus generates a logical 0 or 1 depending on whether the measured voltage is above or below the reference voltage of the voltage divider. The reason to add an amplifier is twofold: it amplifies the voltage difference and therefore suppresses the comparator offset, and the kick-back noise of the comparator towards the reference ladder is also strongly suppressed. Typically, designs from 4-bit up to 6-bit, and sometimes 7-bit, are produced.
Designs with power-saving capacitive reference ladders have been demonstrated. In addition to clocking the comparator(s), these systems also sample the reference value on the input stage. As the sampling is done at a very high rate, the leakage of the capacitors is negligible.
Recently, offset calibration has been introduced into flash ADC designs. Instead of high-precision analog circuits (which increase the component size to suppress variation), comparators with relatively large offset errors are measured and adjusted. Then, a test signal is applied, and the offset of each comparator is calibrated to below the LSB value of the ADC.
Another improvement to many flash ADCs is the inclusion of digital error correction. When the ADC is used in harsh environments or constructed from very small integrated circuit processes, there is a heightened risk that a single comparator will randomly change state resulting in a wrong code. Bubble error correction is a digital correction mechanism that prevents a comparator that has, for example, tripped high from reporting logic high if it is surrounded by comparators that are reporting logic low.
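Bubble suppression can be implemented in several ways; a common textbook scheme, assumed here purely for illustration, is three-input majority voting over each comparator output and its two neighbours, so that an isolated mis-tripped comparator is outvoted:

```python
def correct_bubbles(thermometer):
    """Majority-vote each bit with its neighbours (assumes 1s below, 0s above)."""
    padded = [1] + list(thermometer) + [0]
    return [1 if padded[i - 1] + padded[i] + padded[i + 1] >= 2 else 0
            for i in range(1, len(padded) - 1)]

code = [1, 1, 0, 0, 1, 0, 0]        # one comparator has tripped high in error
print(correct_bubbles(code))        # [1, 1, 0, 0, 0, 0, 0]
```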
Folding ADC
The number of comparators can be reduced somewhat by adding a folding circuit in front, making a so-called folding ADC. Instead of using the comparators in a flash ADC only once, the folding ADC re-uses the comparators multiple times during a ramp input signal. If an m-times folding circuit is used in an n-bit ADC, the actual number of comparators can be reduced from 2^n − 1 to 2^n/m (there is always one needed to detect the range crossover). Typical folding circuits are the Gilbert multiplier and analog wired-OR circuits. The savings are made concrete in the sketch below.
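The sketch compares a full flash with an m-times folding front end; treating the extra range-crossover comparator as a simple +1 and rounding down are assumptions made for illustration.

```python
def flash_comparators(n_bits):
    return 2 ** n_bits - 1

def folding_comparators(n_bits, m):
    # 2^n/m comparators re-used m times, plus one range-crossover detector
    return 2 ** n_bits // m + 1

for n in (6, 8):
    print(n, flash_comparators(n), folding_comparators(n, 4))
# 6 63 17
# 8 255 65
```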
Application
The very high sample rate of this type of ADC enables high-frequency applications (typically in a few GHz range) like radar detection, wideband radio receivers, electronic test equipment, and optical communication links. Moreover, the flash ADC is often embedded in a large IC containing many digital decoding functions.
Also, a small flash ADC circuit may be present inside a delta-sigma modulation loop.
Flash ADCs are also used in NAND flash memory, where up to 3 bits are stored per cell as 8 voltage levels on floating gates.
References
Analog-to-Digital Conversion
Understanding Flash ADCs
"Integrated Analog-to-Digital and Digital-to-Analog Converters", R. van de Plassche, ADCs, Kluwer Academic Publishers, 1994.
"A Precise Four-Quadrant Multiplier with Subnanosecond Response", Barrie Gilbert, IEEE Journal of Solid-State Circuits, Vol. 3, No. 4 (1968), pp. 365–373
Electronic circuits
Analog circuits | Flash ADC | Engineering | 1,152
1,772,133 | https://en.wikipedia.org/wiki/%28B%2C%20N%29%20pair | In mathematics, a (B, N) pair is a structure on groups of Lie type that allows one to give uniform proofs of many results, instead of giving a large number of case-by-case proofs. Roughly speaking, it shows that all such groups are similar to the general linear group over a field. They were introduced by the mathematician Jacques Tits, and are also sometimes known as Tits systems.
Definition
A (B, N) pair is a pair of subgroups B and N of a group G such that the following axioms hold:
G is generated by B and N.
The intersection, T, of B and N is a normal subgroup of N.
The group W = N/T is generated by a set S of elements of order 2 such that
If s is an element of S and w is an element of W then sBw is contained in the union of BswB and BwB.
No element of S normalizes B.
The set S is uniquely determined by B and N and the pair (W,S) is a Coxeter system.
Terminology
BN pairs are closely related to reductive groups and the terminology in both subjects overlaps. The size of S is called the rank. We call
B the (standard) Borel subgroup,
T the (standard) Cartan subgroup, and
W the Weyl group.
A subgroup of G is called
parabolic if it contains a conjugate of B,
standard parabolic if, in fact, it contains B itself, and
a Borel (or minimal parabolic) if it is a conjugate of B.
Examples
Abstract examples of (B, N) pairs arise from certain group actions.
Suppose that G is any doubly transitive permutation group on a set E with more than 2 elements. We let B be the subgroup of G fixing a point x, and we let N be the subgroup fixing or exchanging 2 points x and y. The subgroup T is then the set of elements fixing both x and y, and W has order 2 and its nontrivial element is represented by anything exchanging x and y.
Conversely, if G has a (B, N) pair of rank 1, then the action of G on the cosets of B is doubly transitive. So (B, N) pairs of rank 1 are more or less the same as doubly transitive actions on sets with more than 2 elements.
More concrete examples of (B, N) pairs can be found in reductive groups.
Suppose that G is the general linear group GLn K over a field K. We take B to be the upper triangular matrices, T to be the diagonal matrices, and N to be the monomial matrices, i.e. matrices with exactly one non-zero element in each row and column. There are n − 1 generators, represented by the matrices obtained by swapping two adjacent rows of a diagonal matrix. The Weyl group is the symmetric group on n letters.
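For the smallest instance of this example, G = GL2(F2) (a group of order 6), everything can be checked by brute force. The following Python sketch enumerates G, forms B, N and T as above, and confirms that G = B ∪ BsB for the row-swap matrix s, a special case of the Bruhat decomposition discussed under Properties below; the code is illustrative and hard-wired to the 2 × 2 case.

```python
from itertools import product

def mul(a, b):
    # 2x2 matrix product over F_2
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def det(a):
    return (a[0][0] * a[1][1] + a[0][1] * a[1][0]) % 2   # minus = plus mod 2

mats = [((w, x), (y, z)) for w, x, y, z in product((0, 1), repeat=4)]
G = [g for g in mats if det(g) == 1]                 # GL(2, F_2), order 6
B = [g for g in G if g[1][0] == 0]                   # upper triangular
N = [g for g in G if all(sum(r) == 1 for r in g)     # monomial matrices
     and all(g[0][j] + g[1][j] == 1 for j in range(2))]
T = [g for g in G if g in B and g in N]              # B intersect N
s = ((0, 1), (1, 0))                                 # nontrivial Weyl element

BsB = {mul(mul(b1, s), b2) for b1 in B for b2 in B}
print(len(G), len(B), len(N), len(T))                # 6 2 2 1
print(set(G) == set(B) | BsB)                        # True: G = B u BsB
```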
More generally, if G is a reductive group over a field K then the group G = G(K) has a (B, N) pair in which
B = P(K), where P is a minimal parabolic subgroup of G, and
N = N(K), where N is the normalizer of a split maximal torus contained in P.
In particular, any finite group of Lie type has the structure of a (B, N) pair.
Over the field of two elements, the Cartan subgroup is trivial in this example.
A semisimple simply-connected algebraic group over a local field has a (B, N) pair where B is an Iwahori subgroup.
Properties
Bruhat decomposition
The Bruhat decomposition states that G = BWB. More precisely, the double cosets B\G/B are represented by a set of lifts of W to N.
Parabolic subgroups
Every parabolic subgroup equals its normalizer in G.
Every standard parabolic is of the form BW(X)B for some subset X of S, where W(X) denotes the Coxeter subgroup generated by X. Moreover, two standard parabolics are conjugate if and only if their sets X are the same. Hence there is a bijection between subsets of S and standard parabolics. More generally, this bijection extends to conjugacy classes of parabolic subgroups.
Tits's simplicity theorem
BN-pairs can be used to prove that many groups of Lie type are simple modulo their centers. More precisely, if G has a BN-pair such that B is a solvable group, the intersection of all conjugates of B is trivial, and the set of generators of W cannot be decomposed into two non-empty commuting sets, then G is simple whenever it is a perfect group. In practice all of these conditions except for G being perfect are easy to check. Checking that G is perfect needs some slightly messy calculations (and in fact there are a few small groups of Lie type which are not perfect). But showing that a group is perfect is usually far easier than showing it is simple.
Citations
References
Section 6.2.6 discusses BN pairs.
Chapitre IV, § 2 is the standard reference for BN pairs. | (B, N) pair | Mathematics | 1,094
750,326 | https://en.wikipedia.org/wiki/Compact%20group | In mathematics, a compact (topological) group is a topological group whose topology realizes it as a compact topological space. Compact groups are a natural generalization of finite groups with the discrete topology and have properties that carry over in significant fashion. Compact groups have a well-understood theory, in relation to group actions and representation theory.
In the following we will assume all groups are Hausdorff spaces.
Compact Lie groups
Lie groups form a class of topological groups, and the compact Lie groups have a particularly well-developed theory. Basic examples of compact Lie groups include
the circle group T and the torus groups Tn,
the orthogonal group O(n), the special orthogonal group SO(n) and its covering spin group Spin(n),
the unitary group U(n) and the special unitary group SU(n),
the compact forms of the exceptional Lie groups: G2, F4, E6, E7, and E8.
The classification theorem of compact Lie groups states that up to finite extensions and finite covers this exhausts the list of examples (which already includes some redundancies). This classification is described in more detail in the next subsection.
Classification
Given any compact Lie group G one can take its identity component G0, which is connected. The quotient group G/G0 is the group of components π0(G) which must be finite since G is compact. We therefore have a finite extension
1 → G0 → G → π0(G) → 1.
Meanwhile, for connected compact Lie groups, we have the following result:
Theorem: Every connected compact Lie group is the quotient by a finite central subgroup of a product of a simply connected compact Lie group and a torus.
Thus, the classification of connected compact Lie groups can in principle be reduced to knowledge of the simply connected compact Lie groups together with information about their centers. (For information about the center, see the section below on fundamental group and center.)
Finally, every compact, connected, simply-connected Lie group K is a product of finitely many compact, connected, simply-connected simple Lie groups Ki each of which is isomorphic to exactly one of the following:
The compact symplectic group Sp(n), n ≥ 1
The special unitary group SU(n), n ≥ 3
The spin group Spin(n), n ≥ 7
or one of the five exceptional groups G2, F4, E6, E7, and E8. The restrictions on n are to avoid special isomorphisms among the various families for small values of n. For each of these groups, the center is known explicitly. The classification is through the associated root system (for a fixed maximal torus), which in turn are classified by their Dynkin diagrams.
The classification of compact, simply connected Lie groups is the same as the classification of complex semisimple Lie algebras. Indeed, if K is a simply connected compact Lie group, then the complexification of the Lie algebra of K is semisimple. Conversely, every complex semisimple Lie algebra has a compact real form isomorphic to the Lie algebra of a compact, simply connected Lie group.
Maximal tori and root systems
A key idea in the study of a connected compact Lie group K is the concept of a maximal torus, that is a subgroup T of K that is isomorphic to a product of several copies of the circle group S1 and that is not contained in any larger subgroup of this type. A basic example is the case K = U(n), in which case we may take T to be the group of diagonal elements in U(n). A basic result is the torus theorem which states that every element of K belongs to a maximal torus and that all maximal tori are conjugate.
The maximal torus in a compact group plays a role analogous to that of the Cartan subalgebra in a complex semisimple Lie algebra. In particular, once a maximal torus has been chosen, one can define a root system and a Weyl group similar to what one has for semisimple Lie algebras. These structures then play an essential role both in the classification of connected compact groups (described above) and in the representation theory of a fixed such group (described below).
The root systems associated to the simple compact groups appearing in the classification of simply connected compact groups are as follows:
The special unitary groups SU(n) correspond to the root system An−1
The odd spin groups Spin(2n + 1) correspond to the root system Bn
The compact symplectic groups Sp(n) correspond to the root system Cn
The even spin groups Spin(2n) correspond to the root system Dn
The exceptional compact Lie groups correspond to the five exceptional root systems G2, F4, E6, E7, or E8
Fundamental group and center
It is important to know whether a connected compact Lie group is simply connected, and if not, to determine its fundamental group. For compact Lie groups, there are two basic approaches to computing the fundamental group. The first approach applies to the classical compact groups SO(n), U(n), SU(n), and Sp(n) and proceeds by induction on n. The second approach uses the root system and applies to all connected compact Lie groups.
It is also important to know the center of a connected compact Lie group. The center of a classical group G can easily be computed "by hand," and in most cases consists simply of whatever roots of the identity are in G. (The group SO(2) is an exception—the center is the whole group, even though most elements are not roots of the identity.) Thus, for example, the center of SU(n) consists of nth roots of unity times the identity, a cyclic group of order n.
In general, the center can be expressed in terms of the root lattice and the kernel of the exponential map for the maximal torus. The general method shows, for example, that the simply connected compact group corresponding to the exceptional root system G2 has trivial center. Thus, the compact G2 group is one of very few simple compact groups that are simultaneously simply connected and center free. (The others are F4 and E8.)
Further examples
Amongst groups that are not Lie groups, and so do not carry the structure of a manifold, examples are the additive group Zp of p-adic integers, and constructions from it. In fact any profinite group is a compact group. This means that Galois groups are compact groups, a basic fact for the theory of algebraic extensions in the case of infinite degree.
Pontryagin duality provides a large supply of examples of compact commutative groups. These are in duality with abelian discrete groups.
Haar measure
Compact groups all carry a Haar measure, which will be invariant by both left and right translation (the modulus function must be a continuous homomorphism to the positive reals (R+, ×), and so identically 1). In other words, these groups are unimodular. Haar measure is easily normalized to be a probability measure, analogous to dθ/2π on the circle.
Such a Haar measure is in many cases easy to compute; for example for orthogonal groups it was known to Adolf Hurwitz, and in the Lie group cases can always be given by an invariant differential form. In the profinite case there are many subgroups of finite index, and Haar measure of a coset will be the reciprocal of the index. Therefore, integrals are often computable quite directly, a fact applied constantly in number theory.
If K is a compact group and m is the associated Haar measure, the Peter–Weyl theorem provides a decomposition of L²(K, m) as an orthogonal direct sum of finite-dimensional subspaces of matrix entries for the irreducible representations of K.
Representation theory
The representation theory of compact groups (not necessarily Lie groups and not necessarily connected) was founded by the Peter–Weyl theorem. Hermann Weyl went on to give the detailed character theory of the compact connected Lie groups, based on maximal torus theory. The resulting Weyl character formula was one of the influential results of twentieth century mathematics. The combination of the Peter–Weyl theorem and the Weyl character formula led Weyl to a complete classification of the representations of a connected compact Lie group; this theory is described in the next section.
A combination of Weyl's work and Cartan's theorem gives a survey of the whole representation theory of compact groups G. That is, by the Peter–Weyl theorem the irreducible unitary representations ρ of G are into a unitary group (of finite dimension) and the image will be a closed subgroup of the unitary group by compactness. Cartan's theorem states that Im(ρ) must itself be a Lie subgroup in the unitary group. If G is not itself a Lie group, there must be a kernel to ρ. Further one can form an inverse system, for the kernel of ρ smaller and smaller, of finite-dimensional unitary representations, which identifies G as an inverse limit of compact Lie groups. Here the fact that in the limit a faithful representation of G is found is another consequence of the Peter–Weyl theorem.
The unknown part of the representation theory of compact groups is thereby, roughly speaking, thrown back onto the complex representations of finite groups. This theory is rather rich in detail, but is qualitatively well understood.
Representation theory of a connected compact Lie group
Certain simple examples of the representation theory of compact Lie groups can be worked out by hand, such as the representations of the rotation group SO(3), the special unitary group SU(2), and the special unitary group SU(3). We focus here on the general theory. See also the parallel theory of representations of a semisimple Lie algebra.
Throughout this section, we fix a connected compact Lie group K and a maximal torus T in K.
Representation theory of T
Since T is commutative, Schur's lemma tells us that each irreducible representation ρ of T is one-dimensional:
ρ : T → GL(1; C) = C*.
Since, also, T is compact, ρ must actually map into the circle group S1.
To describe these representations concretely, we let t be the Lie algebra of T and we write points of T as h = e^H with H in t.
In such coordinates, ρ will have the form
ρ(e^H) = e^{iλ(H)}
for some linear functional λ on t.
Now, since the exponential map is not injective, not every such linear functional λ gives rise to a well-defined map of T into S1. Rather, let Γ denote the kernel of the exponential map:
Γ = {H ∈ t : e^{2πH} = Id},
where Id is the identity element of T. (We scale the exponential map here by a factor of 2π in order to avoid such factors elsewhere.)
Then for λ to give a well-defined map ρ, λ must satisfy
λ(H) ∈ Z for all H ∈ Γ,
where Z is the set of integers. A linear functional λ satisfying this condition is called an analytically integral element. This integrality condition is related to, but not identical to, the notion of integral element in the setting of semisimple Lie algebras.
Suppose, for example, T is just the group S1 of complex numbers of absolute value 1. The Lie algebra is the set of purely imaginary numbers, and the kernel of the (scaled) exponential map is the set of numbers of the form in where n is an integer. A linear functional λ takes integer values on all such numbers if and only if it is of the form λ(iθ) = kθ for some integer k. The irreducible representations of T in this case are one-dimensional and of the form
ρ(e^{iθ}) = e^{ikθ}, k an integer.
Representation theory of K
We now let Π denote a finite-dimensional irreducible representation of K (over C). We then consider the restriction of Π to T. This restriction is not irreducible unless Π is one-dimensional. Nevertheless, the restriction decomposes as a direct sum of irreducible representations of T. (Note that a given irreducible representation of T may occur more than once.) Now, each irreducible representation of T is described by a linear functional λ as in the preceding subsection. If a given λ occurs at least once in the decomposition of the restriction of Π to T, we call λ a weight of Π. The strategy of the representation theory of K is to classify the irreducible representations in terms of their weights.
We now briefly describe the structures needed to formulate the theorem; more details can be found in the article on weights in representation theory. We need the notion of a root system for K (relative to a given maximal torus T). The construction of this root system is very similar to the construction for complex semisimple Lie algebras. Specifically, the roots are the nonzero weights for the adjoint action of T on the complexified Lie algebra of K. The root system R has all the usual properties of a root system, except that the elements of R may not span t. We then choose a base Δ for R and we say that an integral element λ is dominant if ⟨λ, α⟩ ≥ 0 for all α in Δ. Finally, we say that one weight is higher than another if their difference can be expressed as a linear combination of elements of Δ with non-negative coefficients.
The irreducible finite-dimensional representations of K are then classified by a theorem of the highest weight, which is closely related to the analogous theorem classifying representations of a semisimple Lie algebra. The result says that:
every irreducible representation has a highest weight,
the highest weight is always a dominant, analytically integral element,
two irreducible representations with the same highest weight are isomorphic, and
every dominant, analytically integral element arises as the highest weight of an irreducible representation.
The theorem of the highest weight for representations of K is then almost the same as for semisimple Lie algebras, with one notable exception: The concept of an integral element is different. The weights of a representation are analytically integral in the sense described in the previous subsection. Every analytically integral element is integral in the Lie algebra sense, but not the other way around. (This phenomenon reflects that, in general, not every representation of the Lie algebra comes from a representation of the group K.) On the other hand, if K is simply connected, the set of possible highest weights in the group sense is the same as the set of possible highest weights in the Lie algebra sense.
The Weyl character formula
If Π is a representation of K, we define the character of Π to be the function X : K → C given by
X(x) = trace(Π(x)).
This function is easily seen to be a class function, i.e., X(yxy⁻¹) = X(x) for all x and y in K. Thus, X is determined by its restriction to T.
The study of characters is an important part of the representation theory of compact groups. One crucial result, which is a corollary of the Peter–Weyl theorem, is that the characters form an orthonormal basis for the set of square-integrable class functions in K. A second key result is the Weyl character formula, which gives an explicit formula for the character—or, rather, the restriction of the character to T—in terms of the highest weight of the representation.
In the closely related representation theory of semisimple Lie algebras, the Weyl character formula is an additional result established after the representations have been classified. In Weyl's analysis of the compact group case, however, the Weyl character formula is actually a crucial part of the classification itself. Specifically, in Weyl's analysis of the representations of K, the hardest part of the theorem—showing that every dominant, analytically integral element is actually the highest weight of some representation—is proved in a totally different way from the usual Lie algebra construction using Verma modules. In Weyl's approach, the construction is based on the Peter–Weyl theorem and an analytic proof of the Weyl character formula. Ultimately, the irreducible representations of K are realized inside the space of continuous functions on K.
The SU(2) case
We now consider the case of the compact group SU(2). The representations are often considered from the Lie algebra point of view, but we here look at them from the group point of view. We take the maximal torus to be the set of matrices of the form
diag(e^{iθ}, e^{−iθ}).
According to the example discussed above in the section on representations of T, the analytically integral elements are labeled by integers m, so that the dominant, analytically integral elements are non-negative integers m. The general theory then tells us that for each m, there is a unique irreducible representation of SU(2) with highest weight m.
Much information about the representation corresponding to a given m is encoded in its character. Now, the Weyl character formula says, in this case, that the character is given by
X(θ) = sin((m + 1)θ)/sin(θ).
We can also write the character as a sum of exponentials as follows:
X(θ) = e^{imθ} + e^{i(m−2)θ} + ⋯ + e^{−imθ}.
(If we use the formula for the sum of a finite geometric series on the above expression and simplify, we obtain the earlier expression.)
From this last expression and the standard formula for the character in terms of the weights of the representation, we can read off that the weights of the representation are
m, m − 2, …, −m,
each with multiplicity one. (The weights are the integers appearing in the exponents of the exponentials and the multiplicities are the coefficients of the exponentials.) Since there are m + 1 weights, each with multiplicity 1, the dimension of the representation is m + 1. Thus, we recover much of the information about the representations that is usually obtained from the Lie algebra computation.
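These closed forms are easy to check numerically. The sketch below compares the ratio form of the character with the exponential sum for the sample highest weight m = 3, and evaluates the sum near θ = 0 to recover the dimension m + 1; the numbers chosen are arbitrary test values.

```python
import cmath, math

def character_ratio(m, theta):
    return math.sin((m + 1) * theta) / math.sin(theta)

def character_sum(m, theta):
    # weights m, m-2, ..., -m, each with multiplicity one
    return sum(cmath.exp(1j * k * theta) for k in range(-m, m + 1, 2)).real

print(character_ratio(3, 0.7))    # 0.5200...
print(character_sum(3, 0.7))      # same value
print(character_sum(3, 1e-9))     # ~ 4.0, the dimension m + 1
```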
An outline of the proof
We now outline the proof of the theorem of the highest weight, following the original argument of Hermann Weyl. We continue to let K be a connected compact Lie group and T a fixed maximal torus in K. We focus on the most difficult part of the theorem, showing that every dominant, analytically integral element is the highest weight of some (finite-dimensional) irreducible representation.
The tools for the proof are the following:
The torus theorem.
The Weyl integral formula.
The Peter–Weyl theorem for class functions, which states that the characters of the irreducible representations form an orthonormal basis for the space of square integrable class functions on K.
With these tools in hand, we proceed with the proof. The first major step in the argument is to prove the Weyl character formula. The formula states that if Π is an irreducible representation with highest weight λ, then the character X of Π satisfies
X(e^H) = ( Σ_{w ∈ W} det(w) e^{i⟨w·(λ+ρ), H⟩} ) / ( Σ_{w ∈ W} det(w) e^{i⟨w·ρ, H⟩} )
for all H in the Lie algebra t of T. Here ρ is half the sum of the positive roots. (The notation uses the convention of "real weights"; this convention requires an explicit factor of i in the exponent.) Weyl's proof of the character formula is analytic in nature and hinges on the fact that the L² norm of the character is 1. Specifically, if there were any additional terms in the numerator, the Weyl integral formula would force the norm of the character to be greater than 1.
Next, we let Φλ denote the function on the right-hand side of the character formula. We show that even if λ is not known to be the highest weight of a representation, Φλ is a well-defined, Weyl-invariant function on T, which therefore extends to a class function on K. Then using the Weyl integral formula, one can show that as λ ranges over the set of dominant, analytically integral elements, the functions Φλ form an orthonormal family of class functions. We emphasize that we do not currently know that every such λ is the highest weight of a representation; nevertheless, the expressions on the right-hand side of the character formula give a well-defined set of functions Φλ, and these functions are orthonormal.
Now comes the conclusion. The set of all Φλ—with λ ranging over the dominant, analytically integral elements—forms an orthonormal set in the space of square integrable class functions. But by the Weyl character formula, the characters of the irreducible representations form a subset of the Φλ's. And by the Peter–Weyl theorem, the characters of the irreducible representations form an orthonormal basis for the space of square integrable class functions. If there were some λ that is not the highest weight of a representation, then the corresponding Φλ would not be the character of a representation. Thus, the characters would be a proper subset of the set of Φλ's. But then we have an impossible situation: an orthonormal basis (the set of characters of the irreducible representations) would be contained in a strictly larger orthonormal set (the set of Φλ's). Thus, every λ must actually be the highest weight of a representation.
Duality
The topic of recovering a compact group from its representation theory is the subject of the Tannaka–Krein duality, now often recast in terms of Tannakian category theory.
From compact to non-compact groups
The influence of the compact group theory on non-compact groups was formulated by Weyl in his unitarian trick. Inside a general semisimple Lie group there is a maximal compact subgroup, and the representation theory of such groups, developed largely by Harish-Chandra, uses intensively the restriction of a representation to such a subgroup, and also the model of Weyl's character theory.
See also
Peter–Weyl theorem
Maximal torus
Root system
Locally compact group
p-compact group
Protorus
Classifying finite-dimensional representations of Lie algebras
Weights in the representation theory of semisimple Lie algebras
References
Bibliography
Topological groups
Lie groups
Fourier analysis | Compact group | Mathematics | 4,263 |
44,460,166 | https://en.wikipedia.org/wiki/Staggered%20tuning | Staggered tuning is a technique used in the design of multi-stage tuned amplifiers whereby each stage is tuned to a slightly different frequency. In comparison to synchronous tuning (where each stage is tuned identically) it produces a wider bandwidth at the expense of reduced gain. It also produces a sharper transition from the passband to the stopband. Both staggered tuning and synchronous tuning circuits are easier to tune and manufacture than many other filter types.
The function of stagger-tuned circuits can be expressed as a rational function and hence they can be designed to any of the major filter responses such as Butterworth and Chebyshev. The poles of the circuit are easy to manipulate to achieve the desired response because of the amplifier buffering between stages.
Applications include television IF amplifiers (mostly 20th century receivers) and wireless LAN.
Rationale
Staggered tuning improves the bandwidth of a multi-stage tuned amplifier at the expense of the overall gain. Staggered tuning also increases the steepness of passband skirts and hence improves selectivity.
The value of staggered tuning is best explained by first looking at the shortcomings of tuning every stage identically. This method is called synchronous tuning. Each stage of the amplifier will reduce the bandwidth. In an amplifier with multiple identical stages, the −3 dB points of the response after the first stage will become the −6 dB points of the second stage. Each successive stage will add a further −3 dB to what was the band edge of the first stage. Thus the bandwidth becomes progressively narrower with each additional stage.
As an example, a four-stage amplifier will have its −3 dB points at the −0.75 dB points of an individual stage. The fractional bandwidth of an LC circuit is given by
B = Δω/ω0 = √(m − 1)/Q,
where m is the power ratio of the power at resonance to that at the band edge frequency (equal to 2 for the −3 dB point and 1.19 for the −0.75 dB point) and Q is the quality factor.
The bandwidth is thus reduced by a factor of √(m − 1). In terms of the number of stages n, m = 2^(1/n). Thus, the four-stage synchronously tuned amplifier will have a bandwidth of only 44% of a single stage. Even in a two-stage amplifier the bandwidth is reduced to 64% of the original. Staggered tuning allows the bandwidth to be widened at the expense of overall gain. The overall gain is reduced because when any one stage is at resonance (and thus maximum gain) the others are not, unlike synchronous tuning where all stages are at maximum gain at the same frequency. A two-stage stagger-tuned amplifier will have a gain less than a synchronously tuned amplifier.
Even in a design that is intended to be synchronously tuned, some staggered tuning effect is inevitable because of the practical impossibility of keeping all tuned circuits perfectly in step and because of feedback effects. This can be a problem in very narrow band applications where essentially only one spot frequency is of interest, such as a local oscillator feed or a wave trap. The overall gain of a synchronously tuned amplifier will always be less than the theoretical maximum because of this.
Both synchronously tuned and stagger-tuned schemes have a number of advantages over schemes that place all the tuning components in a single aggregated filter circuit separate from the amplifier such as ladder networks or coupled resonators. One advantage is that they are easy to tune. Each resonator is buffered from the others by the amplifier stages so have little effect on each other. The resonators in aggregated circuits, on the other hand, will all interact with each other, particularly their nearest neighbours. Another advantage is that the components need not be close to ideal. Every LC resonator is directly working into a resistor which lowers the Q anyway so any losses in the L and C components can be absorbed into this resistor in the design. Aggregated designs usually require high Q resonators. Also, stagger-tuned circuits have resonator components with values that are quite close to each other and in synchronously tuned circuits they can be identical. The spread of component values is thus less in stagger-tuned circuits than in aggregated circuits.
Design
Tuned amplifiers such as the one illustrated at the beginning of this article can be more generically depicted as a chain of transconductance amplifiers each loaded with a tuned circuit.
where for each stage (omitting the suffixes)
gm is the amplifier transconductance
C is the tuned circuit capacitance
L is the tuned circuit inductance
G is the sum of the amplifier output conductance and the input conductance of the next amplifier.
Stage gain
The gain A(s) of one stage of this amplifier is given by
A(s) = −(gm/C)·s / (s² + (G/C)s + 1/(LC)),
where s is the complex frequency operator.
This can be written in a more generic form, that is, not assuming that the resonators are the LC type, with the following substitutions,
ω0 = 1/√(LC) (the resonant frequency)
A0 = gm/G (the gain at resonance)
Q0 = ω0C/G (the stage quality factor)
Resulting in,
A(s) = −A0(ω0/Q0)s / (s² + (ω0/Q0)s + ω0²).
Stage bandwidth
The gain expression can be given as a function of (angular) frequency by making the substitution s = iω, where i is the imaginary unit and ω is the angular frequency:
A(iω) = −A0(ω0/Q0)(iω) / (ω0² − ω² + i(ω0/Q0)ω).
The frequency at the band edges, ωc, can be found from this expression by equating the value of the gain at the band edge to the magnitude of the expression,
|A(iωc)| = A0/√m,
where m is defined as above and equal to two if the −3 dB points are desired.
Solving this for ωc and taking the difference between the two positive solutions finds the bandwidth Δω,
Δω = √(m − 1)·ω0/Q0,
and the fractional bandwidth B,
B = Δω/ω0 = √(m − 1)/Q0.
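The band-edge result can be checked numerically. The sketch below evaluates |A(iω)| on a dense grid for assumed test values of A0, ω0 and Q0, measures the width of the region where the gain stays above A0/√m, and compares it with √(m − 1)·ω0/Q0.

```python
import numpy as np

A0, w0, Q0, m = 10.0, 1e6, 5.0, 2.0        # assumed test values; m = 2 -> -3 dB

def gain(w):
    s = 1j * w
    return np.abs(-A0 * (w0 / Q0) * s / (s**2 + (w0 / Q0) * s + w0**2))

w = np.linspace(0.5 * w0, 1.5 * w0, 200_001)
band = w[gain(w) >= A0 / np.sqrt(m)]
print(band[-1] - band[0])                  # ~ 200000 rad/s (grid-limited)
print(np.sqrt(m - 1) * w0 / Q0)            # 200000.0 = sqrt(m-1)*w0/Q0
```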
Overall response
The overall response of the amplifier is given by the product of the individual stages,
A_T(s) = A1(s)·A2(s) ⋯ An(s).
It is desirable to be able to design the filter from a standard low-pass prototype filter of the required specification. Frequently, a smooth Butterworth response will be chosen but other polynomial functions can be used that allow ripple in the response. A popular choice for a polynomial with ripple is the Chebyshev response for its steep skirt. For the purpose of transformation, the stage gain expression can be rewritten in the more suggestive form,
A(s) = −A0 / (1 + Q0(s/ω0 + ω0/s)).
This can be transformed into a low-pass prototype filter with the transform
s′/ω′c → Q0(s/ω0 + ω0/s),
where ω′c is the cutoff frequency of the low-pass prototype.
This can be done straightforwardly for the complete filter in the case of synchronously tuned amplifiers where every stage has the same ω0 but for a stagger-tuned amplifier there is no simple analytical solution to the transform. Stagger-tuned designs can be approached instead by calculating the poles of a low-pass prototype of the desired form (e.g. Butterworth) and then transforming those poles to a band-pass response. The poles so calculated can then be used to define the tuned circuits of the individual stages.
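As a sketch of this design route, the code below takes the poles of a normalised Butterworth low-pass prototype (scipy.signal.buttap) and maps each one to a band-pass pole using the narrowband approximation given in the next subsection, then reads off each stage's centre frequency and Q. The centre frequency and bandwidth are assumed test values, and the narrowband mapping is an approximation rather than the exact transform.

```python
import numpy as np
from scipy.signal import buttap

w0B = 2 * np.pi * 10e6        # assumed band-pass centre frequency (rad/s)
dw = 2 * np.pi * 1e6          # assumed overall bandwidth w0B/Qeff (rad/s)

zeros, q, k = buttap(3)       # poles of a normalised Butterworth prototype

# Each prototype pole maps to one stage; its conjugate completes the pair.
for qk in q:
    pk = 1j * w0B + (dw / 2) * qk          # narrowband approximation
    w0k = abs(pk)                          # stage resonant frequency
    Q0k = -w0k / (2 * pk.real)             # stage quality factor
    print(f"f0 = {w0k / (2 * np.pi * 1e6):7.3f} MHz, Q0 = {Q0k:5.1f}")
```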
Poles
The stage gain can be rewritten in terms of the poles by factorising the denominator;
A(s) = −A0(ω0/Q0)s / ((s − p)(s − p*)),
where p, p* are a complex conjugate pair of poles,
and the overall response is,
A_T(s) = Π_k [ −ak·s / ((s − pk)(s − pk*)) ],
where the ak = A0kω0k/Q0k.
From the band-pass to low-pass transform given above, an expression can be found for the poles pk in terms of the poles of the low-pass prototype, qk, the desired band-pass centre frequency ω0B, and the effective Q, Qeff, of the overall circuit.
Each pole in the prototype transforms to a complex conjugate pair of poles in the band-pass and corresponds to one stage of the amplifier. This expression is greatly simplified if the cutoff frequency of the prototype, ω′c, is set to the final filter bandwidth ω0B/Qeff.
In the case of a narrowband design, ω ≈ ω0B, which can be used to make a further simplification with the approximation
pk ≈ iω0B + qk/2.
These poles can be inserted into the stage gain expression in terms of poles. By comparing with the stage gain expression in terms of component values, those component values can then be calculated.
Applications
Staggered tuning is of most benefit in wideband applications. It was formerly commonly used in television receiver IF amplifiers. However, SAW filters are more likely to be used in that role nowadays. Staggered tuning has advantages in VLSI for radio applications such as wireless LAN. The low spread of component values make it much easier to implement in integrated circuits than traditional ladder networks.
See also
Double-tuned amplifier
References
Bibliography
Chattopadhyay, D., Electronics: Fundamentals and Applications, New Age International, 2006 .
Gulati, R. R., Modern Television Practice Principles, Technology and Servicing, New Age International, 2002 .
Iniewski, Krzysztof, CMOS Nanoelectronics: Analog and RF VLSI Circuits, McGraw Hill Professional, 2011 .
Maheswari, L. K.; Anand, M. M. S., Analog Electronics, PHI Learning, 2009 .
Moxon, L. A., Recent Advances in Radio Receivers, Cambridge University Press, 1949 .
Pederson, Donald O.; Mayaram, Kartikeya, Analog Integrated Circuits for Communication, Springer, 2007 .
Sedha, R. S., A Textbook of Electronic Circuits, S. Chand, 2008 .
Wiser, Robert, Tunable Bandpass RF Filters for CMOS Wireless Transmitters'', ProQuest, 2008 .
Electronic amplifiers
Signal processing filter | Staggered tuning | Chemistry,Technology | 1,888 |
51,331 | https://en.wikipedia.org/wiki/Dimensionless%20quantity | Dimensionless quantities, or quantities of dimension one, are quantities implicitly defined in a manner that prevents their aggregation into units of measurement. Typically expressed as ratios that align with another system, these quantities do not necessitate explicitly defined units. For instance, alcohol by volume (ABV) represents a volumetric ratio; its value remains independent of the specific units of volume used, such as in milliliters per milliliter (mL/mL).
The number one is recognized as a dimensionless base quantity. Radians serve as dimensionless units for angular measurements, derived from the universal ratio of 2π times the radius of a circle being equal to its circumference.
Dimensionless quantities play a crucial role serving as parameters in differential equations in various technical disciplines. In calculus, concepts like the unitless ratios in limits or derivatives often involve dimensionless quantities. In differential geometry, the use of dimensionless parameters is evident in geometric relationships and transformations. Physics relies on dimensionless numbers like the Reynolds number in fluid dynamics, the fine-structure constant in quantum mechanics, and the Lorentz factor in relativity. In chemistry, state properties and ratios such as mole fractions and concentration ratios are dimensionless.
History
Quantities having dimension one, dimensionless quantities, regularly occur in sciences, and are formally treated within the field of dimensional analysis. In the 19th century, French mathematician Joseph Fourier and Scottish physicist James Clerk Maxwell led significant developments in the modern concepts of dimension and unit. Later work by British physicists Osborne Reynolds and Lord Rayleigh contributed to the understanding of dimensionless numbers in physics. Building on Rayleigh's method of dimensional analysis, Edgar Buckingham proved the theorem (independently of French mathematician Joseph Bertrand's previous work) to formalize the nature of these quantities.
Numerous dimensionless numbers, mostly ratios, were coined in the early 1900s, particularly in the areas of fluid mechanics and heat transfer. Measuring logarithm of ratios as levels in the (derived) unit decibel (dB) finds widespread use nowadays.
There have been periodic proposals to "patch" the SI system to reduce confusion regarding physical dimensions. For example, a 2017 op-ed in Nature argued for formalizing the radian as a physical unit. The idea was rebutted on the grounds that such a change would raise inconsistencies for both established dimensionless groups, like the Strouhal number, and for mathematically distinct entities that happen to have the same units, like torque (a vector product) versus energy (a scalar product). In another instance in the early 2000s, the International Committee for Weights and Measures discussed naming the unit of 1 as the "uno", but the idea of just introducing a new SI name for 1 was dropped.
Buckingham theorem
The Buckingham theorem indicates that validity of the laws of physics does not depend on a specific unit system. A statement of this theorem is that any physical law can be expressed as an identity involving only dimensionless combinations (ratios or products) of the variables linked by the law (e. g., pressure and volume are linked by Boyle's Law – they are inversely proportional). If the dimensionless combinations' values changed with the systems of units, then the equation would not be an identity, and Buckingham's theorem would not hold.
Another consequence of the theorem is that the functional dependence between a certain number (say, n) of variables can be reduced by the number (say, k) of independent dimensions occurring in those variables to give a set of p = n − k independent, dimensionless quantities. For the purposes of the experimenter, different systems that share the same description by dimensionless quantity are equivalent.
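As a minimal numeric illustration of this counting, take the simple pendulum with variables period t, length l, mass m and gravitational acceleration g (so n = 4) over the base dimensions M, L and T; the rank of the dimension matrix gives k, and p = n − k counts the independent dimensionless groups. The example and variable choice are illustrative.

```python
import numpy as np

# Dimension matrix: columns t, l, m, g; rows give exponents of M, L, T.
dims = np.array([[0, 0, 1, 0],    # M: only the mass carries it
                 [0, 1, 0, 1],    # L: length and g = L*T^-2
                 [1, 0, 0, -2]])  # T: period and g

n = dims.shape[1]                 # number of variables
k = np.linalg.matrix_rank(dims)   # independent dimensions
print(n - k)                      # 1 group, e.g. t*sqrt(g/l)
```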
Integers
Integer numbers may represent dimensionless quantities. They can represent discrete quantities, which can also be dimensionless.
More specifically, counting numbers can be used to express countable quantities.
The concept is formalized as quantity number of entities (symbol N) in ISO 80000-1.
Examples include number of particles and population size. In mathematics, the "number of elements" in a set is termed cardinality. Countable nouns is a related linguistics concept.
Counting numbers, such as number of bits, can be compounded with units of frequency (inverse second) to derive units of count rate, such as bits per second.
Count data is a related concept in statistics.
The concept may be generalized by allowing non-integer numbers to account for fractions of a full item, e.g., number of turns equal to one half.
Ratios, proportions, and angles
Dimensionless quantities can be obtained as ratios of quantities that are not dimensionless, but whose dimensions cancel out in the mathematical operation. Examples of quotients of dimension one include calculating slopes or some unit conversion factors. Another set of examples is mass fractions or mole fractions, often written using parts-per notation such as ppm (= 10−6), ppb (= 10−9), and ppt (= 10−12), or perhaps confusingly as ratios of two identical units (kg/kg or mol/mol). For example, alcohol by volume, which characterizes the concentration of ethanol in an alcoholic beverage, could be written as mL/100 mL.
Other common proportions are percentages % (= 0.01), ‰ (= 0.001). Some angle units such as turn, radian, and steradian are defined as ratios of quantities of the same kind. In statistics the coefficient of variation is the ratio of the standard deviation to the mean and is used to measure the dispersion in the data.
It has been argued that quantities defined as ratios having equal dimensions in numerator and denominator are actually only unitless quantities and still have physical dimension defined as the quotient of the dimensions of the numerator and denominator.
For example, moisture content may be defined as a ratio of volumes (volumetric moisture, m3⋅m−3, dimension L3⋅L−3) or as a ratio of masses (gravimetric moisture, units kg⋅kg−1, dimension M⋅M−1); both would be unitless quantities, but of different dimension.
Dimensionless physical constants
Certain universal dimensioned physical constants, such as the speed of light in vacuum, the universal gravitational constant, the Planck constant, the Coulomb constant, and the Boltzmann constant can be normalized to 1 if appropriate units for time, length, mass, charge, and temperature are chosen. The resulting system of units is known as the natural units, specifically regarding these five constants, Planck units. However, not all physical constants can be normalized in this fashion. For example, the values of the following constants are independent of the system of units, cannot be defined, and can only be determined experimentally:
engineering strain, a measure of physical deformation defined as a change in length divided by the initial length.
fine-structure constant, α ≈ 1/137 which characterizes the magnitude of the electromagnetic interaction between electrons.
β (or μ) ≈ 1836, the proton-to-electron mass ratio. This ratio is the rest mass of the proton divided by that of the electron. An analogous ratio can be defined for any elementary particle.
Strong force coupling strength αs ≈ 1.
The tensor-to-scalar ratio r, a ratio between the contributions of tensor and scalar modes to the primordial power spectrum observed in the CMB.
The Immirzi–Barbero parameter γ, which characterizes the area gap in loop quantum gravity.
emissivity, which is the ratio of actual emitted radiation from a surface to that of an idealized surface at the same temperature
List
Physics and engineering
Lorentz factor – parameter used in the context of special relativity for time dilation, length contraction, and relativistic effects between observers moving at different velocities
Fresnel number – wavenumber (spatial frequency) over distance
Beta (plasma physics) – ratio of plasma pressure to magnetic pressure, used in magnetospheric physics as well as fusion plasma physics.
Thiele modulus – describes the relationship between diffusion and reaction rate in porous catalyst pellets with no mass transfer limitations.
Numerical aperture – characterizes the range of angles over which the system can accept or emit light.
Zukoski number, usually noted Q*, is the ratio of the heat release rate of a fire to the enthalpy of the gas flow rate circulating through the fire. Accidental and natural fires usually have a Q* of about 1. Flat spread fires such as forest fires have Q* ≪ 1. Fires originating from pressured vessels or pipes, with additional momentum caused by pressure, have Q* ≫ 1.
Fluid mechanics
Chemistry
Relative density – density relative to water
Relative atomic mass, Standard atomic weight
Equilibrium constant (which is sometimes dimensionless)
Other fields
Cost of transport is the efficiency in moving from one place to another
Elasticity is the measurement of the proportional change of an economic variable in response to a change in another
Basic reproduction number is a dimensionless ratio used in epidemiology to quantify the transmissibility of an infection.
See also
List of dimensionless quantities
Arbitrary unit
Dimensional analysis
Normalization (statistics) and standardized moment, the analogous concepts in statistics
Orders of magnitude (numbers)
Similitude (model)
References
External links | Dimensionless quantity | Physics,Mathematics | 1,869 |
29,736,990 | https://en.wikipedia.org/wiki/Criticism%20of%20Amazon | Amazon has been criticized on many issues, including anti-competitive business practices, its treatment of workers, offering counterfeit or plagiarized products, objectionable content of its books, and its tax and subsidy deals with governments.
Anti-competitive practices
One-click patent
The company has been criticized for its alleged use of patents as a competitive hindrance; its "1-Click patent" may be the best-known example. Amazon's use of the 1-click patent against competitor Barnes & Noble's website led the Free Software Foundation to announce a boycott of Amazon in December 1999, which ended in September 2002. On February 22, 2000, the company patented an Internet-based customer referral system known as an affiliate program. Industry leaders Tim O'Reilly and Charlie Jackson spoke out against the patents and O'Reilly published an open letter to Amazon CEO Jeff Bezos, petitioning Bezos to "avoid any attempts to limit the further development of Internet commerce". O'Reilly collected 10,000 signatures, and Bezos responded with an open letter. The protest ended with O'Reilly and Bezos visiting Washington, D.C. to lobby for patent reform. The company received a patent, "Method and system for conducting a discussion relating to an item on Internet discussion boards", on February 25, 2003. On May 12, 2006, the USPTO ordered a re-examination of the 1-Click patent based on a request by actor Peter Calveley, who cited an earlier e-commerce patent and the Digicash electronic cash system.
Canadian site
Amazon has a Canadian website in English and French. Until a March 2010 ruling, however, it was prevented from operating any headquarters, servers, fulfillment centers or call centers in Canada by that country's legal restrictions on foreign-owned booksellers. Amazon's Canadian site originates in the United States, and Amazon has an agreement with Canada Post to handle distribution in Canada and for the use of the crown corporation's Mississauga, Ontario, shipping facility. The launch of Amazon.ca generated controversy in Canada. In 2002, the Canadian Booksellers Association and Indigo Books and Music sought a court ruling that Amazon's partnership with Canada Post represented an attempt to circumvent Canadian law. The litigation was dropped in 2004.
In January 2017, doormats with the Indian flag were offered on the Amazon Canada website. Use of the Indian flag in this way is considered offensive to the Indian community and a violation of the Flag Code of India. Indian Minister of External Affairs Sushma Swaraj threatened a visa embargo for Amazon officials if Amazon did not issue an unconditional apology and withdraw all such products. According to deputy commissioner for deceptive marketing practices Josephine Palumbo, Amazon.ca was required by the Canadian Competition Bureau to pay a $1 million penalty and $100,000 in costs for failing to provide "truth in advertising". The fine was levied because some products on Amazon.ca had an artificially-high list price, making a lower selling price appear attractive and giving the company an unfair competitive edge over other retailers. This is a frequent practice among some retailers, and the fine was intended to "send a clear message [to the industry] that unsubstantiated savings claims will not be tolerated". The bureau indicated that Amazon has made changes to ensure that its regular prices are more accurate.
BookSurge
Sales representatives of Amazon's BookSurge division began contacting publishers of print on demand (POD) titles in March 2008 to inform them that for Amazon to continue selling their POD books, they must sign agreements with Amazon's BookSurge POD company. Publishers were told that eventually, the only POD titles Amazon would sell would be those printed by BookSurge. Some publishers felt that this ultimatum was monopolistic, and questioned the ethics of the move and its legality under anti-trust law.
Direct selling
In 2008, Amazon UK was criticized for attempting to prevent publishers from selling directly at a discount from their own websites. Amazon argued that it should be able to pay publishers based on the lower prices on their websites, rather than on the recommended retail price (RRP). Amazon UK was also criticized that year by the British publishing community after it withdrew from sale key titles published by Hachette Livre UK, possibly to pressure Hachette into providing discounts described as unreasonable. Curtis Brown managing director Jonathan Lloyd said that "publishers, authors, and agents are 100% behind [Hachette]. Someone has to draw a line in the sand. Publishers have given 1% a year away to retailers, so where does it stop? Using authors as a financial football is disgraceful." In August 2013, Amazon agreed to end its price-parity policy for marketplace sellers in the European Union in response to investigations by the UK Office of Fair Trading and Germany's Federal Cartel Office.
Price control
After the announcement of the Apple iPad on January 27, 2010, Macmillan Publishers began a pricing dispute with Amazon about electronic publications. Macmillan asked Amazon to accept a new pricing scheme it had worked out with Apple, raising the price of e-books from $9.99 to $15. Amazon responded by pulling all Macmillan books (electronic and physical) from its website, although affiliates selling the books were still listed. On January 31, 2010, Amazon "capitulated" to Macmillan's pricing request.
In 2014, Amazon and Hachette became involved in a dispute about agency pricing, when an agent (such as Hachette) determines the price of a book; normally, Amazon dictates the discount level of a book. High-profile authors became involved; hundreds of writers, including Stephen King and John Grisham, signed a petition: "We encourage Amazon in the strongest possible terms to stop harming the livelihood of the authors on whom it has built its business. None of us, neither readers nor authors, benefit when books are taken hostage." Author Ursula K. Le Guin said about Amazon's practice of making Hachette books more difficult to buy on its site, "We're talking about censorship: deliberately making a book hard or impossible to get, 'disappearing' an author." Falling sales of Hachette books on Amazon indicated that its policies probably deterred customers. On August 11, 2014, Amazon removed the option to pre-order Captain America: The Winter Soldier to control the online pricing of Disney films; the company had used similar tactics with Warner Bros. The conflict was resolved in late 2014, with neither side making concessions. Amazon again began to block pre-orders of Disney films in February 2017, just before Moana and Rogue One were due to be released for the home market.
The law firm Hagens Berman filed a lawsuit in the New York district court in January 2021, saying that Amazon colluded with leading publishers to keep e-book prices artificially high. Connecticut announced that it was investigating Amazon for potential anti-competitive behavior in its marketing of e-books.
Removal of competitors' products
On October 1, 2015, Amazon announced that Apple TV and Google Chromecast products were banned from sale by all merchants effective October 29 of that year. The company said that this was to prevent "customer confusion", since those devices did not support Amazon Prime Video. The move was criticized as an attempt to suppress products competing with Amazon Fire TV products.
In May 2017, it was reported that Apple and Amazon were nearing an agreement to offer Prime Video on Apple TV and allow the product to return to the retailer. Prime Video launched on Apple TV on December 6 of that year, with Amazon beginning to sell Apple TVs again shortly thereafter.
Amazon is known to remove listings from third-party sellers for trivial policy violations when their products compete with Amazon's house brands. To compete for product placement where Amazon's own brands are featured prominently, third-party sellers often enroll in Amazon's Prime program; this increases their costs and shrinks their profit margins.
Amazon has suppressed other Google products, including Google Home (which competes with Amazon Echo), Pixel phones, and products from Google subsidiary Nest Labs (despite the Nest Learning Thermostat's integration support for Amazon Alexa). Google announced on December 6, 2017, that it would block YouTube from the Amazon Echo Show and Amazon Fire TV products. In December 2017, Amazon said that it intended to begin offering Chromecast again. Nest said that it would no longer offer stock to Amazon until the company committed to offering its entire product line.
In April 2019, Amazon announced that it would add Chromecast support to its Prime Video mobile app and release its Android TV app more widely; Google announced that it would, in return, restore access to YouTube on Fire TV (but not the Echo Show). Prime Video for Chromecast and YouTube for Fire TV were both released July 9, 2019. In December 2019, after the acquisition of Honey (a browser extension which applies online coupons to online stores) by PayPal, Amazon began to warn users that Honey was a security risk.
Apple partnership
In November 2018, Amazon reached an agreement with Apple Inc. to sell selected products through the company, selected Apple authorized resellers, and vendors who meet specific criteria. As a result of this partnership, only Apple authorized resellers and vendors who purchase $2.5 million in refurbished stock from Apple every 90 days (via the Amazon Renewed program) may sell Apple products on Amazon. The partnership was criticized by independent resellers, who believe that it restricts their ability to sell refurbished Apple products on Amazon at low cost. In August 2019, The Verge reported that Amazon was being investigated by the FTC because of the deal.
Marketplace participant and owner
Amazon owns a dominant marketplace and is a retail seller in that marketplace. The company uses data from the marketplace which is unavailable to other retailers in that marketplace to determine which products to produce in-house and at what price point. Amazon markets products under AmazonBasics, Lark & Ro, and other private-label brands. U.S. presidential candidate Elizabeth Warren proposed forcing Amazon to sell AmazonBasics and Whole Foods Market, where Amazon competes against other sellers as a brick-and-mortar retailer.
Tim O'Reilly, comparing Ingram's business with Amazon's, noted that Amazon's focus on the customer debilitates the retail ecosystem (which includes sellers, manufacturers, and its own employees); Ingram, by contrast, sought to innovate and build on behalf of all stakeholders in the marketplace it operates. According to O'Reilly, Amazon's behavior is driven by its need for growth. Third-party sellers have criticized Amazon's rent-seeking behavior, which includes increasing the cost of doing business on its platform, abusing its dominant market position to manipulate pricing, copying popular products from third-party retailers, and unjustifiably promoting its own brands.
In October 2021, citing leaked internal documents, Reuters reported that Amazon harvested and studied data about its sellers' sales and used the data to identify lucrative markets and launch Amazon replacement products in India. The data included information about returns, clothing sizes, and the number of product views on its website. Rival sales figures are not available to Amazon's sellers. The company also tweaked search results to favor Amazon's private-label products. The strategy's impact reached well beyond India; hundreds of Solimo-branded household items are available in the US. One casualty is the clothing brand John Miller, owned by India's Kishore Biyani. In October 2022, a £900 million class-action lawsuit was filed in the United Kingdom against Amazon over a buy box on its website which "favours products sold by Amazon itself, or by retailers who pay Amazon for handling their logistics".
Antitrust complaints
The European Commission began an investigation in June 2015 of clauses in Amazon's e-book distribution agreements, which may have breached EU antitrust law by making it harder for other e-book platforms to compete. The investigation ended in May 2017, when the commission rendered binding Amazon's commitments not to use or enforce the clauses.
In July 2019 and November 2020, the European Commission began in-depth investigations of Amazon's use of marketplace-seller data and its possible preferential treatment of its own retail offers and of marketplace sellers which use Amazon's logistics and delivery services. It was charged that Amazon relied on nonpublic data from third-party sellers to benefit its retail business, violating competition law in the European Economic Area. On June 11, 2020, the European Union announced that it would prosecute Amazon for its treatment of third-party e-commerce sellers; California began an investigation around the same time. In December 2021, the Competition Commission of India suspended its approval of Amazon's 2019 deal relating to Future Retail and levied a ₹200 crore penalty, having learned from internal Amazon emails that the company intended the acquisition solely to take advantage of relaxed foreign-investment rules. Amazon appealed the suspension; the CCI defended it in March 2022, citing misrepresentation on Amazon's part.
In July 2020, Amazon, Apple, Google and Meta were accused of using excessive power and anti-competitive strategies to quash potential competitors. Their CEOs appeared in a July 29 teleconference before the U.S. House Antitrust Subcommittee. In October 2020, the subcommittee released a report accusing Amazon of using its monopoly position in e-commerce to compete unfairly with sellers on its platform. In a March 2022 letter to bipartisan leaders of the Senate Judiciary Committee, the Justice Department endorsed legislation forbidding large digital platforms from disadvantaging competitors' products and services: "The [Justice] Department views the rise of dominant platforms as presenting a threat to open markets and competition, with risks for consumers, businesses, innovation, resiliency, global competitiveness, and our democracy". The Attorney General of California sued Amazon in September 2022, following an investigation the state had begun in 2020, alleging that its contracts with third-party sellers and wholesalers inflated prices and stifled competition; merchants are coerced into contracts which prevent them from offering their products at lower prices on other websites.
Stagnation of subsidiaries
Amazon's practice of buying up companies has reportedly led to stagnation and a lack of development or innovation in its subsidiaries, particularly Goodreads; an Input Magazine article called the platform "ancient and terrible", saying that it resembles an early-2000s digital library with no developments to accommodate the evolution of book-metadata acquisition or online reader activity. New Statesman also criticized Goodreads, calling it "stagnated" and a "monopoly on the discussion of new books": "[W]hat should be a cozy, pleasant corner of the internet has become a monster."
Effects on small businesses
Due to its size and economies of scale, Amazon can undercut small local shopkeepers. Stacy Mitchell and Olivia Lavecchia, researchers with the Institute for Local Self-Reliance, say that this has caused many local, small-scale shopkeepers to close in a number of cities and towns in the United States.
Products and services
Fraudulent book listings
Jane Friedman discovered six listings of books fraudulently using her name on Amazon and Goodreads; the companies resisted removing the fraudulent titles until her complaints, aired in a blog post titled "I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires)", went viral on social media.
Animal cruelty
Amazon had carried two cockfighting magazines and two dog-fighting videos. The Humane Society of the United States (HSUS), saying that their sale violated federal law, sued the company. An August 2007 campaign to boycott Amazon received attention in the wake of a dog-fighting case involving NFL quarterback Michael Vick. Marburger Publishing agreed to settle with the Humane Society in May 2008 by asking Amazon to stop selling its magazine, The Game Cock; The Feathered Warrior, the second magazine named in the lawsuit, remained available.
Mercy for Animals has said that Amazon permits sales of foie gras, which has been banned in California and several countries, on its website. As a result, animal-welfare groups began a movement known as "Amazon Cruelty".
Items prohibited by UK law
In December 2015, The Guardian published an exposé of Amazon sales which violated British law. Items included a pepper-spray gun (sold by amazon.co.uk), acid, stun guns and a concealed cutting weapon (sold by Amazon Marketplace vendors); all are considered prohibited weapons in the UK. The Guardian also released a video describing some of the weapons. Likewise, brass catchers, illegal in New South Wales, are sold by Amazon.com.au.
Antisemitic content
A January 2008 article in the Czech weekly Tyden called attention to shirts sold by Amazon which were emblazoned with "I Love Heinrich Himmler" and "I Love Reinhard Heydrich". Amazon spokesperson Patricia Smith told Tyden, "Our catalog contains millions of items. With such a large number, unexpected merchandise may get onto the Web." Smith also told Tyden that the company did not intend to stop working with Direct Collection, producer of the T-shirts. After pressure from the World Jewish Congress (WJC), Amazon announced that it had removed from its website the Himmler and Heydrich T-shirts and "I Love Hitler" T-shirts sold for women and children. After the WJC intervention, other items (including a Hitler Youth Knife emblazoned with the Nazi slogan "Blood and Honor" and a 1933 German SS Officer Dagger distributed by Knife-Kingdom) were also removed from Amazon.com.
An October 2013 report in the British online magazine The Kernel said that Amazon.com was selling books defending Holocaust denial, shipping them to customers in countries where Holocaust denial is prohibited by law. That month, the WJC called on Amazon CEO Jeff Bezos to remove books denying the Holocaust and promoting antisemitism, white supremacy, racism or sexism. "No one should profit from the sale of such vile and offensive hate literature. Many Holocaust survivors are deeply offended by the fact that the world's largest online retailer is making money from selling such material," WJC executive vice-president Robert Singer wrote in a letter to Bezos.
Although Nazi paraphernalia was still listed on Amazon in the US and Canada in 2016, the WJC announced on March 9, 2017, that Amazon had complied with it and other Jewish organizations by removing from sale the cited Holocaust-denial works. The WJC offered assistance in identifying Holocaust-denial works among Amazon's offerings in the future.
The Central Council of Jews in Germany denounced Amazon in July 2019 for continuing to sell items glorifying the Nazis. The company was caught in December of that year selling Auschwitz-themed Christmas-tree ornaments on its platform, printed on demand with stock images of the concentration camp from a third-party seller; Amazon eventually removed the ornaments from all its platforms. Auschwitz Memorial, which maintains the concentration camp for historical and educational purposes, said that it had found a "disturbing online product from another seller – a computer mousepad bearing the image of a freight train used for deporting people to the concentration camps." Wired journalist Louise Matsakis called the Holocaust-themed products "the byproduct of an increasingly automated e-commerce landscape", noting that the items were print-on-demand and Amazon became aware of them after offended customers reported their sale.
Amazon removed all new and used print and digital copies of The Turner Diaries (an antisemitic and racist dystopian novel) in late 2020 from its bookselling platform, including its AbeBooks and Book Depository subsidiaries, effectively removing it from the digital bookselling market. The company cited the book's connection with the QAnon movement as the reason, and had already purged a number of self-published and small-press titles connected with QAnon from its platform. Amazon subsidiary Goodreads purged the metadata from all editions of The Turner Diaries, replacing the author and title fields with "NOT A BOOK" (capitalization intended), a designation normally used by the platform to weed non-book items with ISBNs, as well as plagiarized titles, from its catalogue.
Amazon began offering the documentary film Hebrews to Negroes: Wake Up Black America, which had been endorsed by Kyrie Irving, through its Prime streaming service in 2022. The film contains a number of conspiracy theories, including Holocaust denial and the theory that European Jews were responsible for the Atlantic slave trade. Variety defended Amazon: "The radio silence [of Amazon] shouldn't be misinterpreted as indifference. To the contrary, insiders say how to properly handle "Hebrews" [the film] has been the subject of endless debates at numerous meetings, some of which have involved the top brass at Amazon ... [W]hile the company has a long and arguably inconsistent track record when it comes to policing controversial content on its own platform, "Hebrews" has been particularly challenging given how high-profile the Irving saga became. Few execs from the company's headquarters in Seattle or its studio business in Culver City have been spared an earful from those wondering why the company is selling such vile material on its website." CEO Andy Jassy said that the film had to remain on Amazon even if its viewpoint was objectionable. Stephen A. Smith criticized former Amazon CEO Jeff Bezos for the decision: "Jeff Bezos, you're supposed to be a better man than that. Get rid of that. Get that off your platform, please, since all of this noise is being made."
Pedophile guide
On November 10, 2010, a controversy arose about the marketing by Amazon of an e-book by Phillip R. Greaves entitled The Pedophile's Guide to Love and Pleasure: A Child-lover's Code of Conduct. Readers threatened to boycott Amazon for selling the book, which was described by critics as a "pedophile guide". Amazon initially defended its action, saying that it "believes it is censorship not to sell certain books simply because we or others believe their message is objectionable" and "supported the right of every individual to make their own purchasing decisions". The company later removed the book. According to the San Francisco Chronicle, Amazon "defended the book, then removed it, then reinstated it, and then removed it".
American Booksellers for Free Expression president Christopher Finan said that Amazon had the right to sell the book, since it is not child pornography and not legally obscene because it contains no pictures. Enough Is Enough (a child-safety organization), however, said that the book should be removed because it "lends the impression that child abuse is normal". People for the Ethical Treatment of Animals (PETA), citing the removal of The Pedophile's Guide from Amazon, urged the website to also remove books on dog-fighting from its catalogue.
Greaves was arrested on December 20, 2010, at his Pueblo, Colorado home on a felony warrant issued by the Polk County Sheriff's Office in Lakeland, Florida. Detectives from the county's Internet Crimes Division ordered a signed copy of Greaves' book and had it shipped to the agency's jurisdiction, where it violated state obscenity laws. According to Sheriff Grady Judd, Greaves violated local laws prohibiting the distribution of "obscene material depicting minors engaged in harmful conduct" (a third-degree felony). Greaves pleaded no contest to the charges and was released on probation, with his previous jail time counting as time served.
Counterfeit products
On October 16, 2016, Apple filed a trademark-infringement case against Mobile Star LLC for selling counterfeit Apple products to Amazon. In the suit, Apple provided evidence that Amazon was selling counterfeit Apple products and advertising them as genuine. Apple had a 90-percent success rate in identifying counterfeit products, which Amazon sold without determining if they were genuine. Mobile Star LLC settled with Apple for an undisclosed amount on April 27, 2017.
The sale of counterfeit products by Amazon has attracted widespread notice, with purchases marked as fulfilled by third parties and those shipped directly from Amazon warehouses found to be counterfeit. This has included products sold directly by Amazon, marked as "ships from and sold by Amazon.com". Counterfeit charging cables sold on Amazon as purported Apple products have been found to be a fire hazard. Selling Apple products is now a restricted category on Amazon, meaning resellers have to get approval from the brand to sell those products on the site.
Counterfeits have included a variety of products, from big-ticket items to tweezers, gloves, and umbrellas. More recently, this has spread to Amazon's newer grocery services. Counterfeiting was reportedly a problem for artists and small businesses, whose products were rapidly copied for sale on the site. Companies such as Birkenstock and Nike have pulled their products from Amazon.
Seller accounts on Amazon are set by default to use "commingled inventory", which encourages counterfeiting. The goods a seller sends to Amazon are mixed with those of the producer of the product and those of all other sellers supplying what is supposed to be the same product.
In 2023, Amazon said that it had spent more than $1.2 billion and employed more than 15,000 people dedicated to protecting customers from counterfeiting, fraud, and other abuse. Between 2020 and 2023, the Amazon Counterfeit Crimes Unit pursued more than 21,000 bad actors through litigation and criminal referrals to law enforcement. The company posts updated numbers in its annual Brand Protection Report.
In June 2019, BuzzFeed reported that some products identified on the site as "Amazon's choice" were low quality and had a history of customer complaints and evidence of product-review manipulation. The Wall Street Journal reported in August 2019 that it had found more than 4,000 items for sale on Amazon's site that had been declared unsafe by federal agencies, had misleading labels, or had been banned by federal regulators. In the wake of the WSJ investigation, three U.S. senators (Richard Blumenthal, Ed Markey, and Bob Menendez) sent an open letter to Bezos demanding action against the sale of unsafe items on the site: "Unquestionably, Amazon is falling short of its commitment to keeping safe those consumers who use its massive platform." The letter questioned the company's practices and gave Bezos a September 29, 2019, deadline to respond: "We call on you to immediately remove from the platform all the problematic products examined in the recent WSJ report; explain how you are going about this process; conduct a sweeping internal investigation of your enforcement and consumer safety policies; and institute changes that will continue to keep unsafe products off your platform." Earlier that month, Blumenthal and Menendez had sent Bezos a letter about the BuzzFeed report. In December 2019, The Wall Street Journal reported that people were retrieving trash from dumpsters and selling it on Amazon as new. The reporters learned that it was easy for a seller to set up an account and sell cleaned-up junk as new. In addition to trash, sellers were obtaining inventory from clearance bins, thrift stores, and pawn shops.
In August 2020, an appeals court in California ruled that Amazon could be held liable for unsafe products sold on its website. A Californian bought a replacement laptop battery which caught fire, giving her third-degree burns.
Media
American copyright lobbyists have accused Amazon of facilitating the sale of unlicensed CDs and DVDs, particularly in the Chinese market. The Chinese government responded by announcing plans to increase regulation of Amazon, Apple and Taobao in relation to Internet copyright infringement. Amazon has shut down third-party distributors due to pressure from the National Copyright Administration of China (NCAC).
Amazon has been caught selling counterfeit books, which mimic an authentic edition of a published work but are not authorized for publication by the copyright holder; one example is The Sanford Guide to Antimicrobial Therapy, a non-fiction medical book. According to David Streitfeld of The New York Times, "Amazon takes a hands-off approach to what goes on in its bookstore, never checking the authenticity, much less the quality, of what it sells. It does not oversee the sellers who have flocked to its site in any organized way. That has resulted in a kind of lawlessness. Publishers, writers and groups such as the Authors Guild said counterfeiting of books on Amazon had surged. The company has been reactive rather than proactive in dealing with the issue, often taking action only when a buyer complains. Many times, they added, there is nowhere to appeal and their only recourse is to integrate even more closely with Amazon."
This was not the first instance of a counterfeit book appearing on Amazon. According to the New York Post, the problem also encompasses plagiarized books; author Martin Kleppmann said that Amazon was selling pirated copies of his textbook with "pages overlapping" and bleeding ink, making the book unreadable and sparking negative reviews. In 2019, InterVarsity Press announced that counterfeiters had sold $240,000 worth of fake copies of Tish Harrison Warren's Liturgy of the Ordinary on Amazon—as many as 20,000 copies, compared to an estimated 121,000 legitimate copies sold by IVP to that point.
According to a 2019 Vox article, Amazon benefits from the sale of counterfeit books. The article cited a small-press publisher forced to partner with Amazon to return legitimate books to the market: "Bill Pollock, founder of the San Francisco-based programming and science guide publisher No Starch, told the New York Times that this solution was just putting even more onus on rights holders to protect themselves: 'Why should we be responsible for policing Amazon for fakes? That's their job'. No Starch said that it was spending '$3,000 a month and rising' to keep its search placement higher than the people who are copying it."
Third-party marketplace
A 2019 Wall Street Journal (WSJ) investigation found third-party retailers selling over 4,000 unsafe, banned, or deceptively-labeled products on Amazon.com. When customers sued Amazon for unsafe products sold by third-party sellers on Amazon.com, Amazon's legal defense has been that it is not the seller and cannot be held liable. Wirecutter reported in 2020 that over a several-month period, they "were able to purchase items through Amazon Prime that were either confirmed counterfeits, lookalikes unsafe for use, or otherwise misrepresented." CNBC reported in 2019 that Amazon third-party sellers regularly sold expired food products, and the size of Amazon Marketplace has made policing the platform difficult for the company.
By 2020, third-party sellers accounted for 54 percent of sales on Amazon platforms. In 2019, Amazon earned $54 billion in fees from third-party retailers for seller services.
Plagiarism in Kindle Direct Publishing
Nora Roberts, an American romance author who has had a number of her titles plagiarized and re-published through Kindle Direct Publishing, said about Amazon's self-publishing branch: "I'm getting one hell of an education on the sick, greedy, opportunistic culture that games Amazon's absurdly weak system. And everything I learn enrages me ... this culture, this ugly underbelly of legitimate self-publishing is all about content. More, more, more, fast, fast, fast!". Roberts said during an interview with The Guardian that she would sue her unnamed plagiarists. In 2019, the Authors Guild said that "the way KDP and KU [Kindle Unlimited] are set up, which attracts scammers who take advantage of weaknesses in the system to repackage other authors' books and anthologies ... they pass them off as 'new' works". Goodreads and Google Books often retain metadata for counterfeit and plagiarized titles after Amazon removes them from its sales platforms, which leads to improper author attribution, ambiguity and reader confusion.
Amazon maintains that it checks for plagiarism by monitoring user accounts and checking uploaded files, although critics say that Amazon's system is not robust enough to handle issues such as identity theft, minors accessing the platform, or internet anonymity. The Urban Writers said that "Amazon is extremely sensitive about plagiarized work and, if flagged, your account could be deactivated."
Other writers and reports have been more critical of Amazon's response to plagiarism, noting a number of cases where Amazon did nothing to stop one or more plagiarists from uploading copyrighted files and claiming them as their own, claiming to be the author themselves, uploading stolen information from an author (such as tax numbers or a home address) to falsely claim their identity, claiming public domain works under their own name, and making up names to avoid legal consequences. CNET writer Michelle Starr described a 2012 case where "sci-fi authors C.H. Cherryh and John Scalzi issued Amazon with DMCA takedown notices for books of theirs that one Ibnul Jaif Farabi had uploaded, with titles slightly changed, under his own name. He had also done the same thing with works by deceased authors, such as Robert Heinlein and Arthur C. Clarke, who, of course, are slightly too deceased to notice."
In most cases, Amazon stops publishing (and selling) the titles while retaining metadata on websites such as Goodreads. Rachel Ann Nunes, a writer of Mormon fiction, said in an interview for The Atlantic that emotional stress and reputation damage were even worse than the financial implications of her books being plagiarized: "I felt like I was being attacked ... and when I went on social media, I didn’t know what would be waiting for me." Nunes said that she had been unable to sleep, gained weight, found herself unable to enjoy writing any more, and paid thousands of dollars in legal fees for attempting to catch her plagiarist, who had a number of aliases and uploaded false information to Amazon's databases.
According to Jonathan Bailey of Plagiarism Today, "Amazon doesn't do much to vet the books it publishes. Plagiarism isn't even mentioned in its KDP help files. What this means is that it's trivial to publish almost anything you want regardless of the quality of the work or, in these cases, how original it is. In fact, many complain that Amazon fails to vet works for even simple issues such as formatting and layout. Though Amazon will, sometimes, remove works that violates their terms of service after they get complaints, they're happy to sell the books and reap the profits until they get such a notice. And, from Amazon's perspective, this is completely legal. They are protected by the Digital Millennium Copyright Act (DMCA) as well as other laws, in particular Section 230 of the Communications Decency Act, that basically mean they are under no obligation to vet or check the works they publish. They are legally free to produce and sell books, physical and digital, regardless of whether they are plagiarized, copyright infringing or otherwise illegal."
Vox journalist Kaitlyn Tiffany investigated a bizarre subset of self-published "celebrity biographies" on Amazon in 2019, published through Kindle Direct Publishing under the pen name "Matt Green", which contained plagiarized and unauthorized material, often with typos and grammatical errors. Tiffany defended Amazon's approach to content control, however: "Amazon has already quashed quite a few e-book scams. At first, users could download public domain books from sources like Project Gutenberg, upload them, and sell them to readers who didn't know better. A policy change in 2011 put an end to that. In 2012, Gawker's Max Read came across another good one: hundreds of thousands of books that were just compilations of Wikipedia articles with titles like 'Celebrities with Big Dicks'. One author he found was just publishing random data sets like 'The 2007–2012 Outlook for Tufted Washable Scatter Rugs, Bathmats and Sets That Measure 6-Feet by 9-Feet or Smaller in India'". Tiffany wrote that although Amazon is known for rampant scams in its self-publishing subsidiaries, the company tries its best to stop scams when it becomes aware of them; outright plagiarism and other illegal content is difficult to detect. She cited the use of pen names as a problem and agreed with Jonathan Bailey that the Digital Millennium Copyright Act shields Amazon too much from liability for plagiarism or illegal material in published books.
Sale of Wikipedia content as books
The German-speaking press and blogosphere have criticized Amazon for selling tens of thousands of print-on-demand books which reproduce Wikipedia articles. The books are produced by the American company Books LLC and by three Mauritian subsidiaries of the German publisher VDM: Alphascript Publishing, Betascript Publishing and Fastbook Publishing. Amazon did not acknowledge the issue, and did not respond to requests by some customers to remove the titles from its catalog. The collaboration between amazon.com and VDM began in 2007.
Removal of books
Amazon removed a book in 2014, described by critics as a "guide to rape", which claimed to reveal how women could be pressured into accepting sexual advances. The company later removed a book by anti-Muslim activist Tommy Robinson.
Its 2015 listing of A MAD World Order, a self-published e-book by Canadian serial killer and rapist Paul Bernardo (who apparently accessed Amazon's self-publishing services through a prison computer), triggered a backlash. Amazon quietly removed the e-book from all its platforms; no print version was ever published, although a metadata record still exists on Goodreads.
The company temporarily banned a book promoting non-mainstream claims about the COVID-19 pandemic and books which promoted COVID-19 cures not sanctioned by US government agencies. In 2021, Amazon removed listings for a 2018 book by conservative philosopher Ryan T. Anderson because it criticized legal protections for transgender people.
Kindle content removal
The New York Times reported in July 2009 that amazon.com had deleted all customer copies of books published in violation of US copyright law by MobileReference, including Nineteen Eighty-Four and Animal Farm, from users' Kindles. The action was taken without prior notification or permission from individual users. Customers received a refund of the purchase price and, later, an offer of an Amazon gift certificate or a check for $30. The e-books had been published by MobileReference on Mobipocket for sale in Australia only, because the works had become public domain in that country. When the e-books were automatically uploaded to Amazon by Mobipocket, however, the territorial restriction was not honored and the books were sold in countries (such as the United States) where the copyright term had not expired.
Author Selena Kitt was a victim of Amazon content removal in December 2010; some of her fiction described incest. Amazon said, "Due to a technical issue, for a short window of time three books were temporarily unavailable for re-download by customers who had previously purchased them. When this was brought to our attention, we fixed the problem ..." in an attempt to defuse user complaints about the deletions.
Late in 2013, the online blog The Kernel published several articles about "an epidemic of filth" on Amazon and other e-book storefronts. Amazon then blocked books dealing with incest, bestiality, child pornography, virginity, monsters, and young sex.
Removal of LGBT content
In April 2009, it was reported that some lesbian, gay, bisexual, transgender, feminist, and politically-liberal books were excluded from Amazon's sales rankings. Books and other media were flagged as "adult content", including children's books, self-help books, non-fiction, and non-explicit fiction. As a result, works by E. M. Forster, Gore Vidal, Jeanette Winterson and D. H. Lawrence were un-ranked. The change was first reported on the blog of author Mark R. Probst, who posted an e-mail from Amazon describing a policy of de-ranking "adult" material.
Amazon later said that it had no policy of de-ranking lesbian, gay, bisexual and transgender material, blaming the change first on a "glitch" and then on "an embarrassing and ham-fisted cataloging error" affecting 57,310 books; a hacker claimed responsibility for the metadata loss.
In June 2022, Amazon complied with a UAE government demand to restrict LGBTQ products and search results in the Emirates. Searches with keywords such as "pride", "lgbt", "transgender flag" and "lgbt iphone cases" yielded "no results" in the country. Books removed included Nagata Kabi's My Lesbian Experience With Loneliness, Roxane Gay's Bad Feminist and Maia Kobabe's Gender Queer: A Memoir. Amazon said that it had to "comply with the local laws and regulations of the countries in which we operate", but remained committed to protecting the rights of LGBTQ people.
Medical misinformation
Autism
Amazon has sold a number of items, primarily self-published books, with misinformation and pseudoscience about autism spectrum disorder and Asperger's syndrome. According to Wired journalist Matt Reynolds, "[T]o test the system, we uploaded a fake Kindle book titled How To Cure Autism: A guide to using chlorine dioxide to cure autism. The listing was approved within two hours. When creating the book, Amazon's Kindle publishing service suggested a stock cover image that made it appear as though the book had been approved by the FDA." Reynolds wrote that a number of other real Kindle titles promoting bleach cures and other misinformation were already available on Amazon.
Amazon later pulled self-published titles promoting autism-related anti-vaccination theories from its sales platforms, which Lindsey Bever of The Washington Post said bordered on censorship of legal reading material. News outlets, including NBC and CBS, reported that Amazon was removing the books. Science Alert later reported that Amazon was still selling autism-misinformation books. Misinformation about COVID-19 began appearing on Amazon in 2021, and Senator Elizabeth Warren questioned Amazon CEO Andy Jassy about the company's search algorithms promoting misinformation.
Vaccines
Anti-vaccination and non-evidence-based cancer "cures" have appeared in Amazon books and videos, possibly due to positive reviews posted by supporters of untested methods or gaming of algorithms by truthers. Wired found that Amazon Prime Video contained "pseudoscientific documentaries laden with conspiracy theories and pointing viewers towards unproven treatments".
U.S. Rep. Adam Schiff expressed concern that Amazon was "recommending products and content that discourage parents from vaccinating their children", and the company removed five anti-vaccination documentaries. Amazon also removed 12 books which claimed that bleach could cure conditions which included malaria and childhood autism. This followed an NBC News report about parents who used bleach in an attempt to reverse their children's autism.
AWS outages
Amazon Web Services (AWS), the company's cloud-computing branch, is used by a large number of major Western corporations and by services in sectors such as healthcare, media, food delivery and government. A 2021 series of outages caused the temporary shutdown of many of these platforms, including Amazon subsidiaries, Netflix, Tinder, McDonald's, Sweetgreen, Disney+ and Roku. Some colleges and universities using AWS had to postpone scheduled tests and assignment due dates because of the outages. Amazon delivery drivers could not properly deliver packages, and Amazon tech products such as its Ring doorbell and Alexa stopped working. The locations of the affected AWS host servers are not known to the general public, so hacking was not suspected. Journalists Aaron Gregg and Drew Harwell criticized the outages: "[T]he disruptions affect millions of people on an increasingly interconnected Web: we are putting more eggs into fewer and fewer baskets. More eggs get broken that way." The cause of the outages was never explained; to Insider, Amazon called them "an AWS service event that affected Amazon Operations and other customers".
Matt Walsh books
Conservative political commentator Matt Walsh has published books considered transphobic, including Johnny the Walrus (a children's allegory about a boy whose parents surgically transition him into a walrus after catching him pretending to be one). Some of the books became bestsellers on Amazon, upsetting the company's employees. Amazon held a discussion for offended employees; others held a "die-in" protest, saying that media transphobia contributed to hate speech, suicide by trans youth, and misconceptions about trans people. Walsh was amused by the reaction of the Amazon employees, noting that Johnny the Walrus had been listed on Amazon as the company's bestselling LGBT book. The book was later moved to a political category, and some Amazon employees said that books promoting transphobia should be banned from the company's platforms.
Treatment of workers
Amazon has been criticized for the quality of its working environment and treatment of its workforce. A group known as The FACE (Former And Current Employees) of Amazon has used social media to criticize the company and accuse it of providing poor working conditions.
Employee mismanagement
Amazon has been accused of mistakenly firing employees on medical leave as no-shows, not fixing an inaccuracy in its payroll systems which resulted in some of its blue- and white-collar employees being underpaid for months, and violating labor law by denying unpaid leave.
Opposition to trade unions
Amazon has opposed efforts by trade unions to organize in the United States and the United Kingdom.
In 2001, 850 employees in Seattle were laid off by Amazon after a unionization drive. The Washington Alliance of Technology Workers (WashTech) accused the company of violating labor law, saying that Amazon managers subjected workers to intimidation and propaganda. Amazon denied any link between the unionization effort and the layoffs. That year, Amazon.co.uk hired The Burke Group (a US management consultancy) to help defeat a campaign by the Graphical, Paper and Media Union (GPMU, now part of Unite the Union) to achieve recognition at the Milton Keynes distribution depot. It was alleged that the company victimized or sacked four union members during the 2001 recognition drive and held a series of captive-audience meetings with employees.
In July 2015, the International Association of Machinists and Aerospace Workers union filed a complaint with the National Labor Relations Board (NLRB) against Amazon, alleging that the company engaged in unfair labor practices by surveilling, threatening, and “informing employees that it would be futile to vote for union representation” during a union drive in 2014 and 2015 at an Amazon warehouse in Chester, Virginia. In 2016, Amazon settled the complaint with the NLRB, denying any wrongdoing but agreeing to post a list at the warehouse of 22 forms of union-busting behavior that the company promised not to engage in, including threatening workers with the loss of a job or other reprisals if they were union supporters, interrogating workers about the union, or engaging in surveillance of workers while they participated in union activities.
In 2018, Amazon distributed a 45-minute union-busting training video to managers at Whole Foods, which it had acquired in 2017, which said, "We are not anti-union, but we are not neutral either. We do not believe unions are in the best interest of our customers or shareholders or most importantly, our associates." The video encouraged the reporting of "warning signs" of worker organization which included workers using terms such as "living wage", employees "suddenly hanging out together," and workers showing "unusual interest in policies, benefits, employee lists, or other company information."
In early 2020, Amazon internal documents were leaked which said that Whole Foods was using a heat map to track which of its 510 stores had the highest levels of pro-union sentiment. Factors including racial diversity, proximity to other unions, poverty levels in the surrounding community, and calls to the NLRB were named as contributors to "unionization risk." Data collected on the heat map suggested that stores with low racial and ethnic diversity, especially those in poor communities, were more likely to unionize. Amazon had a job listing for an intelligence analyst to identify and tackle threats to Amazon, including unions.
On 4 December 2020, the NLRB found that Amazon had illegally fired two employees in retaliation for efforts to organize workers.
In April 2021, after most workers in Bessemer, Alabama voted against joining the Retail, Wholesale and Department Store Union, the union asked for a hearing with the NLRB to determine whether the company created "an atmosphere of confusion, coercion and/or fear of reprisals" before the union vote. The vote had been met with "anti-union" signs and mandatory "union education meetings", according to Amazon employee Jennifer Bates. During the vote, President Joe Biden made a speech acknowledging the organizing workers in Alabama and called for "no anti-union propaganda". This was followed by an increase in activity by public-relations staff on Twitter, reportedly at the direction of Jeff Bezos. The tone of some posts led one Amazon engineer to initially suspect that the accounts had been hacked. Some of the criticism of unions came from generic, recently-created accounts rather than known Amazon personalities. One account, which was quickly banned, attempted to use the likeness of YouTuber Tyler Toney from Dude Perfect.
In April 2021, The Intercept reported on a planned internal Amazon messaging app which would ban terms such as "union", "living wage", "freedom", "pay raise" or "restrooms".
In April 2022, Amazon workers in Staten Island voted to form Amazon Labor Union, the company's first legally-recognized union. In August of that year, workers in Albany, New York filed a petition for an election in an attempt to become the fourth unionized warehouse at the time.
In May 2024, workers at an Amazon warehouse in St. Peters, Missouri filed an unfair labor practice charge against the company with the NLRB, accusing the company of using "intrusive algorithms" as part of a surveillance program to deter union organizing at the warehouse.
In June 2024, a group of 104 delivery drivers at Amazon's DIL7 facility in Skokie, Illinois, who were employed by contractor Four Star Express Delivery as part of Amazon's Delivery Service Partner subcontractor program and organized with the Teamsters Local 704 union, filed unfair labor practice charges with the NLRB against both Amazon and Four Star Express as a single or joint employer. The charges alleged that their employer terminated employees for organizing a union, surveilled workers attempting to organize, implemented a hiring freeze in response to unionization efforts, suppressed pro-union speech on employee message boards, altered terms of employment in response to union activity, and sought to permanently close the DIL7 facility in response to union organizing.
Wages
During the summer of 2018, Vermont Senator Bernie Sanders criticized Amazon's wages and working conditions in a series of YouTube videos and media appearances. Sanders noted that Amazon had paid no federal income tax the previous year, and solicited stories from Amazon warehouse workers who felt exploited by the company. A story by James Bloodworth described the environment as akin to "a low-security prison", saying that company culture used Orwellian newspeak. Reports cited a finding by New Food Economy that one-third of fulfillment-center workers in Arizona were on the Supplemental Nutrition Assistance Program (SNAP). Responses by Amazon included incentives for employees to tweet positive stories and a statement which called the salary figures used by Sanders "inaccurate and misleading". According to the statement, it was inappropriate of Sanders to refer to SNAP as "food stamps". Sanders and Ro Khanna introduced the Stop Bad Employers by Zeroing Out Subsidies (Stop BEZOS) Act on September 5, 2018, aimed at Amazon and other reported beneficiaries of corporate welfare such as Walmart, McDonald's and Uber. Among the bill's supporters were Tucker Carlson of Fox News and Matt Taibbi, who criticized himself and other journalists for not covering Amazon's contribution to wealth inequality earlier. On October 2, 2018, Amazon announced that its minimum wage for all American employees would be raised to $15 per hour; Sanders congratulated the company for the decision.
In 2023, over 350 workers at Amazon's Coventry warehouse in the United Kingdom walked off the job to demand a pay raise from £10.50 to £15 an hour. Amazon offered a 50p-per-hour increase, which the GMB union rejected.
Working conditions
Former employees, current employees, the media, and politicians have criticized Amazon for poor working conditions. In 2011, it was publicized that workers had to perform tasks in extreme heat at the Breinigsville, Pennsylvania warehouse. Workers became dehydrated and collapsed, but loading-bay doors were not opened to let in fresh air because of concerns about theft. Amazon's initial response was to pay for an ambulance to wait outside on call for overheated employees, but the company eventually installed air conditioning in the warehouse.
Some workers ("pickers"), who travel the building with a trolley and a handheld scanner "picking" customer orders, can walk long distances during a workday; if they fall behind on their quotas, they can be reprimanded. The handheld scanner informs an employee in real time about how quickly they are working, and allows team leaders and area managers to track employee location and idle time. The work has been described as dehumanizing and robotic.
For a February 2013 German television report, journalists Diana Löbl and Peter Onneken conducted a covert investigation at an Amazon distribution center in Bad Hersfeld, Hesse. The report highlighted the behavior of some security guards, employed by a third-party company, who had a neo-Nazi background or dressed in neo-Nazi apparel and intimidated foreign and temporary female workers. The third-party security company involved was delisted by Amazon shortly after the report.
In March 2015, it was reported in The Verge that Amazon would remove 18-month non-compete clauses from its US employment contracts for hourly workers after criticism that it unreasonably prevented such employees from finding other work. Short-term temporary workers must sign an agreement prohibiting them from working at any company where they would "directly or indirectly" support any good or service which competes with Amazon, even if they are fired or laid off. A front-page article in The New York Times profiled several former Amazon employees who described a "bruising" workplace culture in which sick workers or those with personal crises were pushed out or unfairly evaluated. Bezos responded with a Sunday memo to employees disputing the Times account of "shockingly callous management practices" which he said would never be tolerated at the company. To boost employee morale, Amazon announced on November 2, 2015, that it would extend its paid leave for new mothers and fathers. The change, for birth and adoptive parents, could be used in conjunction with existing maternity leave and medical leave for new mothers.
In mid-2018, investigations by journalists and media such as The Guardian reported poor working conditions at Amazon's fulfillment centers. In response to criticism that Amazon does not pay its workers a living wage, Jeff Bezos announced that effective November 1, 2018, all US and UK Amazon employees would have a $15-per-hour minimum wage. Amazon would also lobby for a $15-per-hour federal minimum wage. The company also eliminated stock awards and bonuses for hourly employees. A September 11, 2018, article exposed poor working conditions for Amazon's delivery drivers, describing missing wages, lack of overtime pay, favoritism, intimidation, and time constraints which forced drivers to speed and skip meals and bathroom breaks. Amazon uses Netradyne artificial intelligence cameras in some partner vans to monitor safety incidents and driver behavior, which some drivers have criticized. On Black Friday in 2018, Amazon warehouse workers in several European countries (including Italy, Germany, Spain, and the United Kingdom) went on strike to protest inhumane working conditions and low pay.
The Daily Beast reported in March 2019 that emergency services responded to 189 calls from 46 Amazon warehouses in 17 states between 2013 and 2018 relating to suicidal employees. Workers attributed their mental breakdowns to employer-imposed social isolation, aggressive surveillance, and hurried and dangerous working conditions at the warehouses. One former employee said, "It's this isolating colony of hell where people having breakdowns is a regular occurrence."
On July 15, 2019, during Amazon's Prime Day, employees in the United States and Germany went on strike to protest unfair wages and poor working conditions. In August 2019, the BBC reported on Amazon's Twitter ambassadors. Their support for, and defense of, Amazon and its practices have led Twitter users to suspect that they are bots used to dismiss issues affecting Amazon workers. A flurry of new ambassador accounts claiming to be employees defended the company against a March 2021 unionization drive, in some cases falsely claiming that opting out of union dues was impossible. Amazon confirmed that at least one was fake, and Twitter shut down several for violating its terms of use. In November 2019, NBC reported that some contracted Amazon locations, against company policy, allowed people to make deliveries using the badges and passwords of others to circumvent employee background checks and avoid financial penalties (or termination) for sub-standard performance. Amazon's performance quotas were criticized as unrealistic, pressuring drivers to speed, run stop signs, carry overloaded vehicles, and urinate in bottles due to lack of time for bathroom stops; the company generally avoided legal liability for vehicle crashes by using independent contractors.
During the COVID-19 pandemic in March 2020, when the government instructed companies to restrict social contact, Amazon's UK staff were forced to work overtime to meet demand that had spiked because of the disease. A GMB spokesperson said that the company had put "profit before safety". GMB has continued to raise concerns about "grueling conditions, unrealistic productivity targets, surveillance, bogus self-employment and a refusal to recognise or engage with unions unless forced", calling for the UK government and safety regulators to address these issues. In its 2020 statement to US shareholders, Amazon said: "We respect and support the Core Conventions of the International Labour Organization (ILO), the ILO Declaration on Fundamental Principles and Rights at Work, and the United Nations Universal Declaration of Human Rights". Observance of these global human-rights principles has been "long held at Amazon and codifying them demonstrates our support for fundamental human rights and the dignity of workers everywhere we operate". Subcontracted delivery drivers in Canada brought a class-action lawsuit against Amazon Canada in June 2020, saying that they were owed $200 million in unpaid wages because Amazon retained "effective control" over their work and should legally be considered their employer. On November 27, 2020, Amnesty International said that Amazon workers had faced great health and safety risks since the start of the COVID-19 pandemic. On Black Friday, one of Amazon's busiest periods, the company failed to ensure key safety measures in France, Poland, the United Kingdom, and the United States. Workers risked their health and lives to ensure that essential goods were delivered to consumers, helping Amazon achieve record profits.
Amazon said on January 6, 2021, that it planned to build 20,000 affordable houses, spending $2 billion in regions with major facilities. On January 24, 2021, Amazon said that it planned to open a pop-up clinic in partnership with Virginia Mason Franciscan Health in Seattle to vaccinate 2,000 people against COVID-19 on the clinic's first day. The following month, Amazon said that it planned to put cameras in its delivery vehicles. Although many drivers were upset by this decision, the company said that videos would only be sent under certain circumstances. Drivers have said that they sometimes have to urinate and defecate in their vans as a result of pressure to meet quotas. This was denied in a tweet from the official Amazon News account: "You don't really believe the peeing in bottles thing, do you? If that were true, nobody would work for us." Amazon employees then leaked an email to The Intercept indicating that the company was aware that its drivers were doing so: "This evening, an associate discovered human feces in an Amazon bag that was returned to station by a driver. This is the 3rd occasion in the last 2 months when bags have been returned to the station with poop inside." Amazon acknowledged the issue after denying it.
A June 2021 analysis of Occupational Safety and Health Administration data by The Washington Post found that Amazon warehouse jobs "can be more dangerous than at comparable warehouses."
The following month, workers at the New York City warehouse filed a complaint with OSHA describing harsh, 12-hour workdays with sweltering internal temperatures which resulted in fainting workers carried out on stretchers: "Internal temperature is too hot. We have no ventilation, dusty, dirty fans that spread debris into our lungs and eyes, are working at a non-stop pace and [we] are fainting out from heat exhaustion, getting nose bleeds from high blood pressure, and feeling dizzy and nauseous." Many fans provided by the company reportedly did not work, water fountains were often dry, and cooling systems were insufficient. The filers were affiliated with the Amazon Labor Union which was attempting to unionize the warehouse despite company opposition. Similar conditions have been reported elsewhere, such as in Kent, Washington during the 2021 heat wave.
A 2021 report by the National Employment Law Project found that working conditions at Amazon fulfillment centers in Minnesota were dangerous and unsustainable, with more than double the rate of injuries compared to non-Amazon warehouses from 2018 to 2020. In December 2021, after a tornado destroyed an Amazon warehouse in Illinois, the company and its policies were criticized for forcing people to continue working despite the imminent arrival of the tornado, for a cellphone ban which prevented access to emergency alerts, and for company founder Jeff Bezos' apparent insensitivity to the catastrophe as he celebrated his space company's latest achievement and only belatedly acknowledged the loss of life.
In July 2022, a worker in a fulfillment center in Carteret, New Jersey died of heat stress while working through the busy Prime Day week. The outside temperature was recorded at 92 °F. Workers across multiple US fulfillment centers have claimed (often by sneaking in thermometers to prove their claims) that indoor temperatures are much higher. Amazon claimed that the worker's death was not related to the heat; however, it installed air conditioning a few weeks after the incident.
In March 2022, the Washington state labor department fined Amazon $60,000 for willfully violating workplace safety laws by requiring workers at an Amazon warehouse in Kent, Washington to perform repetitive motions at a fast pace, leading to an increased risk of injury.
In December 2022, OSHA fined Amazon $29,008 for injury record-keeping violations. The agency fined Amazon $60,269 the following month for unsafe conditions in three warehouses, including falling boxes and un-ergonomic and exhausting lifting requirements which resulted in serious lower-back injuries. The fines were low compared to the company's profits, but were the maximum allowed for general duty clause violations of the Occupational Safety and Health Act. In June 2023, Bernie Sanders began a Senate investigation into "dangerous and illegal" working conditions at Amazon's fulfillment centers.
In February 2024, California Occupational Safety and Health Administration fined Amazon $14,625 for not giving air freight workers adequate shade and water on very hot summer days in 2023.
In June 2024, the California Labor Commissioner’s Office fined Amazon $5.9 million, after an investigation of two warehouses east of Los Angeles revealed 59,017 violations of California's 2022 Warehouse Quotas law, which requires employers to disclose productivity quotas to employees and prohibits employers from requiring warehouse workers to meet unsafe quotas.
2018 strike
Spanish unions called on 1,000 Amazon workers to strike from July 10 through Amazon Prime Day, with calls for the strike to be seen worldwide and for customers to follow suit. A Comisiones Obreras (CCOO) union representative said that complaints were based on wage cuts, working conditions, and restrictions on time off. Amazon workers in Poland, Germany, Italy, England, and France have also voiced grievances.
Stop BEZOS Act
On September 5, 2018, Senator Bernie Sanders and Representative Ro Khanna introduced the Stop Bad Employers by Zeroing Out Subsidies (Stop BEZOS) Act, aimed at Amazon and other alleged beneficiaries of corporate welfare such as Walmart, McDonald's, and Uber. This followed several media appearances in which Sanders underscored the need for legislation to ensure that Amazon workers received a living wage. Reports cited a finding by New Food Economy that one third of Amazon warehouse workers in Arizona were on the Supplemental Nutrition Assistance Program (SNAP). Amazon initially released a statement which called this "inaccurate and misleading", but an October 2 announcement affirmed that its minimum wage for all employees would be raised to $15 per hour.
Racial discrimination
Current and former Amazon corporate workers, including former diversity lead Chanin Kelly-Rae, went public in 2021 about alleged systemic discrimination against women and people of color. That year, a number of Black employees filed discrimination lawsuits against the company.
Response to the COVID-19 pandemic
An Amazon warehouse protest on March 30, 2020, in Staten Island led to the firing of its organizer, Chris Smalls. Amazon defended the decision by saying that Smalls was supposed to be in self-isolation at the time, and leading the protest put its other workers at risk. Smalls called the response "ridiculous". New York State attorney general Letitia James was considering legal action in response to the firing, which she called "immoral and inhumane", and asked the National Labor Relations Board to investigate. Smalls accused the company of retaliating against him for organizing a protest. At the Staten Island warehouse, one case of COVID-19 was confirmed by Amazon; workers believed that there were more and said that the company had not cleaned the building, given them suitable protection, or informed them of potential cases. Smalls said that many workers were in risk categories, and the protest demanded that the building be sanitized and the employees paid during that process. Derrick Palmer, another worker at the Staten Island facility, told The Verge that Amazon quickly communicates through text and email when it needs staff to work mandatory overtime but waited days to tell employees when a colleague contracted the disease. Amazon said that the Staten Island protest only attracted 15 of the facility's 5,000 workers, but other sources reported much larger crowds. On April 14, 2020, two Amazon employees were fired for "repeatedly violating internal policies" after they circulated an internal petition about health risks for warehouse workers. During the COVID-19 pandemic, Amazon introduced $2-per-hour hazard pay, changes to overtime pay, and unlimited unpaid time off until April 30, 2020. Hazard pay expired in June 2020 and the paid-time-off policy in May 2022. Amazon introduced temporary restrictions on the sale of non-essential goods, and hired 100,000 more staff in the US and Canada. Some Amazon workers in the US, France, and Italy protested the company's decision to "run normal shifts" despite many COVID-19 infections. In Spain, the company faced legal complaints over its policies. A group of US Senators wrote an open letter to Bezos in March 2020 expressing concerns about worker safety. On May 4, Amazon vice president Tim Bray resigned "in dismay" over the firing of whistleblowers who spoke out about the lack of COVID-19 protections, including shortages of face masks and the company's failure to implement promised temperature checks. Bray called the firings "chickenshit" and said they were "designed to create a climate of fear" in Amazon warehouses. In a Q1 2020 financial report, Jeff Bezos announced that Amazon expected to spend $4 billion or more (predicted operating profit for Q2) on COVID-19 issues: personal protective equipment, higher wages for hourly teams, cleaning of facilities, and expanding Amazon's COVID-19 testing capabilities. From the beginning of 2020 until September of that year, Amazon said that 19,816 employees had contracted COVID-19.
Closure in France
France's SUD trade unions brought a court case against Amazon for unsafe working conditions. On April 15, 2020, the district court in Nanterre ordered the company to limit its deliveries to essential items (including electronics, food, medical or hygienic products, and supplies for home improvement, animals, and offices) or face a fine of €1 million per day. Amazon immediately closed its six warehouses in France, continuing to pay workers but limiting deliveries to items shipped from third-party sellers and warehouses outside France. The company said that the €100,000 fine for each prohibited item shipped could result in billions of dollars in fines, even with a fraction of items misclassified. After losing an appeal and reaching an agreement with labor unions for higher pay and staggered work schedules, the company reopened its French warehouses on May 19 of that year.
Employee dissent
In 2014, former Amazon employee Kivin Varghese threatened to begin a hunger strike to protest Amazon's unfair policies. In November 2016, an Amazon employee jumped from the roof of the company's headquarters office due to unfair treatment at work. Amazon Web Services vice-president Tim Bray resigned in 2020 in protest of the company's treatment of employees who publicly agitated against unhealthy working conditions in Amazon warehouses during the COVID-19 pandemic. In April 2022, The Intercept reported that Amazon's planned internal messaging app would ban words (such as "union", "living wage", "freedom", "pay raise" and "restrooms") which might indicate worker unhappiness.
Forced labor in China
According to a report by the Australian Strategic Policy Institute, a think tank partially funded by the US Department of Defense, Amazon is a company "potentially directly or indirectly benefiting" from forced Uyghur labor.
Treatment of customers
Differential pricing
In September 2000, price discrimination potentially violating the Robinson–Patman Act was found on amazon.com. Amazon offered to sell a buyer a DVD for one price, but after the buyer deleted cookies which identified him as a regular Amazon customer he was offered the same DVD for a substantially lower price. Jeff Bezos apologized for the differential pricing and said that Amazon "never will test prices based on customer demographics". The company said that the difference was the result of a random price test and offered to refund customers who paid higher prices. Amazon had experimented with random price tests in 2000, when customers comparing prices on a bargain-hunter website discovered that Amazon randomly offered the Diamond Rio MP3 player for substantially less than its regular price.
Product substitution
The British consumer organization Which? published information about Amazon Marketplace in the UK which indicates that when small electrical products are sold on the marketplace, the delivered product may not be the same as the product advertised. A test purchase was described in which eleven orders were placed with different suppliers via a single listing. Only one of the suppliers delivered the actual product displayed; two others delivered different, functionally-equivalent products, and eight suppliers delivered products which were quite different and incapable of safely performing the advertised function. The Which? article described how customer reviews of a product were actually a mix of reviews for all the different products, with no way to identify which product came from which supplier. The issue was raised in evidence to the UK Parliament in connection with a new consumer-rights bill.
Items added to baby registries
In 2018, it was reported that Amazon contained sponsored ads pretending to be items on a baby registry. The ads looked similar to actual items on the registry.
WikiLeaks
On December 1, 2010, Amazon stopped hosting the website associated with WikiLeaks; the company did not initially say whether it forced the site to leave. According to The New York Times, "Senator Joseph I. Lieberman, an independent of Connecticut, said Amazon had stopped hosting the WikiLeaks site on Wednesday after being contacted by the staff of the Homeland Security and Governmental Affairs Committee".
In a later press release, Amazon said that the reason was "a violation of [Amazon's] terms of service", because Wikileaks.org was "securing and storing large quantities of data that isn't rightfully theirs, and publishing this data without ensuring it won't injure others." Assange said that WikiLeaks chose Amazon knowing it would probably be kicked off the service "in order to separate rhetoric from reality" and to show that the jurisdiction "suffered a free speech deficit".
Amazon's action led to an open letter from Daniel Ellsberg, who wrote that he was "disgusted by Amazon's cowardice and servility", likening it to "China's control of information and deterrence of whistleblowing", and called for a "broad" and "immediate" boycott of Amazon.
User privacy
The Amazon Echo sparked concern about the company releasing customer data at the behest of government authorities. According to Amazon, voice recordings of customer interactions with the assistant are stored with the possibility of release in response to a warrant or subpoena. Police requested such data during their investigation of the November 22, 2015, death of Victor Collins at the home of James Andrew Bates in Bentonville, Arkansas. Amazon refused to comply at first, but Bates later consented.
Although Amazon has publicly opposed government surveillance, according to Freedom of Information Act requests it has supplied facial-recognition support to law enforcement in the forms of Amazon Rekognition technology and consulting services. Initial testing included Orlando, Florida, and Washington County, Oregon. Amazon offered to connect Washington County with other Amazon government customers interested in Rekognition and a body-camera manufacturer. The ventures are opposed by a coalition of civil-rights groups, who are concerned that they could lead to expanded surveillance and abuse by automating the identification and tracking of anyone, particularly in the context of potential police body-camera integration. Due to a backlash, the city of Orlando said that it would no longer use the technology but might reconsider at a later date.
A February 17, 2020, BBC Panorama documentary highlighted the amount of data collected by Amazon and its move into surveillance, a concern for politicians and regulators in the US and Europe. On July 16, 2021, the Luxembourg National Commission for Data Protection fined Amazon Europe Core SARL a record €746 million ($888 million) for processing personal data in violation of the EU General Data Protection Regulation (GDPR). The fine, about 4.2 percent of Amazon's reported $21.3 billion 2020 income, was the largest ever imposed for a violation of the GDPR. Amazon announced that it would appeal the decision.
In June 2023, Amazon agreed to pay the US Federal Trade Commission (FTC) $25 million for violating children's privacy with its Amazon Alexa. The company was accused of keeping Alexa recordings for years and using them illegally to develop algorithms, despite assuring users that it had deleted the recordings.
In September 2024, the FTC released a report summarizing nine company responses (including Amazon's) to orders issued by the agency under Section 6(b) of the Federal Trade Commission Act of 1914 for information about user and non-user data collection (including of children and teenagers) and data use. The report found that the companies' data practices left individuals vulnerable to identity theft, stalking, unlawful discrimination, emotional distress and mental health issues, social stigma, and reputational harm.
Customer reviews
As customer reviews have become integral to Amazon marketing, reviews have been challenged on accuracy and ethical grounds. In 2004, The New York Times reported that a glitch in the Amazon Canada website revealed that a number of book reviews had been written by authors of their own books or of competing books. Amazon changed its policy of allowing anonymous reviews to one which gave an online credential to reviewers registered with Amazon, although it still allowed them to remain anonymous with pen names. In April 2010, British historian Orlando Figes was found to have posted negative reviews of other authors' books. Two months later, a Cincinnati news blog uncovered a group of 75 Amazon book reviews which had been written and posted by a public-relations company on behalf of its clients. A Cornell University study that year said that 85 percent of Amazon's high-status consumer reviewers "had received free products from publishers, agents, authors and manufacturers." By June 2011, Amazon had moved into the publishing business and begun to solicit positive reviews from established authors in exchange for increased promotion of their books and upcoming projects.
Amazon.com's customer reviews are monitored for indecency, but negative comments are permitted. Robert Spector, author of the book amazon.com, wrote: "When publishers and authors asked Bezos why amazon.com would publish negative reviews, he defended the practice by claiming that amazon.com was 'taking a different approach ... we want to make every book available – the good, the bad, and the ugly ... to let truth loose'" (Spector 132). Amazon allegedly deleted negative reviews of Scientology-related items, despite the reviews' compliance with comments guidelines.
In November 2012, it was reported that Amazon.co.uk deleted "a wave of reviews by authors of their fellow writers' books in what is believed to be a response to [a] 'sock puppet' scandal." After the listing of Untouchable: The Strange Life and Tragic Death of Michael Jackson, a disparaging biography of Michael Jackson by Randall Sullivan, his fans organized on social media as "Michael Jackson's Rapid Response Team to Media Attacks" and bombarded Amazon with negative reviews and negative ratings of positive reviews.
Amazon removed a large number of one-star reviews from the listing of former presidential candidate Hillary Clinton's book, What Happened, in 2017. In 2018 and 2020, it was reported that Amazon had allowed sellers to bait-and-switch; after reviewers had praised a product, it would be replaced by a different product while retaining the positive reviews.
In 2022, researchers at UCLA found that the sellers of millions of products purchase fake positive reviews in private Facebook groups. They found the use of fake positive reviews to be widespread across a variety of products and to substantially increase sales. Amazon said that in 2019, the company spent over $500 million and employed more than 8,000 people to stop fake reviews. In July and August 2022, it sued the administrators of 10,000 Facebook groups which coordinate fake product reviews and several companies involved in faking seller feedback and bypassing sales bans.
Goodreads
Goodreads has had a number of scandals concerning its book-review system, including a practice known as "review-bombing": a form of trolling and extortion to decrease or inflate an author's book ratings. Motives include cancel culture, financial gain, bullying and harassment, defamation, and self-promotion; both traditionally- and self-published authors are targeted. Rin Chupeco, a popular fantasy novelist, has raised concerns that Goodreads leaves moderation primarily in the hands of volunteers with editing privileges, and that authors marginalized by race, gender, ethnicity and sexual orientation are often targets. Unlike Amazon, Goodreads does not verify whether users own (or have access to) books they claim to have read and does not moderate sockpuppetry, trolling or fake accounts. Goodreads imposed new rules restricting reviews which criticize author behavior, such as those that mock an author's political affiliation or religion. Goodreads staff are responsible for moderating such content, and some malicious content remains publicly posted until the affected party takes legal action.
IMDb
IMDb (the Internet Movie Database), like Goodreads, does not verify user access to or viewership of media. According to the website, "IMDb ratings are 'accurate' in the sense that they are calculated using a consistent, unbiased formula, but we don't claim that IMDb ratings are 'accurate' in an absolute qualitative sense. We offer these ratings as a simplified way to see what other IMDb users all over the world think about titles listed on our site." IMDb's ratings system has been questioned. Alyssa Bereznak wrote for The Ringer in 2019, "Last week, HBO’s Chernobyl shot to the top of IMDb’s all-time TV rankings, outperforming other mega-popular hits like Breaking Bad, Game of Thrones, and various stoner-friendly seasons of Planet Earth. And as of Tuesday, it had a 9.6-star (out of 10) average rating from more than 200,000 users on the Amazon-owned entertainment site. To the knee-jerk press, the limited series’ ascension was evidence of a historic hit. The Economist ran with the numbers, comparing them to traffic spikes on the "Chernobyl nuclear disaster" Wikipedia page, declaring the show 'the highest-rated TV series ever', and marveling at the reach of its subject matter." Bereznak said that the ratings were primarily by white male users, noting earlier trolling scandals where media with largely female, racialized casts and crew were ranked lower in a form of review manipulation (particularly if the content was political). The debate about whether IMDb's reviews are coming from a mostly-white-male demographic arose again when review manipulation was allegedly used to lower the ratings of Black Panther, which had a mostly-black cast and a racial storyline.
Kate Erbland wrote for IndieWire that the film-aggregation site Rotten Tomatoes experienced the same type of trolling as IMDb for the 2018 Disney film A Wrinkle in Time, which had an ethnically-diverse cast (including Oprah Winfrey). According to Erbland, "there's no foolproof way to verify that anyone offering up an audience review or rating have actually seen it, and everyone knows it. Gaming the system is so easy that it can be weaponized against films and creators by something as lo-fi as a Facebook group, and that problem will likely only become a more sophisticated one as other groups dedicated to bringing down scores attempt to maneuver around roadblocks." Like Goodreads, IMDb has experienced review-bombing; the website halted reviews of the 2022 animated film Lightyear, which includes a same-sex couple briefly kissing.
Other questionable business practices
Tax avoidance
Amazon's taxes were investigated in China, Germany, Poland, South Korea, France, Japan, Ireland, Singapore, Luxembourg, Italy, Spain, United Kingdom, the United States, and Portugal. A report released by Fair Tax Mark in 2019 called the company the "worst" offender for tax avoidance, paying a 12-percent effective tax rate between 2010 and 2018 (in contrast with a 35-percent corporate tax rate in the US during the same period). According to Amazon, it had a 24-percent effective tax rate during that period.
HQ2 bidding war
The announcement of Amazon's plan to build HQ2 (a second headquarters) was met with 238 proposed locations, 20 of which became finalist cities on January 18, 2018. In November of that year, the company was criticized for narrowing this down to "the two richest cities": Long Island City (in New York City) and Arlington, Virginia, in the Washington metropolitan area. Critics, including business professor Scott Galloway, called the bidding war "a con" and a pretext for gaining tax breaks and inside information for the company.
Congresswoman Alexandria Ocasio-Cortez opposed the $1.5 billion in tax subsidies given to Amazon as part of the deal. Ocasio-Cortez said that restoring the city's subway system would be a better use for the money, despite a statement by New York governor Andrew Cuomo that the state would benefit economically. Politico then reported that 1,500 affordable homes had been planned for the land occupied by Amazon's new office. The request by Amazon executives for a helipad at each location was controversial, with a number of New York City Council members calling the proposal frivolous.
Rigged contests
In October 2024, Amazon India was accused of rigging giveaway contests in favour of an individual named Chirag Gupta since 2014.
Relationship with governments
Potential conflicts of interest
In 2013, Amazon secured a contract with the CIA which has been described as a potential conflict of interest involving the Bezos-owned Washington Post and his newspaper's coverage of the CIA. This was followed by a bid for a contract with the Department of Defense. Although critics initially considered the government's preference for Amazon a foregone conclusion, the defense contract was signed with Microsoft.
Censorship
Amazon, "committed to diversity, equity and inclusion", has ceded to the censorship demands of several countries. In 2021, the company's Chinese website complied with an order from the Chinese government to remove customer reviews and ratings for a book about Chinese Communist Party general secretary Xi Jinping's speeches and writings. The book's comments section was also disabled. In 2022, Amazon yielded to a UAE government demand and restricted LGBTQ products on its Emirati website. Documents indicated that, threatened with unknown penalties, Amazon removed searches for over 150 keywords related to LGBTQ products. A number of books were also blocked, including My Lesbian Experience With Loneliness by Nagata Kabi, Gender Queer: A Memoir by Maia Kobabe, and Bad Feminist by Roxane Gay. Amazon said that the company was required to "comply with the local laws and regulations of the countries in which we operate".
Project Nimbus
Project Nimbus is a $1.2 billion agreement in which Amazon and Google will provide Israel and its military with artificial intelligence, machine learning, and other cloud-computing services, including local cloud sites which will "keep information within Israel's borders under strict security guidelines." The contract has been criticized by shareholders and employees concerned that the project may lead to abuses of Palestinian human rights in the Israeli–Palestinian conflict. Concerns have been voiced about how the technology will facilitate the surveillance of Palestinians, unlawful data collection, and the expansion of Israeli settlements.
NHS healthcare data
The UK government has given Amazon access to healthcare information published by the National Health Service. The data will be used by Amazon's Alexa to answer medical questions, although Alexa also uses other sources of information. The material, which excludes patient data, could also allow the company to sell its products. The contract allows Amazon access to information on symptoms, causes, and definitions of conditions and "all related copyrightable content and data and other materials". Amazon can then create "new products, applications, cloud-based services and/or distributed software", from which the NHS will not financially benefit and which can be shared with third parties. The government said that allowing Alexa devices to offer health advice to users will reduce pressure on doctors and pharmacists.
Seattle head tax
In May 2018, Amazon threatened the Seattle City Council over an employee head-tax proposal which would have funded homelessness services and low-income housing. The tax would have cost Amazon about $800 per employee, or 0.7 percent of their average salary. In response, Amazon paused construction on a new building, threatened to limit further investment in the city, and funded a repeal campaign. The measure, which originally passed, was repealed after a costly campaign spearheaded by Amazon.
Tennessee expansion
Incentives from the Metropolitan Council of Nashville and Davidson County to Amazon for the company's new Operations Center of Excellence in Nashville Yards (owned by Southwest Value Partners) have been controversial, including a decision by the Tennessee Department of Economic and Community Development to keep the full extent of the agreement secret. Incentives include "$102 million in combined grants and tax credits for a scaled-down Amazon office building" and "a $65 million cash grant for capital expenditures" in exchange for the creation of 5,000 jobs over a seven-year period.
The Tennessee Coalition for Open Government called for more transparency. The People's Alliance for Transit, Housing, and Employment (PATHE), another local organization, suggested that no public money should be given to Amazon; instead, it should be spent on building more public housing for the working poor and the homeless and investing in more public transportation for city residents. Others suggested that incentives to large corporations do not improve the local economy.
The proposal to give Amazon $15 million in incentives was criticized by the Nashville Firefighters Union and the Nashville chapter of the Fraternal Order of Police in November 2018, who called it "corporate welfare." In February 2019, another $15.2 million in infrastructure was approved by the council. It was opposed by three council members, including Angie Henderson (who called it "cronyism").
USPS agreement
In early 2018, US president Donald Trump repeatedly criticized Amazon's use of the United States Postal Service for the delivery of packages. "I am right about Amazon costing the United States Post Office massive amounts of money for being their Delivery Boy," Trump tweeted. "Amazon should pay these costs (plus) and not have them by the American Taxpayer." Amazon stock shares fell by six percent as a result of Trump's comments. Shepard Smith of Fox News disputed Trump's claims, citing evidence that the USPS was offering below-market prices to all customers and no advantage to Amazon. Analyst Tom Forte said that Amazon's payments to the USPS are not made public, however, and their contract is reportedly "a sweetheart deal".
Partnerships and associations
Hikvision
Amazon has worked with the Chinese technology company Hikvision. According to The Nation, "The United States has considered sanctioning Hikvision, which has provided thousands of cameras that monitor mosques, schools, and concentration camps in Xinjiang."
Palantir hosting
Amazon provides cloud web hosting services via Amazon Web Services (AWS) to Palantir, a data-analysis company whose software, hosted on the AWS cloud, has been used to gather data on undocumented immigrants. In June 2018, Amazon employees signed a letter demanding that the company drop Palantir from AWS. According to Forbes, Palantir "has come under scrutiny because its software has been used by ICE agents to identify and start deportation proceedings against undocumented migrants."
On July 7, 2019, Make the Road New York and local leaders connected with Jews for Racial and Economic Justice led a protest by over 1,000 people in response to Amazon's financial ties to Palantir and its $150 million in contracts with U.S. Immigration and Customs Enforcement (ICE). The protest shut down Amazon's midtown-Manhattan location of Amazon Books and was held on Tisha B'Av, the Jewish day of mourning and fasting which commemorates the destruction of ancient temples in Jerusalem.
Influence on local news
In late May 2020, before its May 27 shareholders' meeting, at least eleven local news stations aired identically-worded segments which spoke positively about Amazon's response to the coronavirus pandemic. Zach Rael, an anchor for the Oklahoma City station KOCO-TV, posted that Amazon had tried to send him the same prepared package. Senator and Amazon critic Bernie Sanders condemned the coverage, calling it propaganda. Most of the provided video was narrated by Amazon public-relations manager Todd Walker. Of the eleven identified channels, WTVG in Toledo, Ohio was the only one that attributed the statements to Walker.
Other legal action
Trademark issues
In 1999, the Amazon Bookstore Cooperative in Minneapolis, Minnesota sued amazon.com for trademark infringement. The cooperative had been using the name "Amazon" since 1970, and reached an out-of-court agreement to share the name with the online retailer.
In 2014, UK courts ruled that Amazon had infringed the trademark of Lush soap. Lush (the soap manufacturer) had not made its products available on Amazon, but the company advertised alternative products via Google searches for "Lush soap".
Alleged libel
In September 2009, Amazon was selling MP3 music downloads falsely suggesting that a well-known Premier League football manager was a sex offender. Despite a campaign urging the retailer to withdraw the item, Amazon cited freedom of speech. The company eventually decided to withdraw the item from its UK website when legal action was threatened.
Alleged release of personal details
In October 2011, actress Junie Hoang filed a $1 million lawsuit against Amazon in Washington's Western District Court for allegedly revealing her age on Amazon subsidiary IMDb with details from her credit card. The lawsuit, which alleged fraud, breach of contract and violation of her private life and consumer rights, said that after Hoang joined IMDbPro in 2008 to increase her chances of getting roles, her date of birth was added to her public profile; because she is older than she looks, she received less acting work and lower earnings. According to Hoang, IMDb refused her request to remove the information in question. All claims against Amazon, and most claims against IMDb, were dismissed by Judge Marsha J. Pechman; the jury found for IMDb on the sole remaining claim. In February 2015, the case against IMDb was under appeal.
IMDb deadnaming
After Nova Scotian actor Elliot Page and American actress Laverne Cox came out as transgender in 2020, IMDb changed its legal policy about proper names on actor biographies; exceptions were made for people who had changed their names, so their birth name would not appear on IMDb profiles. The change was made after an outcry from LGBTQ+ support groups and organizations; GLAAD director of transgender representation Nick Adams told The New York Times, "To reveal a transgender person’s birth name without their explicit permission is an invasion of privacy that only serves to undermine the trans person's true authentic identity, and can put them at risk for discrimination, even violence." GLAAD agreed to support a SAG-AFTRA legal challenge which sought to restrict the personal information that IMDb can publish.
Environmental impact
Amazon has been criticized for a number of negative effects on the environment including, but not limited to, high carbon footprint, high plastic pollution, anti-environmental lobbying, and greenwashing.
The company founded The Climate Pledge in 2019, a commitment to reach net-zero carbon emissions by 2040 for itself and other signatories. Critics have called this greenwashing due to the disconnect between stated goals and on-the-ground impact. Amazon has also been criticized for refusing to disclose its emissions in accordance with Greenhouse Gas Protocol standards, and has consistently been given a rating of F by the Carbon Disclosure Project (CDP).
Amazon has been prosecuted for violating environmental and labor laws on multiple occasions, often settling out of court.
Traffic congestion
Amazon Prime has been criticized for its delivery vehicles systematically double-parking, blocking bike lanes, and otherwise violating traffic laws while dropping off packages, contributing to traffic congestion and endangering other road users.
In popular culture
Books
One of the first books critical of Amazon was a Canadian collection of essays, Against Amazon: Seven Arguments. The book was originally hand-bound and printed in a limited run by author Jorge Carrión before it was picked up by the independent Canadian publisher Biblioasis, after which it sold well and began appearing in university bookstores. Another such book was How to Resist Amazon and Why by Danny Caine, published by Raven Books and widely distributed in North America. The book referred to Amazon as "Scamazon" (a portmanteau of "Amazon" and "scam"), and contained information about shopping locally and avoiding Amazon.
Advertising
The Virginia-based Alliance for Main Street Fairness ran a number of television ads in 2011 with an anti-Amazon theme, encouraging customers to shop responsibly. This was partly due to a proposed bill which would have forced Amazon to pay more taxes.
Canadian resident Ali Haberstroh became frustrated with the number of brick-and-mortar business closures in the country in 2020 and created an advertising website called Not Amazon, which promotes businesses and corporations not affiliated with Amazon. The Guardian published an article about the website that year, by which time Not Amazon had received 350,000 visitors. Amazon had no comment about the article.
Video game
The 2018 browser game You Are Jeff Bezos satirized the extent of Jeff Bezos' wealth, with the player cast as Bezos and tasked with spending his net worth.
See also
Criticism of Walmart
The StoryGraph
Notes
References
Further reading
Amazon
Tech sector trade unions | Criticism of Amazon | Technology | 19,950 |
8,757,778 | https://en.wikipedia.org/wiki/2%20Ursae%20Minoris | 2 Ursae Minoris (2 UMi) is a single star a few degrees away from the northern celestial pole. Despite its Flamsteed designation, the star is actually located in the constellation Cepheus. This change occurred when the constellation boundaries were formally set in 1930 by Eugene Delporte. Therefore, the star is usually referred to only by its catalog numbers such as HR 285 or HD 5848. It is visible to the naked eye as a faint, orange-hued star with an apparent visual magnitude of 4.244. This object is located 280 light years away and is moving further from the Earth with a heliocentric radial velocity of +8 km/s. It is a candidate member of the Hyades Supercluster.
This is an aging K-type star with a stellar classification of K2 II-III, showing a luminosity class with blended traits of a giant and a bright giant. It has 2.3 times the mass of the Sun and has expanded to 24 times the Sun's radius. The star is radiating around 215 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,513 K.
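These quantities are mutually consistent. Using the Stefan–Boltzmann relation and a solar effective temperature of about 5,772 K (the solar value is supplied here for the check and is not a figure from this article):

\[
\frac{L}{L_{\odot}} = \left(\frac{R}{R_{\odot}}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{4} = 24^{2}\left(\frac{4513\ \mathrm{K}}{5772\ \mathrm{K}}\right)^{4} \approx 576 \times 0.374 \approx 215,
\]

in agreement with the quoted luminosity of around 215 times that of the Sun.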
References
External links
Stars — 2 Ursae Minoris
K-type bright giants
K-type giants
Hyades Stream
Cepheus (constellation)
Durchmusterung objects
Ursae Minoris, 02
005372
0285 | 2 Ursae Minoris | Astronomy | 297 |
2,657,405 | https://en.wikipedia.org/wiki/Relational%20frame%20theory | Relational frame theory (RFT) is a psychological theory of human language, cognition, and behaviour. It was developed originally by Steven C. Hayes of University of Nevada, Reno and has been extended in research, notably by Dermot Barnes-Holmes and colleagues of Ghent University.
Relational frame theory argues that the building block of human language and higher cognition is relating, i.e. the human ability to create bidirectional links between things. It can be contrasted with associative learning, which discusses how animals form links between stimuli in the form of the strength of associations in memory. However, relational frame theory argues that natural human language typically specifies not just the strength of a link between stimuli but also the type of relation as well as the dimension along which they are to be related. For example, a tennis ball could be associated with an orange, by virtue of having the same shape, but it is different because it is not edible, and is perhaps a different color. In the preceding sentence, 'same', 'different' and 'not' are cues in the environment that specify the type of relation between the stimuli, and 'shape', 'colour' and 'edible' specify the dimension along which each relation is to be made. Relational frame theory argues that while there is an arbitrary number of types of relations and number of dimensions along which stimuli can be related, the core unit of relating is an essential building block for much of what is commonly referred to as human language or higher cognition.
Several hundred studies have explored many testable aspects and implications of the theory such as the emergence of specific frames in childhood, how individual frames can be combined to create verbally complex phenomena such as metaphors and analogies, and how the rigidity or automaticity of relating within certain domains is related to psychopathology. In attempting to describe a fundamental building block of human language and higher cognition, RFT explicitly states that its goal is to provide a general theory of psychology that can provide a bedrock for multiple domains and levels of analysis.
Relational frame theory focuses on how humans learn language (i.e., communication) through interactions with the environment and is based on a philosophical approach referred to as functional contextualism.
Overview
Introduction
Relational frame theory (RFT) is a behavioral theory of human language. It is rooted in functional contextualism and focused on predicting and influencing verbal behavior with precision, scope and depth.
Relational framing is relational responding based on arbitrarily applicable relations and arbitrary stimulus functions. The relational responding is subject to mutual entailment, combinatorial mutual entailment and transformation of stimulus functions. The relations and stimulus functions are controlled by contextual cues.
Contextual cues and stimulus functions
In human language a word, sentence or symbol (i.e., a stimulus) can have different meanings (i.e., functions), depending on context.
In terms of RFT, it is said that in human language a stimulus can have different stimulus functions depending on contextual cues.
Take these two sentences for example:
This task is a piece of cake.
Yes, I would like a piece of that delicious cake you've made.
In the sentences above the stimulus "cake" has two different functions. The stimulus "cake" has a figurative function in the presence of the contextual cues "this task; is; piece of". Whereas in the presence of the contextual cues "I; would like; delicious; you've made" the stimulus "cake" has a more literal function. The functions of stimuli are called stimulus functions, Cfunc for short.
When stimulus functions refer to physical properties of the stimulus, such as quantity, colour, shape, etc., they are called nonarbitrary stimulus functions. When a stimulus function refers to non-physical properties of the stimulus, such as value, it is called an arbitrary stimulus function. Take a one-dollar bill, for example: the value of the bill is an arbitrary stimulus function, but its green colour is a nonarbitrary stimulus function.
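A minimal sketch in Python of the idea that the same stimulus takes on different stimulus functions under different contextual cues (the function names and cue words are illustrative devices for the "cake" example above, not an RFT formalism):

# Illustrative sketch: the stimulus function (Cfunc) of "cake" depends
# on which contextual cues are present.
def stimulus_function(stimulus, context):
    if stimulus == "cake" and "task" in context:
        return "figurative (something easy)"
    if stimulus == "cake" and "delicious" in context:
        return "literal (an edible dessert)"
    return "unspecified"

print(stimulus_function("cake", "this task is a piece of cake"))
print(stimulus_function("cake", "a piece of that delicious cake you've made"))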
Arbitrarily applicable relational responding
Arbitrarily applicable relational responding is a form of relational responding.
Relational responding
Relational responding is a response to one stimulus in relation to other available stimuli. For example, a lion picks the largest piece of meat, or a deer picks the strongest male of the herd. In contrast, if an animal always picks the same drinking spot, it is not responding relationally (the choice is not related to other stimuli in the sense of best/worst, larger/smaller, etc.). These examples of relational responding are based on the physical properties of the stimuli. When relational responding is based on the physical properties of the stimuli, such as shape, size, quantity, etc., it is called nonarbitrary relational responding (NARR).
Arbitrarily applicable relational responding
Arbitrarily applicable relational responding refers to responding based on relations that are arbitrarily applied between the stimuli. That is to say, the relations applied between the stimuli are not supported by the physical properties of said stimuli, but are based, for example, on social convention or social whim. For example, the sound "cow" refers to the animal in the English language. But in another language the same animal is referred to by a totally different sound; in Dutch, for example, it is called "koe" (pronounced "coo"). The word "cow" or "koe" has nothing to do with the physical properties of the animal itself; it is by social convention that the animal is named this way. In terms of RFT, it is said that the relation between the word cow and the actual animal is arbitrarily applied. We can even change these arbitrarily applied relations: just look at the history of any language, where the meanings of words, symbols and complete sentences have changed over time and place.
Arbitrarily applicable relational responding is responding based on arbitrarily applied relations.
Mutual entailment
Mutual entailment refers to deriving a relation between two stimuli based on a given relation between those same two stimuli: Given the relation A to B, the relation B to A can be derived.
For example, Joyce is standing in front of Peter. The relation trained is stimulus A in front of stimulus B. One can derive that Peter is behind Joyce. The derived relation is stimulus B is behind stimulus A.
Another example: Jared is older than Jacob. One could derive that Jacob is younger than Jared. Relation trained: stimulus A is older than stimulus B. Relation derived: stimulus B is younger than stimulus A.
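As a minimal sketch in Python (the relation vocabulary and the inverse-relation table are illustrative assumptions, not part of the RFT literature), mutual entailment can be modeled as deriving the inverse of a trained relation:

# Mutual entailment: from one trained relation between two stimuli,
# the reverse relation is derived rather than trained.
INVERSE = {
    "in front of": "behind",
    "behind": "in front of",
    "older than": "younger than",
    "younger than": "older than",
}

def mutually_entail(a, relation, b):
    # From the trained relation "a <relation> b", derive "b <inverse> a".
    return (b, INVERSE[relation], a)

# Trained: Jared is older than Jacob.
print(mutually_entail("Jared", "older than", "Jacob"))
# -> ('Jacob', 'younger than', 'Jared'), a relation that was never trained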
Combinatorial mutual entailment
Combinatorial mutual entailment refers to deriving relations between two stimuli, given the relations of those two stimuli with a third stimulus: Given the relation, A to B and B to C, the relations A to C and C to A can be derived.
To go on with the examples above:
Joyce is standing in front of Peter and Peter is standing in front of Lucy. The relations trained in this example are: stimulus A in front of B and stimulus B in front of C. With this it can be derived that Joyce is standing in front of Lucy and Lucy is standing behind Joyce. The derived relations are A is in front of C and C is behind A.
John is older than Jared and Jared is older than Jacob. Stimulus A is older than stimulus B and stimulus B is older than stimulus C. It can be derived that Jacob is younger than Jared and Jared is younger than John. The derived relation becomes stimulus A is older than stimulus C and stimulus C is younger than stimulus A.
Notice that the relations between A and C were never given. They can be derived from the other relations.
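Combinatorial mutual entailment can be sketched the same way (again illustrative Python, assuming a transitive relation such as "older than"): two trained relations sharing a middle term yield two derived relations that were never directly given.

# Combinatorial mutual entailment as transitive derivation (illustrative).
INVERSE = {"older than": "younger than"}

def combinatorially_entail(first, second):
    a, rel, b = first
    b2, rel2, c = second
    assert b == b2 and rel == rel2, "sketch covers matching transitive relations only"
    # "a rel c" is derived; mutual entailment then gives "c inverse-rel a".
    return [(a, rel, c), (c, INVERSE[rel], a)]

print(combinatorially_entail(("John", "older than", "Jared"),
                             ("Jared", "older than", "Jacob")))
# -> [('John', 'older than', 'Jacob'), ('Jacob', 'younger than', 'John')]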
Transfer and transformation of stimulus function
A stimulus can have different functions depending on contextual cues. However, a stimulus function can also change based on the arbitrary relations applied to that stimulus.
For example, this relational frame: A is more than B and B is more than C.
For now the stimulus functions of these letters are rather neutral. But as soon as C is labeled 'very valuable' and 'nice to have', A becomes more attractive than C, based on the relations. Before anything was stated about C being valuable, A had a rather neutral stimulus function. After C is given an attractive stimulus function, A becomes attractive as well: the attractive stimulus function has transferred from C to A through the relations between A, B and C, and A has undergone a transformation of stimulus function from neutral to attractive.
The same can be done with an aversive stimulus function, such as danger instead of value: by saying that C is dangerous, A becomes more dangerous than C based on the relations.
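A short Python sketch of this transformation (the relational network and the "valuable" label come from the example above; the code itself is an illustrative device, not an RFT formalism):

# Transformation of stimulus function (illustrative sketch).
# Trained: A is more than B, B is more than C.  Derived: A is more than C.
# When C acquires the function "valuable", A and B derive a stronger
# function, although nothing was ever said about them directly.
relations = {("A", "B"), ("B", "C")}          # pairs (x, y) meaning x > y

def derived_more_than(x, y, rels):
    # True if "x > y" is trained or derivable by combinatorial entailment.
    if (x, y) in rels:
        return True
    return any(a == x and derived_more_than(b, y, rels) for a, b in rels)

functions = {"C": "valuable"}                  # C is labeled as valuable

for stimulus in ("A", "B"):
    if derived_more_than(stimulus, "C", relations):
        # the stimulus function spreads (transformed) along the relations
        functions[stimulus] = "even more valuable than C"

print(functions)
# -> {'C': 'valuable', 'A': 'even more valuable than C',
#     'B': 'even more valuable than C'}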
Development
RFT is a behavioral account of language and higher cognition. In his 1957 book Verbal Behavior, B.F. Skinner presented an interpretation of language. However, this account was intended to be an interpretation as opposed to an experimental research program, and researchers commonly acknowledge that the research products are somewhat limited in scope. For example, Skinner's behavioral interpretation of language has been useful in some aspects of language training in developmentally disabled children, but it has not led to a robust research program in the range of areas relevant to language and cognition, such as problem-solving, reasoning, metaphor, logic, and so on. RFT advocates are fairly bold in stating that their goal is an experimental behavioral research program in all such areas, and RFT research has indeed emerged in a large number of these areas, including grammar.
In a review of Skinner's book, linguist Noam Chomsky argued that the generativity of language shows that it cannot simply be learned, that there must be some innate "language acquisition device". Many have seen this review as a turning point, when cognitivism took the place of behaviorism as the mainstream in psychology. Behavior analysts generally viewed the criticism as somewhat off point, but it is undeniable that psychology turned its attention elsewhere and the review was very influential in helping to produce the rise of cognitive psychology.
Despite the lack of attention from the mainstream, behavior analysis is alive and growing. Its application has been extended to areas such as language and cognitive training. Behavior analysis has long been extended as well to animal training, business and school settings, as well as hospitals and areas of research.
RFT distinguishes itself from Skinner's work by identifying and defining a particular type of operant conditioning known as arbitrarily applicable derived relational responding (AADRR). In essence, the theory argues that language is not associative but is learned and relational. For example, young children learn relations of coordination between names and objects, followed by relations of difference, opposition, before and after, and so on. These are "frames" in the sense that once relating of that kind is learned, any event can be related in that way mutually and in combination with other relations, given a cue to do so. This is a learning process that to date appears to occur only in humans possessing a capacity for language: relational framing has not yet been shown unambiguously in non-human animals despite many attempts to demonstrate it. AADRR is theorized to be a pervasive influence on almost all aspects of human behavior. The theory represents an attempt to provide a more empirically progressive account of complex human behavior while preserving the naturalistic approach of behavior analysis.
Evidence
Approximately 300 studies have tested RFT ideas. Supportive data exists in the areas needed to show that an action is "operant" such as the importance of multiple examples in training derived relational responding, the role of context, and the importance of consequences. Derived relational responding has also been shown to alter other behavioral processes such as classical conditioning, an empirical result that RFT theorists point to in explaining why relational operants modify existing behavioristic interpretations of complex human behavior. Empirical advances have also been made by RFT researchers in the analysis and understanding of such topics as metaphor, perspective taking, and reasoning.
Proponents of RFT often indicate the failure to establish a vigorous experimental program in language and cognition as the key reason why behavior analysis fell out of the mainstream of psychology despite its many contributions, and argue that RFT might provide a way forward. The theory is still somewhat controversial within behavioral psychology, however. At the current time the controversy is not primarily empirical since RFT studies publish regularly in mainstream behavioral journals and few empirical studies have yet claimed to contradict RFT findings. Rather the controversy seems to revolve around whether RFT is a positive step forward, especially given that its implications seem to go beyond many existing interpretations and extensions from within this intellectual tradition.
Applications
Acceptance and commitment therapy
RFT has been argued to be central to the development of the psychotherapeutic tradition known as acceptance and commitment therapy and clinical behavior analysis more generally. Indeed, the psychologist Steven C. Hayes was involved with the creation of both acceptance and commitment therapy and RFT, and has credited them as inspirations for one another. However, the extent and exact nature of the interaction between RFT as basic behavioral science and applications such as ACT has been an ongoing point of discussion within the field.
Gender constructs
Queer theorist and ACT therapist Alex Stitt observed how relational frames within a person's language development inform their cognitive associations pertaining to gender identity, gender role, and gender expression. How rigid or flexible a person is with their relational frames, Stitt proposed, will determine how adaptable their concept of gender is within themselves, and how open they are to gender diversity. Children, for example, may adhere to the rigid hierarchical frame "males are boys, and boys have short hair" leading to the false inference that anyone who has short hair is male. Likewise, children may adhere to oppositional frames, leading to false notions like the opposite of a lemon is a lime, the opposite of a cat is a dog, or the opposite of a man is a woman. Stitt observes that adults struggling with gender related issues within themselves, often hyperfocus on causal frames in an attempt to explain gender variance, or frames of comparison and distinction, potentially resulting in feelings of isolation and alienation.
Autism spectrum disorder
RFT provides conceptual and procedural guidance for enhancing the cognitive and language development capability (through its detailed treatment and analysis of derived relational responding and the transformation of function) of early intensive behavior intervention (EIBI) programs for young children with autism and related disorders. The Promoting the Emergence of Advanced Knowledge (PEAK) Relational Training System is heavily influenced by RFT.
Evolution science
More recently, RFT has also been proposed as a way to guide discussion of language processes within evolution science, whether within evolutionary biology or evolutionary psychology, toward a more informed understanding of the role of language in shaping human social behavior. The effort at integrating RFT into evolution science has been led by, among others, Steven C. Hayes, a co-developer of RFT, and David Sloan Wilson, an evolutionary biologist at Binghamton University. For example, in 2011, Hayes presented at a seminar at Binghamton, on the topic of "Symbolic Behavior, Behavioral Psychology, and the Clinical Importance of Evolution Science", while Wilson likewise presented at a symposium at the annual conference in Parma, Italy, of the Association for Contextual Behavioral Science, the parent organization sponsoring RFT research, on the topic of "Evolution for Everyone, Including Contextual Psychology". Hayes, Wilson, and colleagues have recently linked RFT to the concept of a symbotype and an evolutionarily sensible way that relational framing could have developed has been described.
See also
Cognitive grammar
Cognitive linguistics
Construction grammar
Constructivism (psychological school)
Embodied cognitive science
Enactivism
Image schema
Operant conditioning
Personal construct theory
Schema (psychology)
Situated cognition
References
Further reading
External links
Official website of the Association for Contextual Behavioral Science, which is one of the organizations that most commonly presents new work in RFT
An Introduction to Relational Frame Theory (free, multimedia, open-access, online tutorial)
Behavioral concepts
Psychological theories | Relational frame theory | Biology | 3,253 |
25,997,162 | https://en.wikipedia.org/wiki/FX.25%20Forward%20Error%20Correction | FX.25 is a protocol extension to the AX.25 Link Layer Protocol. FX.25 provides a Forward Error Correction (FEC) capability while maintaining legacy compatibility with non-FEC equipment. FX.25 was created by the Stensat Group in 2005, and was presented as a technical paper at the 2006 TAPR Digital Communications Conference in Tucson, AZ.
Overview
FX.25 is intended to complement the AX.25 protocol, not replace it. It provides an encapsulation mechanism that does not alter the AX.25 data or functionalities. An error correction capability is introduced at the bottom of Layer 2 in the OSI model.
The AX.25 Link Layer Protocol is extensively used in amateur radio communications. The packets are validated by a 16-bit CRC, and are discarded if one or more errors are detected. In many cases, such as space-to-earth telemetry, the packets are broadcast unidirectionally. No back-channel may be available to request retransmission of errored elements. Consequently, AX.25 links are inherently intolerant of errors.
The FX.25 protocol extension provides an error correction "wrapper" around the AX.25 packet, allowing for removal of errors at the receiving end. Data fields have been carefully chosen to allow reception of the AX.25 packet data within an FX.25 frame by a non-FEC decoder.
Technical Implementation
A composite FX.25 entity is called a "frame," distinguishing it from the AX.25 "packet" contained within. The FX.25 frame contains the following elements:
- Preamble
- Correlation Tag
- AX.25 Packet
  - AX.25 Packet Start
  - AX.25 Packet Body
  - AX.25 Packet Frame Check Sequence (FCS)
  - AX.25 Packet End
- Pad for bit-to-byte alignment
- FEC Check Symbols
- Postamble
The "FEC Codeblock" contains all elements except the Preamble, Correlation Tag, and Postamble. These three elements exist outside of the correction-space for the FEC algorithm. The Preamble and Postamble blocks are variable length, and are included to account for delays typically found in radio links - transmitter "key" to stable operation, receiver squelch latency, etc. The Correlation Tag is a Gold code, and contains inherent error tolerance. This is necessary to provide a "start of frame" marker without requiring a dependency on the FEC capability.
The FEC frame currently implements Reed–Solomon error correction algorithms, but is not restricted to these.
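As a rough illustration of the framing described above, the sketch below pads an AX.25 packet to a fixed Reed–Solomon data-block length, appends check symbols, and brackets the codeblock with a preamble, correlation tag, and postamble. Treat it as a sketch, not the specification: the tag value, padding byte, block sizes, and flag counts are illustrative assumptions, and the third-party reedsolo package is assumed for the Reed–Solomon arithmetic.

```python
# Minimal FX.25-style framing sketch (illustrative assumptions throughout).
from reedsolo import RSCodec  # third-party package: pip install reedsolo

CORRELATION_TAG = bytes.fromhex("B74DB7DF8A532F3E")  # assumed 64-bit tag value
RS_CHECK_SYMBOLS = 16   # e.g. an RS(255, 239) code: 239 data + 16 check bytes
DATA_BLOCK_SIZE = 239

def build_fx25_frame(ax25_packet: bytes) -> bytes:
    """Wrap an already HDLC-framed AX.25 packet in an FEC codeblock."""
    if len(ax25_packet) > DATA_BLOCK_SIZE:
        raise ValueError("packet too large for a single codeblock")
    # Pad to the fixed block length so the check symbols cover a full block;
    # a non-FEC decoder still sees the unmodified AX.25 packet at the front.
    padded = ax25_packet + b"\x7e" * (DATA_BLOCK_SIZE - len(ax25_packet))
    codeblock = RSCodec(RS_CHECK_SYMBOLS).encode(padded)  # data + check symbols
    # Preamble/postamble flags give transmitter and receiver time to settle.
    preamble, postamble = b"\x7e" * 8, b"\x7e" * 2
    return preamble + CORRELATION_TAG + bytes(codeblock) + postamble
```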
Performance
Performance improvement will be a function of AX.25 packet size combined with the noise characteristics of the transmission channel. Initial performance testing involved transmission of 61 FX.25 frames over an interval of about 15 minutes.
- 9 frames were received without errors
- 19 frames were received with correctable errors
- 33 frames were received with uncorrectable errors
15% of the AX.25 packets [9/61] were decodable without the FEC capability
46% of the AX.25 packets [(9+19)/61] were decodable with the FEC capability
References
External links
2006 TAPR DCC webpage
FX.25 Specification (pdf)
FX.25 Presentation Slides from 2006 TAPR DCC (pdf)
FX.25 Google Discussion Group
AX.25 + FEC = FX.25 -- details about the "Dire Wolf" software TNC implementation of FX.25.
Packet radio
Link protocols
Telecommunication protocols | FX.25 Forward Error Correction | Technology | 717 |
69,359,360 | https://en.wikipedia.org/wiki/Nitrogen%20pentahydride | Nitrogen pentahydride, also known as ammonium hydride, is a hypothetical compound with the chemical formula NH5. There are two theoretical structures for nitrogen pentahydride. One is a molecular NH5 with trigonal bipyramidal geometry, in which the nitrogen atom and hydrogen atoms are covalently bonded; its symmetry group is D3h. The other predicted structure is an ionic compound composed of an ammonium ion and a hydride ion (NH4+H−). The compound has never been synthesized, and its existence has not been proven; related experiments have not directly observed nitrogen pentahydride, and it is only speculated to be a reactive intermediate on the basis of reaction products. Theoretical calculations show the molecule is thermodynamically unstable, possibly for reasons similar to the instability of nitrogen pentafluoride, so the likelihood of its existence is low. However, nitrogen pentahydride might exist under special conditions or at high pressure. Nitrogen pentahydride was considered for use as a solid rocket fuel in research in 1966.
Research and attempts
Some studies suggest that nitrogen pentahydride may exist within the crystal lattices of certain metals, such as mercury and lithium. Related studies have explored the possibility of producing it through a substitution reaction with an ammonium halide, and there have also been attempts to react ammonium compounds with deuterides to produce the pentahydride; however, experiments indicate that it may only be a reactive intermediate that immediately decomposes into ammonia and hydrogen, and the same is true for experiments using deuterium. All of the studies above rest on theoretical calculations or indirect evidence: the existence of nitrogen pentahydride has not been observed, and the substance has not been shown to exist.
One experiment attempted a displacement reaction between ammonium trifluoroacetate and lithium hydride in the molten state in order to probe the possible existence of nitrogen pentahydride:
CF3COONH4 + LiH → CF3COOLi + [NH4H]
In the reaction between ammonium trifluoroacetate and lithium deuteride, the ammonia produced contains 85% ordinary ammonia and 15% monodeuterated ammonia, while the hydrogen produced contains 66% hydrogen deuteride, 21% hydrogen gas and 13% deuterium gas. In the products collected from tetradeuterated ammonium trifluoroacetate and lithium hydride, the ammonia contains ND3, NHD2 and NH2D, while the hydrogen contains 68% hydrogen deuteride, 18% hydrogen gas and 14% deuterium gas. It is therefore speculated that the reaction may proceed by two routes: direct decomposition into ammonia and hydrogen, or initial formation of an ammonium deuteride reactive intermediate, which decomposes partly via deuterium anions and hydrogen cations (yielding hydrogen deuteride and ammonia) and partly via hydride ions or deuterium cations (yielding hydrogen or deuterium gas).
Any such intermediate immediately decomposed into hydrogen and ammonia, so its existence could not be proven; experiments with deuterium gave the same result:
[NH4H] → NH3 + H2
Structure
Several papers have carried out theoretical calculations on nitrogen pentahydride and conclude that it is unlikely to form an ionic crystal of hydride and ammonium ions. It is possible, however, that a hydrogen atom attaches to one of the hydrogen atoms of ammonium. The molecule may also resemble nitrogen pentafluoride, forming a three-center two-electron bond similar to that in carbonium ions, or the five hydrogen atoms may be arranged in a trigonal bipyramidal structure around the nitrogen atom.
Related compounds
A compound similar to nitrogen pentahydride is the theoretical nitrogen pentafluoride, whose structure is assumed to be tetrafluoroammonium fluoride (NF4+F−). Like nitrogen pentahydride, it is a compound of nitrogen with five identical atoms, and it too is hypothetical: it has never been synthesized and only theoretical research exists. Other pnictogen pentahydrides are theoretically more stable; phosphorus pentahydride (PH4H), for example, is more stable than nitrogen pentahydride but still unstable with respect to decomposition into phosphine and hydrogen gas. Its organic derivatives (phosphoranes) are more stable, such as the stable pentaphenylphosphorus (Ph5P). Still heavier pnictogen pentahydrides are more likely to exist, such as the theoretical arsenic pentahydride.
References
Ammonium compounds
Hydrides
Hypothetical chemical compounds | Nitrogen pentahydride | Chemistry | 1,017 |
5,202,789 | https://en.wikipedia.org/wiki/Stress%20relaxation | In materials science, stress relaxation is the observed decrease in stress in response to strain generated in the structure. It arises primarily from keeping the structure in a strained condition for a finite interval of time, which causes some amount of plastic strain. It should not be confused with creep, which is a constant state of stress with an increasing amount of strain.
Since relaxation relieves the state of stress, it also has the effect of relieving the associated equipment reactions. Thus, relaxation has the same effect as cold springing, except it occurs over a longer period of time.
The amount of relaxation which takes place is a function of time, temperature and stress level, thus the actual effect it has on the system is not precisely known, but can be bounded.
Stress relaxation describes how polymers relieve stress under constant strain. Because they are viscoelastic, polymers behave in a nonlinear, non-Hookean fashion. This nonlinearity is described by both stress relaxation and a phenomenon known as creep, which describes how polymers strain under constant stress. Experimentally, stress relaxation is determined by step strain experiments, i.e. by applying a sudden one-time strain and measuring the build-up and subsequent relaxation of stress in the material, in either extensional or shear rheology.
Viscoelastic materials have the properties of both viscous and elastic materials and can be modeled by combining elements that represent these characteristics. One viscoelastic model, called the Maxwell model, predicts behavior akin to a spring (elastic element) in series with a dashpot (viscous element), while the Voigt model places these elements in parallel. Although the Maxwell model is good at predicting stress relaxation, it is fairly poor at predicting creep. On the other hand, the Voigt model is good at predicting creep but rather poor at predicting stress relaxation (see viscoelasticity).
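As a minimal numerical sketch of the Maxwell model's stress-relaxation prediction: under a step strain ε0 held constant, the model gives σ(t) = E·ε0·exp(−t/τ), with relaxation time τ = η/E. The modulus, viscosity, and strain values below are illustrative assumptions, not data for any particular material.

```python
# Maxwell-model stress relaxation under a step strain (illustrative values).
import math

E = 2.0e9       # elastic modulus of the spring element, Pa (assumed)
eta = 1.0e12    # viscosity of the dashpot element, Pa*s (assumed)
eps0 = 0.01     # applied step strain, held constant
tau = eta / E   # relaxation time, s (500 s with these values)

for t in (0.0, tau, 3 * tau):
    sigma = E * eps0 * math.exp(-t / tau)   # stress decays exponentially
    print(f"t = {t:6.0f} s  ->  stress = {sigma / 1e6:6.2f} MPa")
```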
The extracellular matrix and most tissues are stress relaxing, and the kinetics of stress relaxation have been recognized as an important mechanical cue that affects the migration, proliferation, and differentiation of embedded cells.
Stress relaxation calculations can differ for different materials:
To generalize, Obukhov uses power dependencies:
where σ0 is the maximum stress at the time the loading was removed (t*), and n is a material parameter.
Vegener et al. use a power series to describe stress relaxation in polyamides:
To model stress relaxation in glass materials Dowvalter uses the following:
where α is a material constant and b and tn depend on processing conditions.
The following non-material parameters all affect stress relaxation in polymers:
Magnitude of initial loading
Speed of loading
Temperature (isothermal vs non-isothermal conditions)
Loading medium
Friction and wear
Long-term storage
See also
Creep
Viscoelasticity
Standard Linear Solid Model
Burgers material
Maxwell material
Kelvin–Voigt material
References
Materials science | Stress relaxation | Physics,Materials_science,Engineering | 568 |
36,388,598 | https://en.wikipedia.org/wiki/Electrical%20isolation%20test | In electrical engineering, an electrical isolation test is a direct current (DC) or alternating current (AC) resistance test that is performed on sub-systems of an electronic system to verify that a specified level of isolation resistance is met. Isolation testing may also be conducted between one or more electrical circuits of the same subsystem. The test often reveals problems that occurred during assembly, such as defective components, improper component placement, and insulator defects that may cause inadvertent shorting or grounding to chassis, in turn, compromising electrical circuit quality and product safety.
Isolation resistance measurements may be achieved using a high input impedance ohmmeter, digital multimeter (DMM) or current-limited Hipot test instrument. The selected equipment should not over-stress sensitive electronic components comprising the subsystem. The test limits should also consider semiconductor components within the subsystem that may be activated by the potentials imposed by each type of test instrumentation. A minimum acceptable resistance value is usually specified (typically in the megaohm (MΩ) range per circuit tested). Multiple circuits having a common return may be tested simultaneously, provided the minimum allowable resistance value is based on the number of circuits in parallel.
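A minimal sketch of that parallel-testing rule: N circuits sharing a common return are measured as N resistances in parallel, so the combined reading must be at least R_min/N for each individual circuit to be presumed compliant. The 10 MΩ specification and measured value below are illustrative assumptions.

```python
# Acceptance threshold when testing several isolated circuits at once.
def parallel_threshold(r_min_ohms: float, n_circuits: int) -> float:
    """Minimum acceptable combined reading for n circuits tested together."""
    return r_min_ohms / n_circuits

R_MIN = 10e6     # assumed per-circuit specification: 10 Mohm
reading = 4.0e6  # assumed combined isolation-resistance measurement
n = 4
if reading >= parallel_threshold(R_MIN, n):
    print("PASS: reading consistent with every circuit meeting 10 Mohm")
else:
    print("FAIL: retest the circuits individually")
```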
Five basic isolation test configurations exist:
Single Un-referenced End-Circuit – isolation between one input signal and circuit chassis/common ground.
Multiple Un-referenced End-Circuits with a single return – isolation between several input signals and circuit chassis/common ground.
Subsystem with Isolated Common – isolation between signal input and common ground.
Common Chassis Ground – isolation between circuit common and chassis (chassis grounded).
Isolated Circuit Common – isolation between circuit common and chassis (chassis floating).
Isolation measurements are made with the assembly or subsystem unpowered and disconnected from any support equipment.
See also
Dielectric withstand test
Electrical breakdown
Galvanic isolation
References
Electrical tests | Electrical isolation test | Engineering | 385 |
18,400,581 | https://en.wikipedia.org/wiki/Chelating%20resin | Chelating resins are a class of ion-exchange resins. They are almost always used to bind cations, and utilize chelating agents covalently attached to a polymer matrix. Chelating resins have the same bead form and polymer matrix as usual ion exchangers. Their main use is for pre-concentration of metal ions in a dilute solution. Chelating ion-exchange resins are used for brine decalcification in the chlor-alkali industry, the removal of boron from potable water, and the recovery of precious metals in solutions.
Properties and structure
Chelating resins operate similarly to ordinary ion-exchange resins.
Most chelating resins are polymers (copolymers, to be precise) with reactive functional groups that chelate metal ions. The variation among chelating resins arises from the nature of the chelating agents pendant from the polymer backbone. Dowex chelating resin A-1, also known as Chelex 100, is based on iminodiacetic acid in a styrene-divinylbenzene matrix. Dowex A-1 is available commercially and is widely used to determine general properties of chelating resins, such as the rate-determining step and pH dependence. Dowex A-1 is produced from chloromethylated styrene-divinylbenzene copolymer via amination with iminodiacetic acid.
Chelating resins have almost negligible affinity for alkali and alkaline earth metals; small quantities of resin can therefore be used to concentrate trace metals in natural water systems or biological fluids, where alkali and alkaline earth metal concentrations are three or four orders of magnitude greater than the trace metal concentrations.
Other functional groups bound to chelating resins are aminophosphonic acids, thiourea, and 2-picolylamine.
Application in heavy metal remediation
Soil contaminated with heavy metals including radionuclides is mitigated primarily using chelating resins.
Chelating polymers (ion-exchange resins) have been proposed for maintenance therapy of pathologies accompanied by metal accumulation, such as hereditary hemochromatosis (iron overload) or Wilson's disease (copper overload), by chelating the metal ions in the gastrointestinal tract and thus limiting their biological availability.
References
Additional resources
Yang, Dong, Xijun Chang, Yongwen Liu, and Sui Wang. "Synthesis and Efficiency of a Spherical Macroporous Epoxy-Polyamide Chelating Resin for Preconcentrating and Separating Trace Noble Metal Ions." Annali di Chimica 95.1-2 (2005): 111-14.
Zougagh, Mohammed, J. M. Cano Pavón, and A. Garcia De Torres. "Chelating Sorbents Based on Silica Gel and Their Application in Atomic Spectrometry." Analytical and Bioanalytical Chemistry 381.6 (2005): 1103-113.
R. R. Greenberg and H. M. Kingston. "Trace Element Analysis of Natural Water Samples by Neutron Activation Analysis with Chelating Resin." Center for Analytical Chemistry, National Bureau of Standards, Washington, D.C. 20234.
Analytical chemistry
Polymers
Resins
Chelating agents | Chelating resin | Physics,Chemistry,Materials_science | 682 |
29,434,769 | https://en.wikipedia.org/wiki/Alternative%20Energy%20Institute | Alternative Energy Institute (also known as AEI) was West Texas A&M University's alternative energy research branch. Formed in 1977, the program was nationally and internationally recognized and, along with its research, provided education and outreach across the U.S. and around the globe.
History
AEI was founded at West Texas State University (now West Texas A&M University) in 1977 by Dr. Vaughn Nelson, Dr. Earl Gilmore and Dr. Robert Barieau, in the aftermath of the 1973 oil crisis. The physics department at West Texas State was already experimenting with wind power, and these three individuals took the initiative to found a department concentrating on the study of wind. The basic goals of the department were:
To test wind turbine designs.
Improve on current aerodynamic design.
Teach public about the state of wind & solar technology.
First Decade: 1977 - 1987
Initially, much of the organization's focus was on small wind turbine research and improving blade designs. At this time it installed test turbines and water-pumping applications throughout Texas. These projects allowed AEI to develop and improve upon blade design theory and production. During this period the organization also provided consulting in Latin America, Jamaica, Hawaii, and Europe, training villagers and local groups in wind energy systems.
At this time AEI operated from three locations: one off-campus and two on-campus. At these locations they customized testing on blade designs, turbine generator units, and complete designs.
Second Decade: 1987 - 1997
During this decade the organization focused on green building projects, most notably AEI's Solar Energy Building. Finished in 1993, the building served as the main site for AEI's operations for seventeen years. On-site generation, including a 10 kW Bergey wind turbine and 3 kW of photovoltaics, covered all of the building's energy usage.
Several electric vans were donated to the organization at this time, two of which were maintained for several years. These vans were used to collect data and complete local wind energy projects, as well as to give campus and test site tours.
Starting in 1995, AEI began working with the Texas General Land Office to provide Texas Wind Data to the public. While the GLO data sites have since been decommissioned, the organization still collects, analyzes and publishes Texas Wind Data for the general public.
Now: 2022 - Present
AEI is currently focusing on developing a new degree plan at WTAMU as well as continuing its research on green energy systems. In terms of testing, the organization focuses on small blades and turbines, particularly innovative horizontal- and vertical-axis designs.
In 2010, the AEI test site was moved to the Nance Ranch. At this time, the organization's offices were also moved, to WTAMU's Palo Duro Research Facility.
In the late 1990s, AEI also began developing a Fortran program called ROTOR. The program predicts theoretical power curves for blade designs and produces both on-screen and printed output. It has been modified several times since then and is still in use today.
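As an illustration of the kind of calculation such a blade-design code performs, the sketch below computes a theoretical power curve from the standard relation P = ½ρAC_p·v³, capped at rated power. This is not ROTOR itself; the rotor radius, power coefficient, and ratings are assumed values for a small turbine.

```python
# Theoretical wind-turbine power curve: P = 0.5 * rho * A * Cp * v^3.
import math

rho = 1.225           # air density, kg/m^3
radius = 1.5          # assumed small-turbine rotor radius, m
cp = 0.35             # assumed power coefficient (Betz limit is ~0.593)
rated_power = 1500.0  # assumed rated output, W
cut_in, cut_out = 3.0, 25.0  # assumed operating wind-speed window, m/s

def power_output(v: float) -> float:
    """Electrical output (W) at hub-height wind speed v (m/s)."""
    if v < cut_in or v > cut_out:
        return 0.0
    p = 0.5 * rho * math.pi * radius**2 * cp * v**3
    return min(p, rated_power)  # generator limits output at rated power

for v in (3.0, 6.0, 9.0, 12.0, 15.0):
    print(f"{v:4.1f} m/s -> {power_output(v):7.1f} W")
```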
During this time, AEI's Wind Data program has greatly expanded. In addition to working with wind farmers to provide data for the public, the organization also analyzes and publishes data for private organizations. In total, the organization now collects data from 75 sites scattered across Texas. Fifty of these data sites are archived online, 31 of which offer data for public use by researchers and developers.
Education
Courses
Since 2009, AEI has been offering online alternative energy courses at WTAMU on Wind energy, Solar energy and Renewable energy. Currently, WT offers one course per semester, with alternating subjects. The courses are taught by AEI staff and are open to WT students and those seeking certification.
In addition to the online courses offered by WTAMU, AEI has also authored renewable energy textbooks and educational CDs. The CDs cover the subjects of wind energy, wind turbines, solar energy and wind water pumping. Some CDs are also available in Spanish.
Seminars & Symposiums
Windy Land Owners
Since 1989, AEI has given annual Windy Land Owners seminars. Designed to teach landowners and other interested parties general information about the wind industry, most of the seminars took place in the states surrounding Texas. Due to increased interest, AEI began giving seminars in Texas starting in 2001.
Topics covered at these seminars include:
Wind farm basics
Wind resources in Texas
Potential problem and contract considerations
Starting in 2009, AEI also began offering presentations from the WLO Seminars online for general information.
WEATS
Launched in 1998, the Wind Energy Applications Training Symposium (WEATS) is an internationally acclaimed workshop for the Native American community. Designed for project planners, developers, utility officials, and engineers directly involved with energy projects, it is a resource both for networking and for developing practical knowledge.
Topics covered at this symposium include:
Practical knowledge and analytical tools for conducting project pre-feasibility and identification analysis
Implementation of small and large wind energy projects
Lectures from National Renewable Energy Laboratory and local experts about the capabilities of the technology and the economic and financial aspects of sustainable project development
Site visits
Community Involvement
In addition to its seminars and workshops, AEI also regularly offers consulting to potential wind farmers and hosts tours of its research facilities.
As part of its community outreach, the organization also presents at the Caprock Science Fair and other local schools, informing students about wind energy via displays, demonstrations and brochures at the elementary, junior high and high school levels.
See also
List of energy storage projects
References
External links
Alternative Energy Institute Official Web Site (Windenergy.org)
Renewable Wind Test Center Official Web Site (Windtestcenter.org)
West Texas A&M University
Energy infrastructure in Texas
Sustainable energy
Energy research institutes
Research institutes in Texas
Research institutes established in 1977 | Alternative Energy Institute | Engineering | 1,176 |
24,005,318 | https://en.wikipedia.org/wiki/C8H17NO | The molecular formula C8H17NO (molar mass: 143.23 g/mol, exact mass: 143.1310 u) may refer to:
Conhydrine
Valnoctamide
Valpromide, or 2-propylpentanamide
Molecular formulas | C8H17NO | Physics,Chemistry | 58 |
65,507,284 | https://en.wikipedia.org/wiki/Cari%20Borr%C3%A1s | Caridad Borrás is a Spanish medical physicist. Her career started in 1964 at the Santa Creu i Sant Pau Hospital in Barcelona. From 1988 to 2000, she was Regional Advisor of the Radiological Health Program and, from 2000 to 2002, Coordinator of Essential Drugs and Technology at the Pan American Health Organization in Washington D.C.
Borrás has actively promoted among health administrators in Latin American and Caribbean countries that medical physicists and radiation protection experts are an essential requirement to achieve high-quality radiation oncology and diagnostic imaging procedures. She has received several awards, including the Edith H. Quimby Lifetime Achievement Award from American Association of Physicists in Medicine.
Early life and education
Caridad Borrás grew up in Barcelona, Spain and did her undergraduate and doctorate studies in Physics at the University of Barcelona, where she obtained the degrees in 1964 and 1974. A Fulbright scholarship financed a stay at Thomas Jefferson University in Philadelphia, Pennsylvania, during which she did doctoral research under the direction of Robert O. Gorson and Robert L. Brent. Her primary interest had been radiation biology as she thought that physics, through radiation, could explain the principles of life.
After receiving her Doctor of Sciences Degree she returned to the United States and took a position at the West Coast Cancer Foundation in San Francisco, California, where she set up quality control and quality assurance programs in diagnostic radiology in several hospitals, something not common for medical physicists in those days.
She got involved in diagnostic radiology committees and AAPM task groups, later developing international cooperation with Latin America. Representing the AAPM International Affairs Committee in 1984 she promoted the formation of ALFIM, the Latin American Federation of Medical Physics.
Since 1988 she directed the Radiological Health Program of the Pan American Health Organization (PAHO) and World Health Organization (WHO) in the Washington DC area. She served as advisor to Latin American Health Ministers and professional societies on the need for medical physics and radiological protection standards to reach high-quality radiation oncology and diagnostic imaging services.
Under her representation, PAHO took an active part as one of the organizations that prepared the International Basic Safety Standards for Protection Against Ionizing Radiation and for the Safety of Radiation Sources, a document endorsed by FAO, IAEA, ILO, the NEA of the OECD, PAHO, and WHO. Once the first edition was published in 1996, Borrás was a strong advocate for implementing the standards in Latin America and the Caribbean. Borrás was the editor of Organization, Development, Quality Assurance, and Radiation Protection in Radiology Services: Imaging and Radiation Therapy, a textbook published by PAHO in English in 1996 and in Spanish in 1997.
Addressed to government and hospital administrators, it discussed infrastructure, personnel, training, radiation protection and quality assurance needs in radiation oncology and diagnostic radiology services. In 2000 she was promoted to PAHO Coordinator of Essential Drugs and Technology, a post she held until 2002.
Career as a medical physicist
Borrás's career as a medical physicist started in 1964, following her undergraduate schooling and while she was waiting for a scholarship to pursue studies abroad, in the radiation oncology and nuclear medicine department of the Santa Creu i Sant Pau Hospital in Barcelona. Her activities included calibration of the cobalt-60 and orthovoltage radiotherapy units, part-time teaching in the radiation oncology department, and radiobiology research, anticipating what would later become the typical work of a medical physicist.
Later, she worked as a radiological physicist in San Francisco, CA; Recife, Pernambuco, Brazil; and Washington, DC. She is board-certified in Radiological Physics by the American Board of Radiology (ABR) and, since 1991, in Medical Health Physics by the American Board of Medical Physics. She is a member of the American Association of Physicists in Medicine, the American College of Radiology (ACR), the Health Physics Society, the Society of Nuclear Medicine and Molecular Imaging, and the Spanish Medical Physics (SEFM) and Radiation Protection Societies. At the International Organization for Medical Physics she chaired the Science Committee for nine years, and at the International Union for Physical and Engineering Sciences in Medicine (IUPESM) she co-chaired the Health Technology Task Group. She currently chairs the AAPM International Educational Activities Committee.
She holds an adjunct faculty position at The George Washington University School of Medicine and Health Sciences, and works as a consultant for IAEA, PAHO and WHO and other international organizations.
Awards and honors
Borrás is a Fellow of the American College of Radiology, the American Association of Physicists in Medicine, the International Organization for Medical Physics, the Health Physics Society, and the International Union for Physical and Engineering Sciences in Medicine. She has been given awards by the IUPESM, the SEFM, the AAPM (winner of the 2013 Edith H. Quimby Lifetime Achievement Award, one of the four major awards given by AAPM), the IOMP, the Latin American Federation of Medical Physics, the American College of Clinical Engineering, the ACR and the ABR.
Books
Borrás, Caridad (editor), Organization, Development, Quality Assurance, and Radiation Protection in Radiology Services: Imaging and Radiation Therapy, PAHO/WHO, 1997.
Borrás, Caridad, Hanson, Gerald P., Jiménez, Pablo. History of the Radiological Health Program of the Pan American Health Organization: 1960-2006, PAHO/WHO, 2006.
Borrás, Caridad (editor). Defining the Medical Imaging Requirements for a Rural Health Center, Springer Science+Business Media, Singapore, 2017.
See also
List of female scientists in the 21st century
References
Medical physics
Spanish women physicists
Spanish biophysicists
University of Barcelona alumni
Radiation protection
Biophysicists
Scientists from Barcelona | Cari Borrás | Physics | 1,210 |
3,110,402 | https://en.wikipedia.org/wiki/Kappa%20Persei | Kappa Persei, or κ Persei, is a triple star system in the northern constellation of Perseus. Based upon an annual parallax shift of 28.93 mas, it is located at a distance of 113 light-years from the Sun.
The system consists of a spectroscopic binary, designated Kappa Persei A, which can be seen with the naked eye, having an apparent visual magnitude of 3.80. The third star, designated Kappa Persei B, is of magnitude 13.50.
Kappa Persei A's two components are designated Kappa Persei Aa (officially named Misam, the traditional name of the entire system) and Ab.
Nomenclature
κ Persei (Latinised to Kappa Persei) is the system's Bayer designation. The designations of the two constituents as Kappa Persei A and B, and those of A's components - Kappa Persei Aa and Ab - derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
The traditional name comes from the Arabic مِعْصَم miʽṣam 'wrist'.
In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Misam for the component Kappa Persei Aa on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, 大陵 (Dà Líng), meaning Mausoleum, refers to an asterism consisting of Kappa Persei, 9 Persei, Tau Persei, Iota Persei, Beta Persei (Algol), Rho Persei, 16 Persei and 12 Persei. Consequently, the Chinese name for Kappa Persei itself is 大陵三 (Dà Líng sān, the Third Star of Mausoleum).
Properties
At its distance, the visual magnitude of Kappa Persei is diminished by an extinction factor of 0.06 due to interstellar dust. It has a relatively high proper motion totaling 0.230 arcseconds per year. There is a 76.3% chance that it is a member of the Hyades-Pleiades stream of stars that share a common motion through space.
With an estimated age of 4.58 billion years, Kappa Persei Aa is an evolved G-type giant star with a stellar classification of G9.5 IIIb. It is a red clump giant, which means that it is generating energy at its core through the nuclear fusion of helium. The star has about 1.5 times the mass of the Sun and 9 times the Sun's radius. It radiates 40 times the solar luminosity from its outer atmosphere at an effective temperature of 4,857 K.
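As a quick consistency check of the figures quoted above, a short sketch using the parallax distance rule d[pc] = 1000/parallax[mas] and the Stefan–Boltzmann scaling L/L☉ = (R/R☉)²(T/T☉)⁴; the solar effective temperature of 5772 K is an assumed reference constant.

```python
# Verify the quoted distance and luminosity from parallax, radius and Teff.
parallax_mas = 28.93
d_pc = 1000.0 / parallax_mas                 # distance in parsecs
print(f"{d_pc:.1f} pc = {d_pc * 3.2616:.0f} light-years")  # ~113 ly

r_rsun, t_eff, t_sun = 9.0, 4857.0, 5772.0
l_lsun = r_rsun**2 * (t_eff / t_sun)**4      # Stefan-Boltzmann scaling
print(f"luminosity ~ {l_lsun:.0f} Lsun")     # ~41, matching the quoted ~40
```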
Kappa Persei B is at an angular separation of 44.10 arc seconds along a position angle of 319°, as of 2009.
References
G-type giants
Horizontal-branch stars
Spectroscopic binaries
Persei, Kappa
Perseus (constellation)
BD+44 0631
Persei, 27
019476
014668
0941 | Kappa Persei | Astronomy | 642 |
450,085 | https://en.wikipedia.org/wiki/Dolby | Dolby Laboratories, Inc. (often shortened to Dolby Labs and known simply as Dolby) is a British-American technology corporation specializing in audio noise reduction, audio encoding/compression, spatial audio, and HDR imaging. Dolby licenses its technologies to consumer electronics manufacturers.
History
Dolby Labs was founded by Ray Dolby (1933–2013) in London, England, in 1965. In the same year, he invented the Dolby Noise Reduction system, a form of audio signal processing for reducing the background hissing sound on cassette tape recordings. His first U.S. patent on the technology was filed in 1969, four years later. The method was first used by Decca Records in the UK. After this, other companies began purchasing Dolby’s A301 technology, which was the professional noise reduction system used in recording, motion picture, broadcasting stations and communications networks. These companies include BBC, Pye, IBC, CBS Studios, RCA, and Granada.
He moved the company headquarters to the United States (San Francisco, California) in 1976. The first product Dolby Labs produced was the Dolby 301 unit which incorporated Type A Dolby Noise Reduction, a compander-based noise reduction system. These units were intended for use in professional recording studios.
Dolby was persuaded by Henry Kloss of KLH to manufacture a consumer version of his noise reduction. Dolby worked more on companding systems and introduced Type B in 1968.
Dolby also sought to improve film sound. As the corporation's history explains:
Upon investigation, Dolby found that many of the limitations in optical sound stemmed directly from its significantly high background noise. To filter this noise, the high-frequency response of theatre playback systems was deliberately curtailed… To make matters worse, to increase dialogue intelligibility over such systems, sound mixers were recording soundtracks with so much high-frequency pre-emphasis that high distortion resulted.
The first film with Dolby sound was A Clockwork Orange (1971). The company was approached by Stanley Kubrick, who wanted to use Dolby’s noise reduction system to facilitate the film’s extensive mixing. The film went on to use Dolby noise reduction on all pre-mixes and masters, but a conventional optical soundtrack on release prints. Callan (1974) was the first film with a Dolby-encoded optical soundtrack. In 1975, Dolby released Dolby Stereo, which included a noise reduction system in addition to more audio channels (Dolby Stereo could actually contain additional center and surround channels matrixed from the left and right). The first film with a Dolby-encoded stereo optical soundtrack was Lisztomania (1975), although this only used an LCR (Left-Center-Right) encoding technique. The first true LCRS (Left-Center-Right-Surround) soundtrack was encoded on the movie A Star Is Born in 1976. In less than ten years, 6,000 cinemas worldwide were equipped to use Dolby Stereo sound. Dolby reworked the system slightly for home use and introduced Dolby Surround, which only extracted a surround channel, and the more impressive Dolby Pro Logic, which was the domestic equivalent of the theatrical Dolby Stereo. In 2005, Dolby's stereo 4-channel optical theater surround was inducted into the TECnology Hall of Fame, an honor given to "products and innovations that have had an enduring impact on the development of audio technology."
Dolby developed a digital surround sound compression scheme for the cinema. Dolby Stereo Digital (now simply called Dolby Digital) was first featured on the 1992 film Batman Returns. Introduced to the home theater market as Dolby AC-3 with the 1995 laserdisc release of Clear and Present Danger, the format did not become widespread in the consumer market, partly because of extra hardware that was necessary to make use of it, until it was adopted as part of the DVD specification. Dolby Digital is now found in the HDTV (ATSC) standard of the United States, DVD players, and many satellite-TV and cable-TV receivers.
On February 17, 2005, the company became public, offering its shares on the New York Stock Exchange, under the symbol DLB. On March 15, 2005, Dolby celebrated its 40th anniversary at the ShoWest 2005 Festival in San Francisco.
On January 8, 2007, Dolby announced the arrival of Dolby Volume at the International Consumer Electronics Show.
On June 18, 2010, Dolby introduced Dolby Surround 7.1 and set up theaters worldwide with 7.1 surround speaker configurations to deliver theatrical 7.1 surround sound. The first film released in this format was Pixar's Toy Story 3, which was later followed by fifty releases using the format. About 80% of films released are now mixed in Dolby Surround 7.1 by default.
In April 2012, Dolby introduced its Dolby Atmos, a new cinematic technology adding overhead sound, first applied in Pixar's motion picture Brave. In July 2014, Dolby Laboratories announced plans to bring Atmos to home theater. The first television show to use the technology on disc was Game of Thrones.
On February 24, 2014, Dolby acquired Doremi Labs for $92.5 million in cash plus an additional $20 million in contingent consideration that may be earned over a four-year period.
In May 2015, Dolby reopened Vine Theatre as a 70-seat showcase theater, known as Dolby @ Vine or Dolby Screening Room Hollywood Vine.
In May 2019, Dolby added Dolby Atmos to hundreds of newer songs in the music industry.
In May 2020, Dolby launched a developer platform, Dolby.io, aimed at providing developers self-service access to Dolby technologies through public APIs. It allows any person or organization, small or large, to integrate features such as media enhancement and transcoding, spatial audio, high-quality video communication, and low-latency streaming into their websites, apps, and games.
Technologies
Analog audio noise reduction
Dolby A: professional noise reduction systems for analog reel-to-reel tape and cassettes.
Dolby NR/B/C/S: consumer noise reduction systems for tapes and analog cassettes.
Dolby SR (Spectral Recording): professional four-channel noise reduction system in use since 1986, which improves the dynamic range of analog recordings and transmissions by as much as 25 dB. Dolby SR is utilized by recording and post-production engineers, broadcasters, and other audio professionals. It is also the benchmark in analog film sound, being included today on nearly all 35 mm film prints. On films with digital soundtracks, the SR track is used in cinemas not equipped for digital playback, and it serves as a backup in case of problems with the digital track.
Dolby FM: noise reduction system for FM broadcast radio. Dolby FM was tried by a few radio stations starting with WFMT in 1971. It used Dolby B, combined with 25 microsecond pre-emphasis. A small number of models of tuners and receivers were offered with the necessary decoder built in. In addition, a few cassette deck models appeared that allowed the deck's internal Dolby B decoder to be put in the line level pass-through path, permitting its use with Dolby FM broadcasts. The system was not successful and was on the decline by 1974.
Dolby HX Pro: single-ended system used on high-end tape recorders to increase headroom. The recording bias is lowered as the high-frequency component of the signal being recorded increases, and vice versa. It does nothing to the actual audio that is being recorded, and it does not require a special decoder. Any HX Pro recorded tape will have, in theory, better sound on any deck.
Dolby Advanced Audio: Dolby surround sound with a locked preferred volume level; it optimizes audio performance for specific PC models and allows turning up the volume of the built-in speakers without distorting the sound.
Audio encoding/compression
Dolby Surround
Dolby Digital (also known as AC-3) is a lossy audio compression format. It supports channel configurations from mono up to six discrete channels (referred to as "5.1"). This format first allowed and popularized surround sound. It was first developed for movie theater sound and spread to Laserdisc and DVD. It has been adopted in many broadcast formats including all North American digital television (ATSC), DVB-T, direct broadcast satellite, cable television, DTMB, IPTV, and surround sound radio services. It is also part of both the Blu-ray and the now-defunct HD DVD standards. Dolby Digital is used to enable surround sound output by most video game consoles. Several personal computers support converting all audio to Dolby Digital for output.
Dolby Digital EX: introduces a matrix-encoded center rear surround channel to Dolby Digital for 6.1 channel output. This center-rear channel is often split to two rear back speakers for 7.1 channel output.
Dolby Digital Plus (also known as E-AC-3) is a lossy audio codec based on Dolby Digital that is backward compatible, but more advanced. The DVD Forum has selected Dolby Digital Plus as a standard audio format for HD DVD video. It supports data rates up to 6 Mbit/s, an increase from Dolby Digital's 640 kbit/s maximum. On Blu-ray, Dolby Digital Plus is implemented differently, as a legacy 640 kbit/s Dolby Digital stream plus an additional stream to expand the surround sound, with a total bandwidth of approximately 1.7 Mbit/s. Dolby Digital Plus is also optimized for limited data-rate environments such as Digital broadcasting.
Dolby Digital Live is a real-time hardware encoding technology for interactive media such as video games. It converts any audio signals on a PC or game console into the 5.1-channel Dolby Digital format and transports it via a single S/PDIF cable. A similar technology known as DTS Connect is available from competitor DTS.
Dolby E: professional coding system optimized for the distribution of surround and multichannel audio through digital two-channel post-production and broadcasting infrastructures, or for recording surround audio on two audio tracks of conventional digital video tapes, video servers, communication links, switchers, and routers. The Dolby E signal does not reach viewers at home. It is transcoded to Dolby Digital at a lower data rate for final DTV transmission.
Dolby Stereo (also known as Stereo A): original analog optical technology developed for 35 mm prints and is encoded with four sound channels: Left/Center/Right (which are located behind the screen) and Surround (which is heard over speakers on the sides and rear of the theatre) for ambient sound and special effects. This technology also employs A-type or SR-type noise reduction, listed above with regards to analog cassette tapes. See also Dolby Surround
Dolby TrueHD: Offers bit-for-bit sound reproduction identical to the studio master. Seven full-range 24-bit/96 kHz discrete channels are supported (plus an LFE channel, making it 7.1 surround) along with the HDMI interface. Theoretically, Dolby TrueHD can support more channels, but this number has been limited to 8 for HD DVD and Blu-ray Disc.
Dolby Pulse: released in 2009, it is identical to the HE-AAC v2 codec except for the addition of Dolby metadata, which is common for Dolby's other digital audio codecs. This metadata "ensures consistency of broadcast quality."
Dolby AC-4 is a lossy audio compression format that can contain audio channels and/or audio objects.
Dolby Atmos is a suite of technologies for immersive audio having both horizontal and vertical sound placement, using a combination of channel and object-based mixing and delivery. It was first introduced in cinemas with Brave (2012 film). The first game released with Dolby Atmos audio was Star Wars Battlefront (2015 video game). The means of delivering the channels and objects differ given the technical limitations across different media, and the target platform. Dolby Atmos is not a codec; on the consumer market, pre-recorded Dolby Atmos is delivered as an extension to a Dolby TrueHD, Dolby Digital Plus, or Dolby AC-4 stream.
Audio processing
Dolby Headphone: an implementation of virtual surround, simulating 5.1 surround sound in a standard pair of stereo headphones.
Dolby Virtual Speaker: simulates 5.1 surround sound in a setup of two standard stereo speakers.
Dolby Surround, Dolby Pro Logic, Dolby Pro Logic II, Dolby Pro Logic IIx, and Dolby Pro Logic IIz: these decoders expand sound to a greater number of channels. All can decode surround sound that has been matrixed into two channels; some can expand surround sound to a greater number of speakers than the original source material. See the referenced articles for more details on each decoder.
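As a simplified sketch of the 4:2:4 matrix encoding such decoders reverse: four channels (L, C, R, S) are folded into two (Lt/Rt), from which a passive decoder recovers center as the sum and surround as the difference. Real Dolby encoders apply ±90° phase shifts to the surround channel; the sign flip below is a stand-in for that, so treat this as an illustration rather than the actual Dolby matrix.

```python
# Simplified 4:2:4 matrix surround encode/decode (not the real Dolby matrix).
import math

def encode_lt_rt(l: float, r: float, c: float, s: float) -> tuple[float, float]:
    """Fold L/C/R/S down to a two-channel (Lt/Rt) matrix signal."""
    g = 1 / math.sqrt(2)        # -3 dB gain applied to center and surround
    lt = l + g * c - g * s      # surround folded in anti-phase on Lt
    rt = r + g * c + g * s      # surround folded in phase on Rt
    return lt, rt

def decode_passive(lt: float, rt: float):
    """Passive decode: sum recovers center, difference recovers surround."""
    c_hat = (lt + rt) / 2       # in-phase content steers to center
    s_hat = (rt - lt) / 2       # anti-phase content steers to surround
    return lt, rt, c_hat, s_hat  # L, R, C, S (unnormalized)
```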
Audistry: sound enhancement technologies.
Dolby Volume: reduces volume level changes.
Dolby Mobile: A version of Dolby's surround sound technology specifically designed for mobile phones, notably the HTC Desire HD, LG Arena and LG Renoir.
Dolby Audio Plug-in for Android: An API packaged as a Java Library that allows Android Developers to take advantage of Dolby Digital Plus Technology embedded into mobile and tablet devices, notably the Fire HD, Fire HDX, and Samsung Galaxy Tab 3 series.
Dolby Voice: Hardware and software products for enterprise-level web conferencing.
Video processing
Dolby Contrast provides enhanced image contrast to LCD screens with LED backlight units by means of local dimming.
Perceptual Quantizer (PQ), published by SMPTE as SMPTE ST 2084, is a transfer function that allows for the display of high dynamic range (HDR) video with a luminance level of up to 10,000 cd/m2 and can be used with the Rec. 2020 color space. On August 27, 2015, the Consumer Electronics Association announced the HDR10 Media Profile, which uses the Rec. 2020 color space, SMPTE ST 2084, and a bit depth of 10 bits. On August 2, 2016, Microsoft released the Windows 10 Anniversary Update, which supports the HDR10 format with the PQ (ST 2084) transfer function and the Rec. 2020 color space.
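A minimal sketch of the ST 2084 transfer-function pair described above. The constants are the published PQ values, but this snippet is offered as an illustration rather than a reference implementation.

```python
# SMPTE ST 2084 (PQ) forward and inverse transfer functions.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_inverse_eotf(nits: float) -> float:
    """Absolute luminance in cd/m^2 (0..10000) -> PQ signal in [0, 1]."""
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

def pq_eotf(signal: float) -> float:
    """PQ signal in [0, 1] -> absolute luminance in cd/m^2."""
    e = signal ** (1 / M2)
    return 10000.0 * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)

print(pq_inverse_eotf(100.0))  # SDR reference white ~= 0.508
print(pq_eotf(1.0))            # full-scale code value -> 10000 cd/m^2
```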
Dolby Vision is a content mastering and delivery format similar to the HDR10 media profile. It supports both high dynamic range (HDR) and wide color gamut (ITU-R Rec. 2020 and 2100) at all stages from content creation and production to transmission and playback. Dolby Vision includes the Perceptual Quantizer (SMPTE ST 2084) electro-optical transfer function and supports displays with up to 10,000-nit maximum brightness (4,000-nit in practice). It also provides up to 8K resolution and color depth of up to 12 bits (backwards compatible with current 8-bit and 10-bit displays). Dolby Vision can encode mastering display colorimetry information using static metadata (SMPTE ST 2086) and dynamic metadata (SMPTE ST 2094-10, Dolby format) for each scene or frame of a video. Examples of Ultra HD (UHD) Dolby Vision are available in TVs, monitors, mobile devices and theaters. Dolby Vision content can be delivered on Ultra HD Blu-ray discs, over conventional broadcasting, OTT, and online streaming media services. Dolby Vision metadata can be carried via HDMI interface versions 1.4b and above. It also supports the IPTPQc2 color space, which is similar to ICtCp. Dolby Vision IQ is an update designed to optimize Dolby Vision content according to the brightness of the room.
ICtCp provides an improved color representation that is designed for high dynamic range (HDR) and wide color gamut (WCG). An improved constant luminance is an advantage for color processing operations such as chroma subsampling and gamut mapping where only color information is changed. ICtCp is based on a modification of IPT called ICaCb.
Digital cinema
Dolby Digital Cinema
Dolby Surround 7.1, first introduced theatrically with Toy Story 3, in 2010.
Dolby 3D
Dolby Atmos
Dolby Cinema, a premium cinema concept developed by Dolby Laboratories as a direct competitor to IMAX.
Live sound
Dolby Lake Processor - as of 2009, all Lake products are owned by Lab Gruppen.
Over the years Dolby has introduced several surround sound systems. Their differences are explained below.
Dolby matrix surround systems
Dolby discrete surround systems
Controversy
ATSC
Dolby Digital AC-3 is used as the audio codec for the ATSC standards, though it was standardized as A/52 by the ATSC. It allows the transport of up to five channels of sound with a sixth channel for low-frequency effects (the so-called "5.1" configuration). In contrast, Japanese ISDB HDTV broadcasts use MPEG's Advanced Audio Coding (AAC) as the audio codec, which also allows 5.1 audio output. DVB allows both.
MPEG-2 audio was a contender for the ATSC standard during the "Grand Alliance" shootout, but lost out to Dolby AC-3. The Grand Alliance issued a statement finding the MPEG-2 system to be "essentially equivalent" to Dolby, but only after the Dolby selection had been made. Later, a story emerged that MIT had entered into an agreement with Dolby whereupon the university would be awarded a large sum of money if the MPEG-2 system was rejected. Dolby also offered an incentive for Zenith to switch their vote (which they did); however, it is unknown whether they accepted the offer.
See also
CX (analog noise reduction competitor)
dbx (analog noise reduction competitor)
High Com (analog noise reduction competitor)
DTS (digital soundspace competitor)
Meridian Lossless Packing (lossless coding for DVD-Audio)
SRS Labs (surround sound competitor)
Beats Audio (digital soundspace competitor)
Sony Dynamic Digital Sound (digital soundspace competitor)
Dolby Theatre
THX
References
External links
Audio codecs
Digital audio
Film sound production
High dynamic range
Electronics companies of the United States
Entertainment companies based in California
Technology companies based in the San Francisco Bay Area
Companies based in San Francisco
American companies established in 1965
Electronics companies established in 1965
Technology companies established in 1965
1965 establishments in England
Companies listed on the New York Stock Exchange
2005 initial public offerings
Companies in the S&P 400 | Dolby | Engineering | 3,874 |
18,069,384 | https://en.wikipedia.org/wiki/Regulation%20of%20nanotechnology | Because of the ongoing controversy over the implications of nanotechnology, there is significant debate concerning whether nanotechnology or nanotechnology-based products merit special government regulation. This debate mainly concerns when and how to assess new substances prior to their release into the market, community and environment.
Nanotechnology is found in an increasing number of commercially available products – from socks and trousers to tennis racquets and cleaning cloths. Such nanotechnologies and their accompanying industries have triggered calls for increased community participation and effective regulatory arrangements. However, these calls have not yet led to comprehensive regulation to oversee research and the commercial application of nanotechnologies, or to any comprehensive labeling for products that contain nanoparticles or are derived from nano-processes.
Regulatory bodies such as the United States Environmental Protection Agency and the Food and Drug Administration in the U.S. or the Health and Consumer Protection Directorate of the European Commission have started dealing with the potential risks posed by nanoparticles. So far, neither engineered nanoparticles nor the products and materials that contain them are subject to any special regulation regarding production, handling or labelling.
Managing risks: human and environmental health and safety
Studies of the health impact of airborne particles have generally shown that, for toxic materials, smaller particles are more toxic. This is due in part to the fact that, for the same mass per volume, the dose in terms of particle numbers increases as particle size decreases.
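A back-of-the-envelope sketch of that scaling, assuming spherical particles of a single density: for a fixed delivered mass, particle count grows as 1/d³ and total surface area as 1/d. The 1 mg mass and TiO2-like density below are illustrative assumptions.

```python
# Particle count and total surface area for a fixed mass of spherical particles.
import math

def particles_and_area(mass_kg: float, density_kg_m3: float, d_m: float):
    v_one = math.pi / 6 * d_m**3            # volume of one sphere
    n = mass_kg / (density_kg_m3 * v_one)   # number of particles in the dose
    return n, n * math.pi * d_m**2          # count, total surface area (m^2)

for d_nm in (10_000, 100, 10):              # 10 um vs 100 nm vs 10 nm
    n, area = particles_and_area(1e-6, 4230.0, d_nm * 1e-9)  # 1 mg, TiO2-like
    print(f"d = {d_nm:>6} nm: {n:.2e} particles, {area:.2e} m^2 of surface")
```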
Based upon available data, it has been argued that current risk assessment methodologies are not suited to the hazards associated with nanoparticles; in particular, existing toxicological and eco-toxicological methods are not up to the task; exposure evaluation (dose) needs to be expressed as quantity of nanoparticles and/or surface area rather than simply mass; equipment for routine detecting and measuring nanoparticles in air, water, or soil is inadequate; and very little is known about the physiological responses to nanoparticles.
Regulatory bodies in the U.S. as well as in the EU have concluded that nanoparticles form the potential for an entirely new risk and that it is necessary to carry out an extensive analysis of the risk. The challenge for regulators is whether a matrix can be developed which would identify nanoparticles and more complex nanoformulations which are likely to have special toxicological properties or whether it is more reasonable for each particle or formulation to be tested separately.
The International Council on Nanotechnology maintains a database and Virtual Journal of scientific papers on environmental, health and safety research on nanoparticles. The database currently has over 2000 entries indexed by particle type, exposure pathway and other criteria. The Project on Emerging Nanotechnologies (PEN) currently lists 807 products that manufacturers have voluntarily identified as using nanotechnology. No labeling is required by the FDA, so the true number could be significantly higher. "The use of nanotechnology in consumer products and industrial applications is growing rapidly, with the products listed in the PEN inventory showing just the tip of the iceberg," according to PEN Project Director David Rejeski. A list of the products that have been voluntarily disclosed by their manufacturers is available in the PEN inventory.
The Material Safety Data Sheet that must be issued for certain materials often does not differentiate between bulk and nanoscale size of the material in question and even when it does these MSDS are advisory only.
Democratic governance
Many argue that government has a responsibility to provide opportunities for the public to be involved in the development of new forms of science and technology. Community engagement can be achieved through various means or mechanisms. An online journal article identifies traditional approaches such as referendums, consultation documents, and advisory committees that include community members and other stakeholders. Other conventional approaches include public meetings and "closed" dialog with stakeholders. More contemporary engagement processes that have been employed to include community members in decisions about nanotechnology include citizens' juries and consensus conferences. Leach and Scoones (2006, p. 45) argue that since "most debates about science and technology options involve uncertainty, and often ignorance," public debate about regulatory regimes is essential.
It has been argued that limited nanotechnology labeling and regulation may exacerbate potential human and environmental health and safety issues associated with nanotechnology, and that the development of comprehensive regulation of nanotechnology will be vital to ensure that the potential risks associated with the research and commercial application of nanotechnology do not overshadow its potential benefits. Regulation may also be required to meet community expectations about responsible development of nanotechnology, as well as ensuring that public interests are included in shaping the development of nanotechnology.
Community education, engagement and consultation tend to occur "downstream": once there is at least a moderate level of awareness, and often during the process of disseminating and adapting technologies. "Upstream" engagement, by contrast, occurs much earlier in the innovation cycle and involves: "dialogue and debate about future technology options and pathways, bringing the often expert-led approaches to horizon scanning, technology foresight and scenario planning to involve a wider range of perspectives and inputs." Daniel Sarewitz, director of Arizona State University's Consortium on Science, Policy and Outcomes, argues that "by the time new devices reach the stage of commercialization and regulation, it is usually too late to alter them to correct problems." However, Xenos, et al. argue that upstream engagement can be utilized in this area through anticipated discussion with peers. Upstream engagement in this sense is meant to "create the best possible conditions for sound policy making and public judgments based on careful assessment of objective information". Discussion may act as a catalyst for upstream engagement by prompting accountability for individuals to seek and process additional information ("anticipatory elaboration"). However, though anticipated discussion did lead participants to seek further information, Xenos et al. found that factual information was not what was primarily sought out; instead, individuals sought out opinion pieces and editorials.
The stance that the research, development and use of nanotechnology should be subject to control by the public sector is sometimes referred to as nanosocialism.
Newness
The question of whether nanotechnology represents something 'new' must be answered to decide how best nanotechnology should be regulated. The Royal Society recommended that the UK government assess chemicals in the form of nanoparticles or nanotubes as new substances. Subsequent to this, in 2007 a coalition of over forty groups called for nanomaterials to be classified as new substances, and regulated as such.
Despite these recommendations, chemicals comprising nanoparticles that have previously been subject to assessment and regulation may be exempt from regulation, regardless of the potential for different risks and impacts. In contrast, nanomaterials are often recognized as 'new' from the perspective of intellectual property rights (IPRs), and as such are commercially protected via patenting laws.
There is significant debate about who is responsible for the regulation of nanotechnology. While some non-nanotechnology specific regulatory agencies currently cover some products and processes (to varying degrees) – by "bolting on" nanotechnology to existing regulations – there are clear gaps in these regimes, which allow some nanotechnology applications to figuratively "slip through the cracks" without being covered by any regulations. An example has occurred in the US with nanoparticles of titanium dioxide (TiO2) used in sunscreen, where they create a clearer cosmetic appearance. The US Food and Drug Administration (FDA) reviewed the immediate health effects of exposure to nanoparticles of TiO2 for consumers, but neither the FDA, the EPA, nor any other agency reviewed the impacts on aquatic ecosystems when the sunscreen rubs off. Similarly, the Australian equivalent of the FDA, the Therapeutic Goods Administration (TGA), approved the use of nanoparticles in sunscreens (without requiring package labelling) after a thorough review of the literature, on the basis that although nanoparticles of TiO2 and zinc oxide (ZnO) in sunscreens do produce free radicals and oxidative DNA damage in vitro, such particles were unlikely to pass the dead outer cells of the stratum corneum of human skin. Some academics have argued that this finding failed to apply the precautionary principle in relation to prolonged use on children with cut skin, the elderly with thin skin, people with diseased skin, or use over flexural creases. Doubts over the TGA's decision grew with the publication of a paper showing that the uncoated anatase form of TiO2 used in some Australian sunscreens caused a photocatalytic reaction that degraded the surface of newly installed prepainted steel roofs where they came into contact with the sunscreen-coated hands of workmen. Such gaps in regulation are likely to continue alongside the development and commercialization of increasingly complex second- and third-generation nanotechnologies.
Nanomedicines are just beginning to enter drug regulatory processes, but within a few decades could comprise a dominant group within the class of innovative pharmaceuticals, the current thinking of government safety and cost-effectiveness regulators appearing to be that these products give rise to few if any nano-specific issues. Some academics (such as Thomas Alured Faunce) have challenged that proposition and suggest that nanomedicines may create unique or heightened policy challenges for government systems of cost-effectiveness as well as safety regulation. There are also significant public good aspects to the regulation of nanotechnology, particularly with regard to ensuring that industry involvement in standard-setting does not become a means of reducing competition and that nanotechnology policy and regulation encourages new models of safe drug discovery and development more systematically targeted at the global burden of disease.
Self-regulation attempts may well fail, due to the inherent conflict of interest in asking any organization to police itself. If the public becomes aware of this failure, an external, independent organization is often given the duty of policing them, sometimes with highly punitive measures taken against the organization.
The Food and Drug Administration notes that it only regulates on the basis of voluntary claims made by the product manufacturer. If no claims are made by a manufacturer, then the FDA may be unaware of nanotechnology being employed.
Yet regulations worldwide still fail to distinguish between materials in their nanoscale and bulk form. This means that nanomaterials remain effectively unregulated; there is no regulatory requirement for nanomaterials to face new health and safety testing or environmental impact assessment prior to their use in commercial products, if these materials have already been approved in bulk form. The health risks of nanomaterials are of particular concern for workers who may face occupational exposure to nanomaterials at higher levels, and on a more routine basis, than the general public.
International law
There is no international regulation of nanoproducts or the underlying nanotechnology. Nor are there any internationally agreed definitions or terminology for nanotechnology, no internationally agreed protocols for toxicity testing of nanoparticles, and no standardized protocols for evaluating the environmental impacts of nanoparticles. Moreover, nanomaterials do not fall within the scope of existing international treaties regulating toxic chemicals.
Since products that are produced using nanotechnologies will likely enter international trade, it is argued that it will be necessary to harmonize nanotechnology standards across national borders. There is concern that some countries, most notably developing countries, will be excluded from international standards negotiations. The Institute for Food and Agricultural Standards notes that "developing countries should have a say in international nanotechnology standards development, even if they lack capacity to enforce the standards" (p. 14).
Concerns about monopolies and concentrated control and ownership of new nanotechnologies were raised in community workshops in Australia in 2004.
Arguments against regulation
Wide use of the term nanotechnology in recent years has created the impression that regulatory frameworks are suddenly having to contend with entirely new challenges that they are unequipped to deal with. Many regulatory systems around the world already assess new substances or products for safety on a case by case basis, before they are permitted on the market. These regulatory systems have been assessing the safety of nanometre scale molecular arrangements for many years and many substances comprising nanometre scale particles have been in use for decades e.g. Carbon black, Titanium dioxide, Zinc oxide, Bentonite, Aluminum silicate, Iron oxides, Silicon dioxide, Diatomaceous earth, Kaolin, Talc, Montmorillonite, Magnesium oxide, Copper sulphate.
These existing approval frameworks almost universally use the best available science to assess safety and do not approve substances or products with an unacceptable risk benefit profile. One proposal is to simply treat particle size as one of the several parameters defining a substance to be approved, rather than creating special rules for all particles of a given size regardless of type. A major argument against special regulation of nanotechnology is that the projected applications with the greatest impact are far in the future, and it is unclear how to regulate technologies whose feasibility is speculative at this point. In the meantime, it has been argued that the immediate applications of nanomaterials raise challenges not much different from those of introducing any other new material, and can be dealt with by minor tweaks to existing regulatory schemes rather than sweeping regulation of entire scientific fields.
A truly precautionary approach to regulation could severely impede development in the field of nanotechnology if safety studies were required for each and every nanoscience application. While the outcome of such studies can form the basis for government and international regulations, a more reasonable approach might be the development of a risk matrix that identifies the likely culprits.
Response from governments
United Kingdom
In its seminal 2004 report Nanoscience and Nanotechnologies: Opportunities and Uncertainties, the United Kingdom's Royal Society concluded that:
Many nanotechnologies pose no new risks to health and almost all the concerns relate to the potential impacts of deliberately manufactured nanoparticles and nanotubes that are free rather than fixed to or within a material... We expect the likelihood of nanoparticles or nanotubes being released from products in which they have been fixed or embedded (such as composites) to be low but have recommended that manufacturers assess this potential exposure risk for the lifecycle of the product and make their findings available to the relevant regulatory bodies... It is very unlikely that new manufactured nanoparticles could be introduced into humans in doses sufficient to cause the health effects that have been associated with [normal air pollution].
The report nevertheless recommended that nanomaterials be regulated as new chemicals, that research laboratories and factories treat nanomaterials "as if they were hazardous", that release of nanomaterials into the environment be avoided as far as possible, and that products containing nanomaterials be subject to new safety testing requirements prior to their commercial release.
The 2004 report by the UK Royal Society and Royal Academy of Engineering noted that existing UK regulations did not require additional testing when existing substances were produced in nanoparticulate form. The Royal Society recommended that such regulations be revised so that "chemicals produced in the form of nanoparticles and nanotubes be treated as new chemicals under these regulatory frameworks" (p. xi). They also recommended that existing regulation be modified on a precautionary basis because they expect that "the toxicity of chemicals in the form of free nanoparticles and nanotubes cannot be predicted from their toxicity in a larger form and... in some cases they will be more toxic than the same mass of the same chemical in larger form."
The Better Regulation Commission's earlier 2003 report had recommended that the UK Government:
enable, through an informed debate, the public to consider the risks for themselves, and help them to make their own decisions by providing suitable information;
be open about how it makes decisions, and acknowledge where there are uncertainties;
communicate with, and involve as far as possible, the public in the decision making process;
ensure it develops two-way communication channels; and
take a strong lead over the handling of any risk issues, particularly information provision and policy implementation.
These recommendations were accepted in principle by the UK Government, although it noted that there was "no obvious focus for an informed public debate of the type suggested by the Task Force".
The Royal Society's 2004 report identified two distinct governance issues:
the “role and behaviour of institutions” and their ability to “minimise unintended consequences” through adequate regulation and
the extent to which the public can trust and play a role in determining the trajectories that nanotechnologies may follow as they develop.
United States
Rather than adopt a new nano-specific regulatory framework, the United States' Food and Drug Administration (FDA) convenes an 'interest group' each quarter with representatives of FDA centers that have responsibility for assessment and regulation of different substances and products. This interest group ensures coordination and communication. A September 2009 FDA document called for identifying sources of nanomaterials, how they move in the environment, the problems they might cause for people, animals and plants, and how these problems could be avoided or mitigated.
The Bush administration in 2007 decided that no special regulations or labeling of nanoparticles were required. Critics derided this as treating consumers like a "guinea pig" without sufficient notice due to lack of labelling.
Berkeley, CA is currently the only city in the United States to regulate nanotechnology. Cambridge, MA considered enacting a similar law in 2008, but the committee it instituted to study the issue recommended against regulation in its final report, recommending instead other steps to facilitate information-gathering about the potential effects of nanomaterials.
On December 10, 2008 the U.S. National Research Council released a report calling for more regulation of nanotechnology.
California
Assembly Bill (AB) 289 (2006) authorizes the Department of Toxic Substances Control (DTSC) within the California Environmental Protection Agency and other agencies to request information on environmental and health impacts from chemical manufacturers and importers, including testing techniques.
In October 2008, the Department of Toxic Substances Control (DTSC), within the California Environmental Protection Agency, announced its intent to request information regarding analytical test methods, fate and transport in the environment, and other relevant information from manufacturers of carbon nanotubes. DTSC is exercising its authority under the California Health and Safety Code, Chapter 699, sections 57018-57020. These sections were added as a result of the adoption of Assembly Bill AB 289 (2006). They are intended to make information on the fate and transport, detection and analysis, and other information on chemicals more available. The law places the responsibility to provide this information to the Department on those who manufacture or import the chemicals.
On January 22, 2009, a formal information request letter was sent to manufacturers who produce or import carbon nanotubes in California, or who may export carbon nanotubes into the State. This letter constitutes the first formal implementation of the authorities placed into statute by AB 289 and is directed to manufacturers of carbon nanotubes, both industry and academia within the State, and to manufacturers outside California who export carbon nanotubes to California. This request for information must be met by the manufacturers within one year. DTSC is waiting for the upcoming January 22, 2010 deadline for responses to the data call-in.
The California Nano Industry Network and DTSC hosted a full-day symposium on November 16, 2009 in Sacramento, CA. This symposium provided an opportunity to hear from nanotechnology industry experts and discuss future regulatory considerations in California.
On December 21, 2010, the Department of Toxic Substances Control (DTSC) initiated the second Chemical Information Call-in for six nanomaterials: nano cerium oxide, nano silver, nano titanium dioxide, nano zero valent iron, nano zinc oxide, and quantum dots. DTSC sent a formal information request letter to forty manufacturers who produce or import the six nanomaterials in California, or who may export them into the State. The Chemical Information Call-in is meant to identify information gaps of these six nanomaterials and to develop further knowledge of their analytical test methods, fate and transport in the environment, and other relevant information under California Health and Safety Code, Chapter 699, sections 57018-57020. DTSC completed the carbon nanotube information call-in in June 2010.
DTSC partners with University of California, Los Angeles (UCLA), Santa Barbara (UCSB), and Riverside (UCR), University of Southern California (USC), Stanford University, Center for Environmental Implications of Nanotechnology (CEIN), and The National Institute for Occupational Safety and Health (NIOSH) on safe nanomaterial handling practices.
DTSC is interested in expanding the Chemical Information Call-in to members of the brominated flame retardants, members of the methyl siloxanes, ocean plastics, nano-clay, and other emerging chemicals.
European Union
The European Union has formed a group to study the implications of nanotechnology, called the Scientific Committee on Emerging and Newly Identified Health Risks, which has published a list of risks associated with nanoparticles.
To comply with the EU's REACH regulation, manufacturers and importers of carbon products, including carbon nanotubes, will consequently have to submit full health and safety data within a year or so.
A number of European member states have called for the creation of either national or European nanomaterials registries. France, Belgium, Sweden, and Denmark have established national registries of nanomaterials. In addition, the European Commission requested the European Chemicals Agency (ECHA) to create a European Union Observatory for Nanomaterials (EUON), which aims at collecting publicly available information on the safety and markets of nanomaterials and nanotechnology.
Response from advocacy groups
In January 2008, a coalition of over 40 civil society groups endorsed a statement of principles calling for precautionary action related to nanotechnology. The coalition called for strong, comprehensive oversight of the new technology and its products in the International Center for Technology Assessment's report Principles for the Oversight of Nanotechnologies and Nano materials, which states:
Hundreds of consumer products incorporating nano-materials are now on the market, including cosmetics, sunscreens, sporting goods, clothing, electronics, baby and infant products, and food and food packaging. But evidence indicates that current nano-materials may pose significant health, safety, and environmental hazards. In addition, the profound social, economic, and ethical challenges posed by nano-scale technologies have yet to be addressed ... 'Since there is currently no government oversight and no labeling requirements for nano-products anywhere in the world, no one knows when they are exposed to potential nano-tech risks and no one is monitoring for potential health or environmental harm. That's why we believe oversight action based on our principles is urgent' ... This industrial boom is creating a growing nano-workforce, which is predicted to reach two million globally by 2015. 'Even though potential health hazards stemming from exposure have been clearly identified, there are no mandatory workplace measures that require exposures to be assessed, workers to be trained, or control measures to be implemented,' explained Bill Kojola of the AFL-CIO. 'This technology should not be rushed to market until these failings are corrected and workers assured of their safety.'
The group has urged action based on eight principles. They are (1) A Precautionary Foundation (2) Mandatory Nano-specific Regulations (3) Health and Safety of the Public and Workers (4) Environmental Protection (5) Transparency (6) Public Participation (7) Inclusion of Broader Impacts and (8) Manufacturer Liability.
Some NGOs, including Friends of the Earth, are calling for the formation of a separate, nanotechnology-specific regulatory framework for the regulation of nanotechnology. In Australia, Friends of the Earth propose the establishment of a Nanotechnology Regulatory Coordination Agency, overseen by a Foresight and Technology Assessment Board. The advantage of this arrangement is that it could ensure a centralized body of experts able to provide oversight across the range of nano-products and sectors. It is also argued that a centralized regulatory approach would simplify the regulatory environment, thereby supporting industry innovation. A National Nanotechnology Regulator could coordinate existing regulations related to nanotechnology (including intellectual property, civil liberties, product safety, occupational health and safety, environmental and international law). Regulatory mechanisms could vary from "hard law at one extreme through licensing and codes of practice to 'soft' self-regulation and negotiation in order to influence behavior."
The formation of national nanotechnology regulatory bodies may also assist in establishing global regulatory frameworks.
In early 2008, The UK's largest organic certifier, the Soil Association, announced that its organic standard would exclude nanotechnology, recognizing the associated human and environmental health and safety risks. Certified organic standards in Australia exclude engineered nanoparticles. It appears likely that other organic certifiers will also follow suit. The Soil Association was also the first to declare organic standards free from genetic engineering.
Technical aspects
Size
Regulation of nanotechnology will require a definition of the size at which particles and processes are recognized as operating at the nano-scale. The size-defining characteristic of nanotechnology is the subject of significant debate, with proposed upper bounds ranging from 100 to 300 nanometers (nm). Friends of the Earth Australia recommend defining nano-particles as those up to 300 nm in size. They argue that "particles up to a few hundred nanometers in size share many of the novel biological behaviors of nano-particles, including novel toxicity risks", and that "nano-materials up to approximately 300 nm in size can be taken up by individual cells". The UK Soil Association defines nanotechnology to include manufactured nano-particles where the mean particle size is 200 nm or smaller. The U.S. National Nanotechnology Initiative defines nanotechnology as "the understanding and control of matter at dimensions of roughly 1 to 100 nm".
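To make the spread of definitions concrete, the sketch below checks a particle's classification against the three cut-offs named above. This is a hypothetical illustration: the dictionary keys and the function classify_particle are invented for this example, not any regulator's actual terminology.

```python
# Hypothetical sketch comparing the size cut-offs cited in the text:
# NNI (100 nm), UK Soil Association (200 nm), FoE Australia (300 nm).

REGULATORY_CUTOFFS_NM = {
    "US National Nanotechnology Initiative": 100,
    "UK Soil Association": 200,
    "Friends of the Earth Australia": 300,
}

def classify_particle(mean_size_nm: float) -> dict:
    """Return, per definition, whether a particle of the given mean
    size would count as 'nano' under that body's threshold."""
    return {body: mean_size_nm <= cutoff
            for body, cutoff in REGULATORY_CUTOFFS_NM.items()}

# A 150 nm particle is 'nano' under two of the three definitions.
print(classify_particle(150.0))
```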
Mass thresholds
Regulatory frameworks for chemicals tend to be triggered by mass thresholds. This is certainly the case for the management of toxic chemicals in Australia through the National Pollutant Inventory. However, in the case of nanotechnology, nano-particle applications are unlikely to exceed these thresholds (tonnes/kilograms) due to the size and weight of nano-particles. As such, the Woodrow Wilson International Center for Scholars questions the usefulness of regulating nanotechnologies on the basis of their size/weight alone. They argue, for example, that the toxicity of nano-particles is more closely related to surface area than to weight, and that emerging regulations should also take account of such factors.
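The surface-area point can be made concrete with a short calculation: for spherical particles of density ρ, surface area per unit mass is 3/(rρ), so it rises steeply as particles shrink while the regulated mass stays fixed. A minimal sketch in Python, assuming a round-figure density of 4.2 g/cm³ for TiO2 (an illustrative value, not a regulatory figure):

```python
# Specific surface area of spherical particles: 4*pi*r^2 divided by
# (4/3)*pi*r^3*rho equals 3/(r*rho). Halving the radius doubles the
# surface area per gram, which is the quantity the toxicity argument
# says regulation should track.
def specific_surface_area_m2_per_g(diameter_nm: float, density_g_cm3: float) -> float:
    r_cm = diameter_nm * 1e-7 / 2            # nm -> cm
    ssa_cm2_per_g = 3.0 / (r_cm * density_g_cm3)
    return ssa_cm2_per_g * 1e-4              # cm^2/g -> m^2/g

for d in (10_000, 1_000, 100, 10):           # 10 um down to 10 nm
    print(f"{d:>6} nm: {specific_surface_area_m2_per_g(d, 4.2):8.1f} m^2/g")
```

At a fixed mass, the 10 nm particles in this sketch present roughly a thousand times the surface area of the 10 µm ones, which is the Center's point about size/weight triggers.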
References
Nanotechnology and the environment
Nanotechnology
Science and law | Regulation of nanotechnology | Materials_science | 5,491 |
2,472,233 | https://en.wikipedia.org/wiki/Thermal%20adhesive | Thermal adhesive is a type of thermally conductive glue used for electronic components and heat sinks. It can be available as a paste (similar to thermal paste) or as a double-sided tape.
It is commonly used to bond integrated circuits to heatsinks where there are no other mounting mechanisms available.
The glue is typically a two-part epoxy resin (usually for paste products) or cyanoacrylate (for tapes). The thermally conductive filler varies, and can include metals, metal oxides, silica or ceramic microspheres. The latter are found in products that have much higher dielectric strength, although this comes at the cost of lower thermal conductivity.
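As a rough sketch of why the filler trade-off matters in practice: one-dimensional Fourier conduction gives the thermal resistance of a bond line of thickness t, conductivity k and area A as R = t/(kA), so halving k doubles the temperature rise across the joint. The conductivity values below are assumed ballpark figures for illustration, not data for any specific product:

```python
# Thermal resistance of an adhesive bond line, R = t / (k * A), and the
# resulting temperature rise across the joint for a given heat load.
def bond_line_resistance_k_per_w(thickness_m: float, k_w_mk: float, area_m2: float) -> float:
    return thickness_m / (k_w_mk * area_m2)

area = 0.01 * 0.01        # 10 mm x 10 mm component footprint, in m^2
thickness = 100e-6        # 100 micron bond line

for label, k in [("metal-filled epoxy (assumed k = 3 W/m*K)", 3.0),
                 ("ceramic-microsphere filled (assumed k = 1 W/m*K)", 1.0)]:
    r = bond_line_resistance_k_per_w(thickness, k, area)
    print(f"{label}: R = {r:.2f} K/W, dT at 10 W = {10 * r:.1f} K")
```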
Heatsinks marketed to end-user modders may be supplied with thermal adhesive pre-attached (usually as a piece of tape). For products sold through electronic-components distributors this is rarely the case; the adhesives are sold separately to professionals.
See also
Computer cooling
Hot-melt adhesive
Phase-change material
Thermally conductive pad
Thermal paste
List of thermal conductivities
References
Further reading
Adhesives | Thermal adhesive | Physics | 222 |
3,828,419 | https://en.wikipedia.org/wiki/Donsker%27s%20theorem | In probability theory, Donsker's theorem (also known as Donsker's invariance principle, or the functional central limit theorem), named after Monroe D. Donsker, is a functional extension of the central limit theorem for empirical distribution functions. Specifically, the theorem states that an appropriately centered and scaled version of the empirical distribution function converges to a Gaussian process.
Let X1, X2, X3, ... be a sequence of independent and identically distributed (i.i.d.) random variables with mean 0 and variance 1. Let Sn := X1 + X2 + ... + Xn. The stochastic process (Sn), n ∈ N, is known as a random walk. Define the diffusively rescaled random walk (partial-sum process) by

Wn(t) := S⌊nt⌋ / √n,  t ∈ [0, 1].

The central limit theorem asserts that Wn(1) converges in distribution to a standard Gaussian random variable W(1) as n → ∞. Donsker's invariance principle extends this convergence to the whole function Wn := (Wn(t), t ∈ [0, 1]). More precisely, in its modern form, Donsker's invariance principle states that, as random variables taking values in the Skorokhod space D[0, 1], the random functions Wn converge in distribution to a standard Brownian motion W := (W(t), t ∈ [0, 1]) as n → ∞.
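A minimal simulation sketch of the statement, assuming Python with numpy; the ±1 coin-flip steps are just one choice satisfying the mean-0, variance-1 hypothesis:

```python
# Build the rescaled partial-sum path Wn(t) = S_floor(nt) / sqrt(n) on
# the grid t = 0, 1/n, ..., 1; for large n its law approximates that of
# a standard Brownian motion on [0, 1].
import numpy as np

rng = np.random.default_rng(0)

def rescaled_walk(n: int) -> np.ndarray:
    steps = rng.choice([-1.0, 1.0], size=n)          # mean 0, variance 1
    s = np.concatenate([[0.0], np.cumsum(steps)])    # S_0, S_1, ..., S_n
    return s / np.sqrt(n)

w = rescaled_walk(10_000)
print(w[-1])   # Wn(1): approximately standard normal, as the CLT asserts
```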
Formal statement
Let Fn be the empirical distribution function of the sequence X1, X2, ... of i.i.d. random variables with distribution function F. Define the centered and scaled version of Fn by

Gn(x) := √n (Fn(x) − F(x)),

indexed by x ∈ R. By the classical central limit theorem, for fixed x, the random variable Gn(x) converges in distribution to a Gaussian (normal) random variable G(x) with zero mean and variance F(x)(1 − F(x)) as the sample size n grows.
Theorem (Donsker, Skorokhod, Kolmogorov). The sequence Gn(x), as random elements of the Skorokhod space D(−∞, ∞), converges in distribution to a Gaussian process G with zero mean and covariance given by

cov[G(s), G(t)] = E[G(s) G(t)] = min{F(s), F(t)} − F(s) F(t).
The process G(x) can be written as B(F(x)) where B is a standard Brownian bridge on the unit interval.
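The theorem can be checked numerically in the uniform case, where F(x) = x and the limiting variance at each x should be the Brownian-bridge variance x(1 − x). A sketch, with numpy assumed:

```python
# Empirical process for Uniform[0,1] samples: Gn(x) = sqrt(n)(Fn(x) - x).
# Across many replications its variance at a fixed x should approach
# x * (1 - x), the variance of a Brownian bridge at time x.
import numpy as np

rng = np.random.default_rng(1)
n, reps, x = 1_000, 5_000, 0.3

samples = rng.uniform(size=(reps, n))
fn_x = (samples <= x).mean(axis=1)     # Fn(x) in each replication
gn_x = np.sqrt(n) * (fn_x - x)         # Gn(x)

print(gn_x.var())                      # close to x * (1 - x) = 0.21
```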
Proof sketch
For continuous probability distributions, the theorem reduces to the case where the distribution is uniform on [0, 1] by the inverse transform.
Given any finite sequence of times 0 < t1 < t2 < ... < tk < 1, we have that nFn(t1) is distributed as a binomial distribution with mean nt1 and variance nt1(1 − t1).
Similarly, the joint distribution of Fn(t1), Fn(t2), ..., Fn(tk) is a multinomial distribution. Now, the central limit approximation for multinomial distributions shows that the vector (√n(Fn(ti) − ti)), i = 1, ..., k, converges in distribution to a Gaussian vector with covariance matrix with entries min(ti, tj) − ti tj, which is precisely the covariance matrix for the Brownian bridge.
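A Monte Carlo check of this covariance structure for uniform samples (numpy assumed; the sample sizes are arbitrary illustration values):

```python
# For Uniform[0,1] data, Cov(Gn(s), Gn(t)) should approach
# min(s, t) - s*t, the Brownian-bridge covariance.
import numpy as np

rng = np.random.default_rng(2)
n, reps, s, t = 1_000, 20_000, 0.25, 0.6

u = rng.uniform(size=(reps, n))
gs = np.sqrt(n) * ((u <= s).mean(axis=1) - s)
gt = np.sqrt(n) * ((u <= t).mean(axis=1) - t)

print(np.cov(gs, gt)[0, 1])   # close to min(s, t) - s*t = 0.10
```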
History and related results
Kolmogorov (1933) showed that when F is continuous, the supremum supt Gn(t) and the supremum of absolute value supt |Gn(t)| converge in distribution to the laws of the same functionals of the Brownian bridge B(t); see the Kolmogorov–Smirnov test. In 1949 Doob asked whether the convergence in distribution held for more general functionals, thus formulating a problem of weak convergence of random functions in a suitable function space.
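In practice this is what the Kolmogorov–Smirnov test computes: the statistic is supx |Fn(x) − F(x)|, and √n times it converges to sup |B(t)|, whose law (the Kolmogorov distribution) supplies the asymptotic p-values. A usage sketch with scipy's standard kstest:

```python
# Kolmogorov-Smirnov test of 500 draws against the standard normal cdf.
# The reported statistic is sup_x |Fn(x) - F(x)|; its null distribution
# comes from the Brownian-bridge supremum described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(size=500)

stat, pvalue = stats.kstest(x, "norm")
print(stat, pvalue)   # small statistic, large p-value: consistent with H0
```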
In 1952 Donsker stated and proved (not quite correctly) a general extension for the Doob–Kolmogorov heuristic approach. In the original paper, Donsker proved that the convergence in law of Gn to the Brownian bridge holds for Uniform[0,1] distributions with respect to uniform convergence in t over the interval [0,1].
However, Donsker's formulation was not quite correct because of the problem of measurability of the functionals of discontinuous processes. In 1956 Skorokhod and Kolmogorov defined a separable metric d, called the Skorokhod metric, on the space of càdlàg functions on [0, 1], such that convergence for d to a continuous function is equivalent to convergence for the sup norm, and showed that Gn converges in law in D[0, 1] to the Brownian bridge.
Later Dudley reformulated Donsker's result to avoid the problem of measurability and the need for the Skorokhod metric. One can prove that there exist Xi, i.i.d. uniform in [0, 1], and a sequence of sample-continuous Brownian bridges Bn, such that

‖Gn − Bn‖∞ = supx |Gn(x) − Bn(x)|

is measurable and converges in probability to 0. An improved version of this result, providing more detail on the rate of convergence, is the Komlós–Major–Tusnády approximation.
See also
Glivenko–Cantelli theorem
Kolmogorov–Smirnov test
References
Probability theorems
Theorems in statistics
Empirical process | Donsker's theorem | Mathematics | 935 |
11,548,156 | https://en.wikipedia.org/wiki/Thyrostroma%20compactum | Thyrostroma compactum is a plant pathogen in the family Botryosphaeriaceae.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Botryosphaeriaceae
Fungi described in 1876
Fungus species | Thyrostroma compactum | Biology | 52 |