Cognitive philology
https://en.wikipedia.org/wiki/Cognitive%20philology

Cognitive philology is the science that studies written and oral texts as the products of human mental processes. Studies in cognitive philology compare documentary evidence emerging from textual investigations with the results of experimental research, especially in the fields of cognitive and ecological psychology, the neurosciences, and artificial intelligence. "The point is not the text, but the mind that made it". Cognitive philology aims to foster communication between literary, textual, and philological disciplines on the one hand and research across the whole range of the cognitive, evolutionary, ecological, and human sciences on the other.
Cognitive philology:
investigates the transmission of oral and written texts, and the categorization processes that lead to the classification of knowledge, mostly relying on information theory;
studies how narratives emerge in so-called natural conversation, and the selective processes that lead to the rise of literary standards for storytelling, mostly relying on embodied semantics;
explores the evolutionary role played by rhythm and metre in human ontogenetic and phylogenetic development, and the pertinence of semantic association during the processing of cognitive maps;
provides the scientific ground for multimedia critical editions of literary texts.
Among the founding thinkers and noteworthy scholars devoted to such investigations are:
Alan Richardson: Studies Theory of Mind in early-modern and contemporary literature.
Anatole Pierre Fuksas
Benoît de Cornulier
David Herman: Professor of English at North Carolina State University and an adjunct professor of linguistics at Duke University. He is the author of "Universal Grammar and Narrative Form" and the editor of "Narratologies: New Perspectives on Narrative Analysis".
Domenico Fiormonte
François Recanati
Gilles Fauconnier, a professor of cognitive science at the University of California, San Diego. He was one of the founders of cognitive linguistics in the 1970s through his work on pragmatic scales and mental spaces. His research explores conceptual integration and the compression of conceptual mappings in terms of emergent structure in language.
Julián Santano Moreno
Luca Nobile
Manfred Jahn in Germany
Mark Turner
Paolo Canettieri
See also
Artificial intelligence
Cognitive archaeology
Cognitive linguistics
Cognitive poetics
Cognitive psychology
Cognitive rhetoric
Information theory
Philology
External links
Rivista di Filologia Cognitiva
CogLit: Literature and Cognitive Linguistics
Cognitive Philology
Institute for Psychological Study of the Arts

Monitoring of geological carbon dioxide storage
https://en.wikipedia.org/wiki/Monitoring%20of%20geological%20carbon%20dioxide%20storage

Carbon dioxide (CO2) from carbon capture and storage and direct air capture operations is often injected into deep geologic formations. These storage sites can be monitored for CO2 leakage. Monitoring can be done at both the surface and subsurface levels. The dominant monitoring technique is seismic imaging, in which generated vibrations propagate through the subsurface and the geologic structure is imaged from the refracted and reflected waves.
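As a minimal illustration of the reflection principle behind seismic imaging, the sketch below estimates the depth of a reflecting layer from the two-way travel time of a reflected wave, assuming a single layer of constant velocity; the velocity and travel-time values are illustrative assumptions, not data from any survey.

```python
# Minimal sketch: depth of a seismic reflector from two-way travel time,
# assuming a single layer with constant seismic velocity (illustrative values).

def reflector_depth(two_way_time_s: float, velocity_m_s: float) -> float:
    """Depth (m) of a horizontal reflector: the wave travels down and back,
    so the one-way distance is velocity * time / 2."""
    return velocity_m_s * two_way_time_s / 2.0

# Example: a reflection arriving 0.8 s after the shot, in rock with
# v ~ 2500 m/s, places the interface at roughly 1000 m depth.
print(reflector_depth(0.8, 2500.0))  # -> 1000.0
```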
Subsurface
Subsurface monitoring can directly and/or indirectly track the reservoir's status. One direct method involves drilling deep enough to collect a sample. This drilling can be expensive due to the rock's physical properties. It also provides data only at a specific location.
One indirect method sends sound or electromagnetic waves into the reservoir, which reflect back for interpretation. This approach provides data over a much larger region, although with less precision.
Both direct and indirect monitoring can be done intermittently or continuously.
Seismic
Seismic monitoring is a type of indirect monitoring.
Examples of seismic monitoring of geological sequestration are the Sleipner sequestration project, the Frio CO2 injection test and the CO2CRC Otway Project. Seismic monitoring can confirm the presence of CO2 in a given region and map its lateral distribution, but is not sensitive to the concentration.
Tracer
Organic chemical tracers, containing no radioactive or cadmium components, can be used during the injection phase in a CCS project where CO2 is injected into an existing oil or gas field, whether for enhanced oil recovery (EOR), pressure support, or storage. The tracers are compatible with CO2, yet unique and distinguishable from the CO2 itself and from other molecules present in the subsurface. Because laboratory methods can detect tracers at extremely low concentrations, regular samples at the producing wells will reveal whether injected CO2 has migrated from the injection point to the producing well; a small tracer amount is therefore sufficient to monitor large-scale subsurface flow patterns. For this reason, tracer methodology is well suited to monitoring the state and possible movements of CO2 in CCS projects. Tracers can thus aid CCS projects by providing assurance that CO2 is contained in the desired location in the subsurface. In the past, this technology has been used to monitor and study movements in CCS projects in Algeria, the Netherlands and Norway (Snøhvit).
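To make the "small tracer amount" point concrete, here is a minimal sketch that estimates the tracer mass needed to remain detectable after full dilution in a swept reservoir volume; the detection limit, volume, and safety factor are illustrative assumptions, not values from any actual project.

```python
# Minimal sketch: tracer mass needed to stay detectable after dilution in a
# swept reservoir volume. All numbers are illustrative assumptions.

def min_tracer_mass_kg(detection_limit_kg_per_m3: float,
                       swept_volume_m3: float,
                       safety_factor: float = 100.0) -> float:
    """Smallest tracer mass that stays above the detection limit even when
    uniformly diluted through the whole swept volume."""
    return detection_limit_kg_per_m3 * swept_volume_m3 * safety_factor

# Example: a 1e-12 kg/m3 detection limit and a 1e8 m3 swept volume imply
# only ~0.01 kg (10 g) of tracer, illustrating why small amounts suffice.
print(min_tracer_mass_kg(1e-12, 1e8))  # -> 0.01
```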
Surface
Eddy covariance measurements provide a measure of the vertical CO2 flux. Eddy covariance towers could potentially detect leaks, after accounting for the natural carbon cycle, such as photosynthesis and plant respiration. An example of eddy covariance techniques is the Shallow Release test. Another similar approach is to use accumulation chambers for spot monitoring. These chambers are sealed to the ground, with an inlet and outlet flow stream connected to a gas analyzer, and also measure vertical flux. Monitoring a large site would require a network of chambers.
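A minimal sketch of the eddy covariance calculation: the vertical CO2 flux is estimated as the time-averaged covariance between fluctuations of vertical wind speed and CO2 density. The synthetic arrays below stand in for the high-frequency anemometer and gas-analyzer data a real tower would record.

```python
import numpy as np

# Minimal eddy covariance sketch: flux = mean(w' * c'), the covariance of
# fluctuations in vertical wind speed (w) and CO2 density (c).
# Synthetic data stand in for high-frequency tower measurements.

rng = np.random.default_rng(0)
n = 10_000                                     # samples, e.g. 10 Hz for ~17 min
w = rng.normal(0.0, 0.3, n)                    # vertical wind speed, m/s
c = 0.75 + 0.1 * w + rng.normal(0.0, 0.02, n)  # CO2 density, g/m^3

w_prime = w - w.mean()                         # fluctuations about the mean
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)              # vertical CO2 flux, g m^-2 s^-1

print(f"estimated CO2 flux: {flux:.4f} g m^-2 s^-1")
```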
InSAR
Interferometric synthetic aperture radar (InSAR) is a radar technique used in geodesy and remote sensing. By comparing radar images of the same area acquired at different times, it can detect small ground-surface deformations, such as uplift above a storage site caused by pressure changes from injected CO2.
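A minimal sketch of the underlying interferometric relationship, under simplifying assumptions (motion purely along the radar line of sight and an already-unwrapped phase): the measured phase difference maps to displacement through the radar wavelength.

```python
import math

# Minimal InSAR sketch: line-of-sight displacement from unwrapped
# interferometric phase. Because the radar path is two-way, a phase change
# of 2*pi corresponds to half a wavelength of displacement.

def los_displacement_m(phase_rad: float, wavelength_m: float) -> float:
    return phase_rad * wavelength_m / (4.0 * math.pi)

# Example with illustrative values: a 1-radian phase change at a ~5.6 cm
# (C-band) wavelength corresponds to about 4.5 mm of line-of-sight motion.
print(los_displacement_m(1.0, 0.056))  # -> ~0.00446
```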

Theorem Proving System
https://en.wikipedia.org/wiki/Theorem%20Proving%20System

The Theorem Proving System (TPS) is an automated theorem proving system for first-order and higher-order logic. TPS was developed at Carnegie Mellon University. An educational version of it is known as ETPS (Educational Theorem Proving System).
External links
Theorem Proving System web page

Problem frames approach
https://en.wikipedia.org/wiki/Problem%20frames%20approach

Problem analysis or the problem frames approach is an approach to software requirements analysis. It was developed by British software consultant Michael A. Jackson in the 1990s.
History
The problem frames approach was first sketched by Jackson in his book Software Requirements & Specifications (1995) and in a number of articles in various journals devoted to software engineering. It has received its fullest description in his Problem Frames: Analysing and Structuring Software Development Problems (2001).
A session on problem frames was part of the 9th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ), held in Klagenfurt/Velden, Austria, in 2003. The First International Workshop on Applications and Advances in Problem Frames was held as part of ICSE 2004 in Edinburgh, Scotland. One outcome of that workshop was a 2005 special issue on problem frames in the International Journal of Information and Software Technology.
The Second International Workshop on Applications and Advances in Problem Frames was held as part of ICSE 2006 in Shanghai, China. The Third International Workshop on Applications and Advances in Problem Frames (IWAAPF) was held as part of ICSE 2008 in Leipzig, Germany. In 2010, the IWAAPF workshops were replaced by the International Workshop on Applications and Advances of Problem-Orientation (IWAAPO). IWAAPO broadens the focus of the workshops to include alternative and complementary approaches to software development that share an emphasis on problem analysis. IWAAPO-2010 was held as part of ICSE 2010 in Cape Town, South Africa.
Today, research on the problem frames approach is being conducted at a number of universities, most notably at the Open University in the United Kingdom as part of its Relating Problem & Solution Structures research theme.
The ideas in the problem frames approach have been generalized into the concepts of problem-oriented development (POD) and problem-oriented engineering (POE), of which problem-oriented software engineering (POSE) is a particular sub-category. The first International Workshop on Problem-Oriented Development was held in June 2009.
Overview
Fundamental philosophy
Problem analysis or the problem frames approach is an approach — a set of concepts — to be used when gathering requirements and creating specifications for computer software. Its basic philosophy is strikingly different from other software requirements methods in insisting that:
The best way to approach requirements analysis is through a process of parallel — not hierarchical — decomposition of user requirements.
User requirements are about relationships in the real world (the application domain), not about the software system or even the interface with the software system.
The approach uses three sets of conceptual tools.
Tools for describing specific problems
Concepts used for describing specific problems include:
phenomena (of various kinds, including events),
problem context,
problem domain,
solution domain (aka the machine),
shared phenomena (which exist in domain interfaces),
domain requirements (which exist in the problem domains) and
specifications (which exist at the problem domain:machine interface).
The graphical tools for describing problems are the context diagram and the problem diagram.
Tools for describing classes of problems (problem frames)
The Problem Frames Approach includes concepts for describing classes of problems. A recognized class of problems is called a problem frame (roughly analogous to a design pattern).
In a problem frame, domains are given general names and described in terms of their important characteristics. A domain, for example, may be classified as causal (reacts in a deterministic, predictable way to events) or biddable (can be bid, or asked, to respond to events, but cannot be expected always to react to events in any predictable, deterministic way). (A biddable domain usually consists of people.)
The graphical tool for representing a problem frame is a frame diagram. A frame diagram looks generally like a problem diagram except for a few minor differences—domains have general, rather than specific, names; and rectangles representing domains are annotated to indicate the type (causal or biddable) of the domain.
A list of recognized classes of problems (problem frames)
The first group of problem frames identified by Jackson included:
required behavior
commanded behavior
information display
simple workpieces
transformation
Subsequently, other researchers have described or proposed additional problem frames.
Describing problems
The problem context
Problem analysis considers a software application to be a kind of software machine. A software development project aims to change the problem context by creating a software machine and adding it to the problem context, where it will bring about certain desired effects.
The particular portion of the problem context that is of interest in connection with a particular problem — the particular portion of the problem context that forms the context of the problem — is called the application domain.
After the software development project has been finished, and the software machine has been inserted into the problem context, the problem context will contain both the application domain and the machine. At that point, the situation will look like this:
The problem context contains the machine and the application domain. The machine interface is where the Machine and the application domain meet and interact.
The same situation can be shown in a different kind of diagram, a context diagram, this way:
The context diagram
The problem analyst's first task is to truly understand the problem. That means understanding the context in which the problem is set. And that means drawing a context diagram.
Here is Jackson's description of examining the problem context, in this case the context for a bridge to be built:
You're an engineer planning to build a bridge across a river. So you visit the site. Standing on one bank of the river, you look at the surrounding land, and at the river traffic. You feel how exposed the place is, and how hard the wind is blowing and how fast the river is running. You look at the bank and wonder what faults a geological survey will show up in the rocky terrain. You picture to yourself the bridge that you are going to build. (Software Requirements & Specifications: "The Problem Context")
An analyst trying to understand a software development problem must go through the same process as the bridge engineer. He starts by examining the various problem domains in the application domain. These domains form the context into which the planned Machine must fit. Then he imagines how the Machine will fit into this context. And then he constructs a context diagram showing his vision of the problem context with the Machine installed in it.
The context diagram shows the various problem domains in the application domain, their connections, and the Machine and its connections to (some of) the problem domains. Here is what a context diagram looks like.
This diagram shows:
the machine to be built. The dark border helps to identify the box that represents the Machine.
the problem domains that are relevant to the problem.
the solid lines represent domain interfaces — areas where domains overlap and share phenomena in common.
A domain is simply a part of the world that we are interested in. It consists of phenomena — individuals, events, states of affairs, relationships, and behaviors.
A domain interface is an area where domains connect and communicate. Domain interfaces are not data flows or messages. An interface is a place where domains partially overlap, so that the phenomena in the interface are shared phenomena — they exist in both of the overlapping domains.
You can imagine domains as being like primitive one-celled organisms (like amoebas). They are able to extend parts of themselves into pseudopods. Imagine that two such organisms extend pseudopods toward each other in a sort of handshake, and that the cellular material in the area where they are shaking hands is mixing, so that it belongs to both of them. That's an interface.
In the following diagram, X is the interface between domains A and B. Individuals that exist or events that occur in X, exist or occur in both A and B.
Shared individuals, states, and events may look different to the domains that share them. Consider, for example, an interface between a computer and a keyboard. Where the keyboard domain sees the event "keyboard operator presses the spacebar", the computer sees the same event as byte hex("20") appearing in the input buffer.
Problem diagrams
The problem analyst's basic tool for describing a problem is a problem diagram. Here is a generic problem diagram.
In addition to the kinds of things shown on a context diagram, a problem diagram shows:
a dotted oval representing the requirement to bring about certain effects in the problem domains.
dotted lines representing requirement references — references in the requirement to phenomena in the problem domains.
An interface that connects a problem domain to the machine is called a specification interface and the phenomena in the specification interface are called specification phenomena. The goal of the requirements analyst is to develop a specification for the behavior that the Machine must exhibit at the Machine interface in order to satisfy the requirement.
Here is an example of a real, if simple, problem diagram.
This problem might be part of a computer system in a hospital. In the hospital, patients are connected to sensors that can detect and measure their temperature and blood pressure. The requirement is to construct a Machine that can display information about patient conditions on a panel in the nurses' station.
The name of the requirement is "Display ~ Patient Condition". The tilde (~) indicates that the requirement is about a relationship or correspondence between the panel display and patient conditions. The arrowhead indicates that the requirement reference connected to the Panel Display domain is also a requirement constraint. That means the requirement contains some kind of stipulation that the panel display must meet. In short, the requirement is that the panel display must show information that matches and accurately reports the condition of the patients.
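To make these elements concrete, here is a minimal sketch of how the patient-monitoring problem diagram could be represented as data. The class and field names are hypothetical illustrations of the approach's concepts, not part of Jackson's notation.

```python
# Minimal sketch: the patient-monitoring problem diagram as data structures.
# Class and field names are hypothetical, chosen to mirror the concepts of
# the problem frames approach (domains, shared phenomena, requirement).
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    kind: str                 # "machine", "causal", "biddable", or "lexical"

@dataclass
class Interface:
    domains: tuple[Domain, Domain]
    shared_phenomena: list[str]

@dataclass
class Requirement:
    name: str
    references: list[Domain]  # dotted requirement references
    constrains: list[Domain]  # references with arrowheads (constraints)

machine = Domain("Monitor Machine", "machine")
patients = Domain("Patients", "causal")
panel = Domain("Panel Display", "causal")

diagram = {
    "domains": [machine, patients, panel],
    "interfaces": [
        Interface((patients, machine), ["temperature", "blood pressure"]),
        Interface((machine, panel), ["display commands"]),
    ],
    "requirement": Requirement(
        "Display ~ Patient Condition",
        references=[patients],   # what the panel must report
        constrains=[panel],      # the stipulation falls on the panel
    ),
}
print(diagram["requirement"].name)
```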
Describing classes of problems
Problem frames
A problem frame is a description of a recognizable class of problems, where the class of problems has a known solution. In a sense, problem frames are problem patterns.
Each problem frame has its own frame diagram. A frame diagram looks essentially like a problem diagram, but instead of showing specific domains and requirements, it shows types of domains and types of requirements; domains have general, rather than specific, names; and rectangles representing domains are annotated to indicate the type (causal or biddable) of the domain.
Variant frames
In Problem Frames, Jackson discussed variants of the five basic problem frames that he had identified. A variant typically adds a domain to the problem context.
a description variant introduces a description lexical domain
an operator variant introduces an operator
a connection variant introduces a connection domain between the machine and the central domain with which it interfaces
a control variant introduces no new domain; it changes the control characteristics of interface phenomena
Problem concerns
Jackson also discusses certain kinds of concerns that arise when working with problem frames.
Particular concerns
overrun
initialization
reliability
identities
completeness
Composition concerns
commensurable descriptions
consistency
precedence
interference
synchronization
Recognized problem frames
The first problem frames identified by Jackson included:
required behavior
commanded behavior
information display
simple workpieces
transformation
Subsequently, other researchers have described or proposed additional problem frames.
Required-behavior problem frame
The intuitive idea behind this problem frame is:
There is some part of the physical world whose behavior is to be controlled so that it satisfies certain conditions. The problem is to build a machine that will impose that control.
Commanded-behavior problem frame
The intuitive idea behind this problem frame is:
There is some part of the physical world whose behavior is to be controlled in accordance with commands issued by an operator. The problem is to build a machine that will accept the operator's commands and impose the control accordingly.
Information display problem frame
The intuitive idea behind this problem frame is:
There is some part of the physical world about whose states and behavior certain information is continually needed. The problem is to build a machine that will obtain this information from the world and present it at the required place in the required form.
Simple workpieces problem frame
The intuitive idea behind this problem frame is:
A tool is needed to allow a user to create and edit a certain class of computer-processible text or graphic objects, or similar structures, so that they can be subsequently copied, printed, analyzed or used in other ways. The problem is to build a machine that can act as this tool.
Transformation problem frame
The intuitive idea behind this problem frame is:
There are some given computer-readable input files whose data must be transformed to give certain required output files. The output data must be in a particular format, and it must be derived from the input data according to certain rules. The problem is to build a machine that will produce the required outputs from the inputs.
Problem analysis and the software development process
When problem analysis is incorporated into the software development process, the software development lifecycle starts with the problem analyst, who studies the situation and:
creates a context diagram
gathers a list of requirements and adds a requirements oval to the context diagram, creating a grand "all-in-one" problem diagram. (However, in many cases actually creating an all-in-one problem diagram may be impractical or unhelpful: there will be too many requirements references criss-crossing the diagram to make it very useful.)
decomposes the all-in-one problem and problem diagram into simpler problems and simpler problem diagrams. These problems are projections, not subsets, of the all-in-one diagram.
continues to decompose problems until each problem is simple enough that it can be seen to be an instance of a recognized problem frame. Each subproblem description includes a description of the specification interfaces for the machine to be built.
At this point, problem analysis (problem decomposition) is complete. The next step is to reverse the process and to build the desired software system through a process of solution composition.
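A minimal sketch of the decomposition loop just described, assuming hypothetical helper functions for frame matching and splitting; the frame names are Jackson's, everything else is illustrative.

```python
# Minimal sketch of problem decomposition: keep splitting problems until
# each one is an instance of a recognized problem frame. The frame names
# are Jackson's; the helpers and problem representation are hypothetical.

KNOWN_FRAMES = {"required behavior", "commanded behavior",
                "information display", "simple workpieces", "transformation"}

def matches_known_frame(problem: dict) -> bool:
    # Placeholder: in practice the analyst compares the problem diagram's
    # domain types and requirement shape against each frame diagram.
    return problem.get("frame") in KNOWN_FRAMES

def decompose(problem: dict) -> list[dict]:
    # Placeholder: split into parallel subproblems (projections, not
    # subsets, of the all-in-one problem).
    return problem["subproblems"]

def analyse(all_in_one: dict) -> list[dict]:
    """Return the frame-sized subproblems of the all-in-one problem."""
    pending, leaves = [all_in_one], []
    while pending:
        p = pending.pop()
        if matches_known_frame(p):
            leaves.append(p)              # simple enough: stop here
        else:
            pending.extend(decompose(p))  # otherwise decompose further
    return leaves
```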
The solution composition process is not yet well understood, and is still very much a research topic. Extrapolating from hints in Software Requirements & Specifications, we can guess that the software development process would continue with the developers, who would:
compose the multiple subproblem machine specifications into the specification for a single all-in-one machine: a specification for a software machine that satisfies all of the customer's requirements. This is a non-trivial activity — the composition process may very well raise composition problems that need to be solved.
implement the all-in-one machine by going through the traditional code/test/deploy process.
Similar approaches
There are a few other software development ideas that are similar in some ways to problem analysis.
The notion of a design pattern is similar to Jackson's notion of a problem frame. It differs in that a design pattern is used for recognizing and handling design issues (often design issues in specific object-oriented programming languages such as C++ or Java) rather than for recognizing and handling requirements issues. A further difference is that design patterns describe solutions, whereas problem frames represent problems. Design patterns also tend to account for semantic outcomes that are not native to the programming language in which they are to be implemented; in this view, problem frames are a native meta-notation for the domain of problems, whereas design patterns are a catalogue of technical debt left behind by language implementers.
Aspect-oriented programming, AOP (also known as aspect-oriented software development, AOSD) is similarly interested in parallel decomposition, which addresses what AOP proponents call cross-cutting concerns or aspects. AOP addresses concerns that are much closer to the design and code-generation phase than to the requirements analysis phase.
AOP has moved into requirements engineering notations such as the ITU-T Z.151 User Requirements Notation (URN). In URN, aspects apply across all the intentional elements. AOP can also be applied to requirements modelling that uses problem frames as a heuristic. URN models driven by problem frame thinking, and interleaved with aspects, allow for the inclusion of architectural tactics in the requirements model.
Martin Fowler's book Analysis Patterns is very similar to problem analysis in its search for patterns. It doesn't really present a new requirements analysis method, however. And the notion of parallel decomposition — which is so important for problem analysis — is not a part of Fowler's analysis patterns.
Jon G. Hall and Lucia Rapanotti, together with Jackson, have developed the Problem Oriented Software Engineering (POSE) framework, which shares the problem frames foundations. Since 2005, Hall and Rapanotti have extended POSE into Problem Oriented Engineering (POE), which provides a framework for engineering design, including a development process model and assurance-driven design, and may be scalable to projects that include many stakeholders and that combine diverse engineering disciplines such as software and education provision.
External links
http://mcs.open.ac.uk/mj665/ is Michael A. Jackson's home page
http://www.jacksonworkbench.co.uk/stevefergspages/pfa/index.html has papers and articles on the Problem Frames Approach

Joseph Kennedy (professor)
https://en.wikipedia.org/wiki/Joseph%20Kennedy%20%28professor%29

Joseph P. Kennedy (18 May 1928 – 21 July 2024) was a Distinguished Professor of Polymer Science and Chemistry at the University of Akron, noted particularly for inventing a polymer coating for a drug-tipped stent that is highly compatible with human tissue; the coating was successfully commercialized by Boston Scientific and credited with saving the lives of 6 million patients. He made important contributions to the field of carbocationic polymerization.
Personal
Kennedy spent his youth in Budapest, Hungary during World War II and the beginning of the Cold War. His father was killed by the Nazis, and his mother was imprisoned by communists. In 1948, he was kicked out of the college where he earned his first degree in chemistry, "for being too bourgeois".
At age 19, he fled to Austria as an illegal immigrant. He gained citizenship upon earning his doctorate in biochemistry from the University of Vienna, and he then completed postgraduate work at the Sorbonne in France.
In 1954, he immigrated to Canada to be close to family and to take another postdoctoral position in Montreal. There he met Ingrid, who later became his wife.
Following many years of success in his field, Kennedy accepted an Honorary Doctorate from Kossuth University in 1989. He was also elected a member of the Hungarian Academy of Sciences in 1993.
Career
Kennedy's first employment in America was in 1957 with the chemical company Celanese in Summit, N.J. He later joined Exxon, where he apprenticed under Robert M. Thomas, and held a series of positions with increasing responsibility.
His interest in pure science eventually led him to seek a position in academia. In 1970, he accepted a position with the University of Akron, where he helped to develop the College of Polymer Science and Polymer Engineering.
Awards
Döbereiner Medaille, F. Schiller Universität, Jena, DDR, 1985
Honorary Doctorate (Doctor Honoris Causa, D.H.C.), Kossuth University, Debrecen, Hungary, 1989
Elected External Member of the Hungarian Academy of Sciences, 1993
George S. Whitby Award for Excellence in Teaching and Research, Rubber Division, Am. Chem. Soc., 1996
Award for Distinguished Service to Polymer Science, Society of Polymer Science, Japan, 2000
Charles Goodyear Medal, Rubber Division, American Chemical Society, 2008
Honorary Doctorate (D.H.C.), The University of Akron, 2008
Elected Fellow of American Institute of Medical and Biological Engineering (AIMBE), 2010
Heart Champion Award, American Heart Association, 2011
Ohio Patent Legacy Award, The Ohio Academy of Science, 2011

Separation process
https://en.wikipedia.org/wiki/Separation%20process

A separation process is a method that converts a mixture or a solution of chemical substances into two or more distinct product mixtures; it is a scientific process of separating two or more substances in order to obtain purity. At least one product mixture from the separation is enriched in one or more of the source mixture's constituents. In some cases, a separation may fully divide the mixture into pure constituents. Separations exploit differences in chemical properties or physical properties (such as size, shape, charge, mass, density, or chemical affinity) between the constituents of a mixture.
Processes are often classified according to the particular properties they exploit to achieve separation. If no single difference can be used to accomplish the desired separation, multiple operations can often be combined to achieve the desired end.
With a few exceptions, elements or compounds exist in nature in an impure state. Often these raw materials must go through a separation before they can be put to productive use, making separation techniques essential for the modern industrial economy.
The purpose of separation may be:
analytical: to identify what proportion of a mixture is attributable to each component, without attempting to harvest the fractions.
preparative: to "prepare" fractions for input into processes that benefit when components are separated.
Separations may be performed on a small scale, as in a laboratory for analytical purposes, or on a large scale, as in a chemical plant.
Complete and incomplete separation
Some types of separation require complete purification of a certain component. An example is the production of aluminum metal from bauxite ore through electrolysis refining. In contrast, an incomplete separation process may specify an output to consist of a mixture instead of a single pure component. A good example of an incomplete separation technique is oil refining. Crude oil occurs naturally as a mixture of various hydrocarbons and impurities. The refining process splits this mixture into other, more valuable mixtures such as natural gas, gasoline and chemical feedstocks, none of which are pure substances, but each of which must be separated from the raw crude.
In both complete separation and incomplete separation, a series or cascade of separations may be necessary to obtain the desired end products. In the case of oil refining, crude is subjected to a long series of individual distillation steps, each of which produces a different product or intermediate.
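As a minimal illustration of why cascades work, the sketch below chains idealized equilibrium stages, each enriching the lighter component according to a constant relative volatility; the starting composition and volatility are illustrative assumptions, not data for any real column.

```python
# Minimal sketch: a cascade of ideal separation stages. Each stage enriches
# the light component according to a constant relative volatility alpha
# (illustrative value); x is the light component's mole fraction.

def next_stage(x: float, alpha: float) -> float:
    """Mole fraction of the light component after one ideal equilibrium stage."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

x, alpha = 0.10, 2.5          # start at 10% light component, alpha = 2.5
for stage in range(1, 6):
    x = next_stage(x, alpha)
    print(f"stage {stage}: x = {x:.3f}")
# Purity climbs stage by stage (0.217, 0.410, 0.635, 0.813, 0.916),
# showing how a cascade of separations approaches a pure product.
```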
List of separation techniques
Centrifugation and cyclonic separation, separates based on density differences
Chelation
Chromatography separates dissolved substances by different interaction with (i.e., travel through) a material.
High-performance liquid chromatography (HPLC)
Thin-layer chromatography (TLC)
Countercurrent chromatography (CCC)
Droplet countercurrent chromatography (DCC)
Paper chromatography
Ion chromatography
Size-exclusion chromatography (SEC)
Affinity chromatography
Centrifugal partition chromatography
Gas chromatography and Inverse gas chromatography
Crystallization
Decantation
Demister (vapor), removes liquid droplets from gas streams
Distillation, used for mixtures of liquids with different boiling points
Drying, removes liquid from a solid by vaporization or evaporation
Electrophoresis, separates organic molecules based on their different interaction with a gel under an electric potential (i.e., different travel)
Capillary electrophoresis
Electrostatic separation, works on the principle of corona discharge, where two plates are placed close together and high voltage is applied. This high voltage is used to separate the ionized particles.
Elutriation
Evaporation
Extraction
Leaching
Liquid–liquid extraction
Solid phase extraction
Supercritical fluid extraction
Subcritical fluid extraction
Field flow fractionation
Filtration – Mesh, bag and paper filters are used to remove large particulates suspended in fluids (e.g., fly ash) while membrane processes including microfiltration, ultrafiltration, nanofiltration, reverse osmosis, dialysis (biochemistry) utilising synthetic membranes, separates micrometre-sized or smaller species
Flocculation, separates a solid from a liquid in a colloid, by use of a flocculant, which promotes the solid clumping into flocs
Fractional distillation
Fractional freezing
Magnetic separation
Oil-water separation, gravimetrically separates suspended oil droplets from waste water in oil refineries, petrochemical and chemical plants, natural gas processing plants and similar industries
Precipitation
Recrystallization
Scrubbing, separation of particulates (solids) or gases from a gas stream using liquid.
Sedimentation, separates based on density differences under gravity
Gravity separation
Sieving
Adsorption, adhesion of atoms, ions or molecules of gas, liquid, or dissolved solids to a surface
Stripping
Sublimation
Vapor–liquid separation, separates by gravity, based on the Souders–Brown equation (a minimal sketch of the equation follows this list)
Winnowing
Zone refining
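Because the vapor–liquid separation entry above invokes the Souders–Brown equation, here is a minimal sketch of it; the vapor-velocity coefficient k and the fluid densities are illustrative assumptions, since k depends on the particular separator design.

```python
import math

# Minimal sketch of the Souders-Brown equation for vapor-liquid separators:
#   v_max = k * sqrt((rho_liquid - rho_vapor) / rho_vapor)
# v_max is the maximum allowable vapor velocity before liquid entrainment.

def souders_brown_vmax(k_m_s: float, rho_liq: float, rho_vap: float) -> float:
    return k_m_s * math.sqrt((rho_liq - rho_vap) / rho_vap)

# Example with illustrative values: k = 0.107 m/s, liquid water at
# ~1000 kg/m3 and vapor at ~2 kg/m3 give a maximum velocity of ~2.4 m/s.
print(souders_brown_vmax(0.107, 1000.0, 2.0))  # -> ~2.39
```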
See also
Chemical process – a method or means of changing one or more chemicals or chemical compounds.
External links
Separation of Mixtures Using Different Techniques, instructions for performing classroom experiments
Separation of Components of a Mixture, instructions for performing classroom experiments

Pendant
https://en.wikipedia.org/wiki/Pendant

A pendant is a loose-hanging piece of jewellery, generally attached by a small loop to a necklace, which may be known as a "pendant necklace". A pendant earring is an earring with a piece hanging down. The name stems from the Latin word pendere and the Old French word pendre, both of which translate to "to hang down". In modern French, pendant is the gerund form of pendre ("to hang") and also means "during". The extent to which the design of a pendant can be incorporated into an overall necklace makes it not always accurate to treat them as separate items.
In some cases, though, the separation between necklace and pendant is far clearer.
Overview
Pendants are among the oldest recorded types of bodily adornment. Stone, shell, pottery, and more perishable materials were used. Ancient Egyptians commonly wore pendants, some shaped like hieroglyphs.
Pendants can have several functions, which may be combined:
Award (e.g., Scouting Ireland Chief Scout's Award, Order of CúChulainn)
Identification (e.g., religious symbols, sexual symbols, symbols of rock bands)
Ornamentation
Ostentation (e.g., jewels)
Protection (e.g., amulets, religious symbols)
Self-affirmation (e.g., initials, names)
The many specialized types of pendants include lockets which open, often to reveal an image, and pendilia, which hang from larger objects of metalwork.
Types
Throughout the ages, pendants have come in a variety of forms to serve a variety of purposes.
Amulet
Though amulets come in many forms, a wearable amulet worn around the neck or on the arm or leg in the form of a pendant is the most common. These are objects believed to possess magical or spiritual power to protect the wearer from danger or dispel evil influences.
Talisman
Similar to an amulet, a talisman is an object believed to possess supernatural traits. However, while an amulet is strictly a defensive object, a talisman is meant to confer special benefits or powers upon the wearer.
Locket
A locket is a small object that opens to reveal a space which serves to hold a small object, usually a photograph or a curl of hair. They typically come in the form of a pendant hanging from a necklace, though they will occasionally be hung from a charm bracelet.
Medallion
A medallion is most often a coin-shaped piece of metal worn as a pendant around the neck or pinned onto clothing. These are generally granted as awards, recognitions, or religious blessings.
Painting
Pendant is also the name given to one of two paintings conceived as a pair. They are often given as gifts by couples, and some cultures consider the act of giving one a marriage proposal.
Functional pendants
Tools worn as pendants include Maori pounamu pendants. Shepherd's whistles, bosun's whistles, and ocarinas can also be made as pendants. Portable astronomical and navigational instruments were made as pendants.
In the first decade of the 21st century, jewellers started to incorporate USB flash drives into pendants.
Fashion pendants
Fashion pendants consist of a small creative piece, often made from precious or non-precious stones and metals such as diamonds or pearls, hanging freely from a chain or necklace. These are generally worn as a statement piece or a fashion ornament.
Other types
Harness pendant
See also
Boule de Genève
Petit chien à bélière
Yupei - Chinese jade pendant

Crab-eating macaque
https://en.wikipedia.org/wiki/Crab-eating%20macaque

The crab-eating macaque (Macaca fascicularis), also known as the long-tailed macaque or cynomolgus macaque, is a cercopithecine primate native to Southeast Asia. As a synanthropic species, the crab-eating macaque thrives near human settlements and in secondary forest. Crab-eating macaques have developed attributes and roles assigned to them by humans, ranging from cultural perceptions as being smart and adaptive, to being sacred animals, being regarded as vermin and pests, and becoming resources in modern biomedical research. They have been described as a species on the edge, living on the edge of forests, rivers, and seas, at the edge of human settlements, and perhaps on the edge of rapid extinction.
Crab-eating macaques are omnivorous and frugivorous. They live in matrilineal groups ranging from 10 to 85 individuals, with groups exhibiting female philopatry and males emigrating from the natal group at puberty. The crab-eating macaque is the only Old World monkey known to use stone tools in its daily foraging, and it engages in robbing and bartering behavior at some tourist locations.
The crab-eating macaque is the most traded primate species, the most culled primate species, the most persecuted primate species and also the most popular species used in scientific research. Due to these threats, the crab-eating macaque was listed as Endangered on the IUCN Red List in 2022.
Etymology
Macaca comes from the Portuguese word macaco, which was derived from makaku, a word in Ibinda, a language of Central Africa (kaku means monkey in Ibinda). The specific epithet fascicularis is Latin for a small band or stripe. Sir Thomas Raffles, who gave the animal its scientific name in 1821, did not specify what he meant by the use of this word.
In Indonesia and Malaysia, the crab-eating macaque and other macaque species are known generically as kera.
The crab-eating macaque has several common names. It is often referred to as the long-tailed macaque due to its tail, which is about the length of its head and body combined. The name crab-eating macaque refers to it being seen foraging on beaches for crabs. Another common name for M. fascicularis, often used in laboratory settings, is the cynomolgus monkey, which derives from the Greek Kynamolgoi, meaning "dog milkers". It has also been suggested that cynomolgus refers to a race of humans with long hair and handsome beards who used dogs for hunting according to Aristophanes of Byzantium, who seemingly derived the etymology of the word cynomolgus from the Greek κύων, cyon 'dog' (gen. cyno-s) and the verb 'to milk' (adj. amolg-os), by claiming that they milked female dogs.
Perceptions and terminology
Crab-eating macaques are understood and perceived in many ways: smart, pestiferous, exploited, sacred, vermin, invasive.
In 2000, the crab-eating macaque was placed on the list of the 100 most invasive species. For example, it is considered an invasive alien species (IAS) on Mauritius: articles argue that long-tailed macaques spread the seeds of invasive plants, compete with native species such as the Mauritian flying fox, and have a detrimental impact on threatened native species. Several authors have pointed out that the present evidence indicates that predation on birds by monkeys may have been overestimated. Others address these accusations by pointing out that crab-eating macaques do not prefer primary forest, making it unlikely that Mauritius macaques were ever a major source of indigenous forest destruction; the primary driver of bird extinction has been habitat destruction by humans. Sussman and Tattersall mention that the Dutch abandoned the island in 1710–12 due to monkeys and rats destroying plantations, and point out that the human population was low at this time and the crab-eating macaques would have had plenty of primary forest to exploit, yet chose to brave the dangers of raiding plantations. They do not deny that macaques on Mauritius prey on bird eggs and disseminate the seeds of exotic plants, yet the major loss of species on Mauritius is due to habitat loss caused by humans; macaques are successful because they prefer secondary forest and disturbed habitats. This is significant because the perception of crab-eating macaques as invasive and destructive to "native" biodiversity is used as a justification for their use in biomedical research. It is important to be aware of perceptions, and of how we categorize other beings, because the label of "pest" or "invasive" provides justification and moral comfort about killing those that do not "belong": these lives are viewed as not legitimate, killable, bare life lacking grievability.
"Weed" and "non-weed" species are distinguished based on that species ability to thrive in close proximity and association with human settlements. This label was not intentionally proposed to disparage crab-eating macaques but this term, like pest and invasive, can affect how people perceive this species and can trigger negatives perceptions.
Taxonomy
Previously, ten subspecies of Macaca fascicularis were recognized, but the subspecies status of the Philippine long-tailed macaque (M.f. philippinensis) is under dispute, and it is tentatively removed from IUCN Red List assessments, with those individuals included with M.f. fascicularis.
M.f. fascicularis, common long-tailed macaque – Indonesia, Malaysia, Philippines, Thailand, Cambodia, Singapore, Vietnam
M.f. aurea, Burmese long-tailed macaque – Myanmar, Laos, western and southern Thailand near Myanmar border
M.f. atriceps, Dark-crowned long-tailed macaque – Kram Yai Island, Thailand
M.f. condorensis, Con Song long-tailed macaque – Con Son Island, Hon Ba Island, Vietnam
M.f. karimondjawae, Karimunjawa long-tailed macaque – Karimunjawa Islands, Indonesia
M.f. umbrosa, Nicobar long-tailed macaque – Nicobar islands, India
M.f. fusca, Simeulue long-tailed macaque – Simeulue Island, Indonesia
M.f. lasiae, Lasia long-tailed macaque – Lasia island, Indonesia
M.f. tua, Maratua long-tailed macaque – Maratua Island, Indonesia
M.f. fascicularis has the largest range, followed by M.f. aurea. The other seven subspecies are isolated on small islands: M.f. atriceps, M.f. condorensis, and M.f. karimondjawae all populate small shallow-water fringing islands; M.f. umbrosa, M.f. fusca, M.f. lasiae, and M.f. tua all inhabit deep-water fringing islands.
Evolution
The macaque originated in northeastern Africa some 7 million years ago, spread through most of continental Asia, and subdivided into four groups (sylvanus, sinica, silenus, and fascicularis). The earliest split in the genus Macaca likely occurred ~4.5 mya, between an ancestor of the silenus group and a fascicularis-like ancestor from which non-silenus species later evolved. The species of the fascicularis group (which include M. fascicularis, M. mulatta, and M. fuscata) share a common ancestor that lived 2.5 mya. It has been suggested that M. fascicularis is the most plesiomorphic (ancestral) taxon in the fascicularis clade; thus it is argued that M. mulatta evolved from a fascicularis-like ancestor that reached the mainland from its homeland in Indonesia around 1 mya.
A phylogenetic analysis found evidence that the fascicularis group originated from an ancient hybridization between the sinica and silenus groups ~3.45–3.56 mya, soon after the initial separation of two parent lineages (proto-sinica and proto-silenus) ~3.86 mya. This divergence and subsequent hybridization occurred during rapid glacial-eustatic fluctuations in the early Pleistocene: high sea levels may have led to the initial separation of proto-sinica and proto-silenus while the subsequent lowering of sea levels facilitated the secondary contact needed for hybridization.
Known fossils indicate that crab-eating macaques have inhabited the Sunda Shelf since at least the early Pleistocene, ~1 mya. It is likely that crab-eating macaques were introduced to Timor and Flores (both on the east side of the Wallace line) by humans around 4,000–5,000 years ago. Crab-eating macaques are the only species on both sides of the Wallace line.
The possible stages of crab-eating macaque evolution and dispersal were proposed:
Stage 1: more than 1 million years ago, crab-eating macaques dispersed into the Sunda Shelf area. The earliest fossil record of crab-eating macaques was found in Java (this collection included H. erectus and leaf monkey species). They probably reached Java by dry land during a period of glacial advance and low sea levels.
Stage 2: around 160 thousand years ago, the dispersal and isolation of the progenitors of the strongly differentiated deep-water fringing-island populations occurred. These include M.f. umbrosa, M.f. fusca, and M.f. tua [Fooden includes M.f. philippinensis, but its subspecies status is currently under debate]. The progenitors of these subspecies are thought to have reached deep-water habitats during the penultimate glacial maximum, when sea levels were lower than at present; these populations became isolated during the interglacial period around 120 kya.
Stage 3: more than 18 thousand years ago, the differentiation of the progenitors of the populations of the Indochinese peninsula and the northern part of the Isthmus of Kra occurred. These subspecies, M.f. aurea and M.f. fascicularis, became differentiated before the last glacial maximum.
Stage 4: 18 thousand years ago, the dispersal and isolation of the progenitors of the weakly differentiated deep-water fringing-island populations occurred (M.f. fascicularis).
Stage 5: less than 18 thousand years ago, the isolation of the progenitors of the shallow-water fringing-island populations and of the populations in Penida and Lombok (deep water) occurred. These subspecies include M.f. karimondjawae, M.f. atriceps, M.f. condorensis, and M.f. fascicularis.
Stage 6: 4.5 thousand years ago, the dispersal and isolation of the progenitors of the populations in the eastern Lesser Sunda Islands (deep water) occurred (M.f. fascicularis).
Characteristics
Crab-eating macaques are sexually dimorphic: males weigh between 4.7 and 8.3 kg, while females weigh 2.5–5.7 kg. The height of an adult male is between 412 and 648 mm, and that of an adult female between 385 and 505 mm. Their tails are the length of their head and body combined. The dorsal pelage is generally greyish or brownish, with a white underbelly and black and white highlights around the crown and face. The face skin is brownish to pinkish, except for the eyelids, which are white. Adults are usually bearded on and around the face, except around the snout and eyes. Older females have the fullest beards, with males' being more whisker-like. Island subspecies seem to have darker, blackish pelage, while large-island and mainland subspecies are lighter.
Genetics
Hybridity
Along the northern part of their range, crab-eating macaques hybridize with rhesus macaques (M. mulatta). They have also been known to hybridize with southern pig-tailed macaques (M. nemestrina), and hybrids occur between subspecies as well. Rhesus and crab-eating macaques hybridize within a contact zone where their ranges overlap, which has been proposed to lie between 15 and 20 degrees north and includes Thailand, Myanmar, Laos, and Vietnam. Their offspring are fertile and continue to mate, which leads to a broad range of admixture proportions. Introgression from rhesus to crab-eating macaque populations extends beyond Indochina and the Kra Isthmus, whereas introgression from crab-eating to rhesus macaques is more restricted. There appears to be rhesus-biased and male-biased gene flow between rhesus and crab-eating macaque populations, which has led to different degrees of genetic admixture in the two species.
Distribution and habitat
The crab-eating macaque's native range encompasses most of mainland Southeast Asia, through the Malay Peninsula and Singapore, the Maritime Southeast Asia islands of Sumatra, Java, and Borneo, offshore islands, the islands of the Philippines, and the Nicobar Islands in the Bay of Bengal. This primate is a rare example of a terrestrial mammal that violates the Wallace line, being found across the Lesser Sunda Islands. It lives in a wide variety of habitats, including primary lowland rainforests, disturbed and secondary rainforests, shrubland, and riverine and coastal forests of nipa palm and mangrove. It also easily adjusts to human settlements; it is considered sacred at some Hindu temples and on some small islands, but a pest around farms and villages. Typically, it prefers disturbed habitats and forest periphery.
Introduction to other regions
Humans have transported crab-eating macaques to at least five islands (Mauritius, West Papua, Ngeaur, Tinjil Island near Java, and Kabaena Island off Sulawesi), as well as to the Kowloon Hills of Hong Kong.
There was no indigenous human population on Mauritius. Early exploration of Mauritius by Phoenicians, Swahili people, and Arab merchants has been suggested, but it was not until the early 16th century that there is hard evidence of human presence on the island, when the Portuguese used it as a replenishment stop. The Dutch reached the island in 1598 and attempted a permanent settlement from 1638 to 1658, when they abandoned the island; they resettled from 1664 to 1710, but abandoned the island again, due in part to monkeys and rats destroying plantations. Crab-eating macaques were brought to Mauritius either by the Portuguese or the Dutch in the late 1500s to early 1600s. This founder population likely came from Java, although a mixed origin has been suggested.
From the mid-1980s to mid-1990s, the population of crab-eating macaques on Mauritius was estimated at 35,000 to 40,000. The present population is not known, but estimates indicate it may be as low as 8,000. This significant decline is likely correlated with the booming macaque-breeding industry on Mauritius: because crab-eating macaques are considered invasive and destructive, this perception is used to justify their use in biomedical research. On Mauritius, macaques are also variously perceived as sacred, as a source of tourism, and as pets, pests, and food.
Crab-eating macaques first appeared on Ngeaur Island during German rule in the early 20th century. Population size has fluctuated between 400 and 800 individuals. The population has suffered losses from eradication efforts, yet it has survived typhoons and the WWII bombing of the island.
In the Kowloon Hills, where monkeys were released during the 1910s, there are groups of differing species and their hybrids. Rhesus macaques and crab-eating macaques interbred and hybridized; Tibetan macaques were also released but did not interbreed. The location has become a popular tourist attraction.
The porcine zona pellucida (PZP) immunocontraceptive vaccine, which causes infertility in females, is currently being tested in Hong Kong to investigate its use as a potential means of population control.
Crab-eating macaques have been in West Papua for around 30 to 100 years, but this population has not expanded, remaining at around 60 to 70 individuals.
There is little known of the population on Kabaena Island, Sulawesi. These crab-eating macaques appear to have distinct morphology, which may suggest that they have been on the island for a long period of time.
Between 1988 and 1994, a total of 520 crab-eating macaques including 58 males and 462 females were released on Tinjil Island for the purpose of starting a natural habitat breeding facility. This may be a sustainable way of supplying monkeys for research, but it is in a legal gray area for trading regulations, using captive bred codes (F, C) rather than wild-caught (W).
Population size
Because crab-eating macaques are synanthropic, which enhances their visibility to humans, their population size tends to be overestimated. Researchers have been raising alarms about crab-eating macaque population decline at least since 1986. Many authors cite a 40% decline in the entire crab-eating macaque population between 1980 and 2006; this comes from a population estimate of 5 million in the 1980s–90s and a population estimate of 3 million in 2006. It is unclear how the 3 million estimate was reached.
Using a noninvasive probability model to estimate the maximum population abundance, it was estimated that the current population of crab-eating macaques is 1 million, which reflects a continuous decline in the population: an 80% reduction over 35 years. This study used a model that overestimates population size, so the true decline is probably even greater.
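A quick arithmetic check of the percentages cited above, taking the published population estimates at face value:

```python
# Quick check of the cited declines, using the estimates quoted above.
est_1980s, est_2006, est_current = 5_000_000, 3_000_000, 1_000_000

decline_2006 = 1 - est_2006 / est_1980s     # 0.40 -> the cited 40% decline
decline_now = 1 - est_current / est_1980s   # 0.80 -> the cited 80% reduction

print(f"1980s to 2006: {decline_2006:.0%} decline")
print(f"1980s to present: {decline_now:.0%} decline over ~35 years")
```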
A population viability analysis (PVA) for crab-eating macaques revealed that the presence and absence of females in a population are key to its short- and long-term viability. Anything that disproportionately removes females is likely to threaten population viability; harvesting for biomedical research, for example, targets females.
Behavior and ecology
The crab-eating macaque is highly adaptive, living near and benefiting from humans and environmental modifications.
Group size and structure
Crab-eating macaques live in matrilineal groups ranging from 10 to 85 members, but most often fall in the range of 35–50. Group size varies greatly, especially between non-provisioned and provisioned groups. Large groups live in secondary forest, savanna and thorn-scrub vegetation, and urban habitats and temples; smaller groups live in primary forest and swamp and mangrove forests. Groups will break into subgroups during the day throughout their range. The composition of groups is multi-male/multi-female, but females outnumber males, with the male-to-female sex ratio varying between 1:2 and 1:5–6. Groups exhibit female philopatry, with males emigrating from the natal group at puberty; males leave the natal group as late juveniles or subadults, before the age of seven. On average, adult females and juveniles in groups are related at the level of cousins, whereas adult males are unrelated. Higher relatedness among females is expected due to female philopatry.
Social organization
Macaque social groups have a clear dominance hierarchy among females; these ranks are stable over a female's lifetime, and a matriline's rank may be sustained for generations. Matrilines create interesting group dynamics: for example, males are dominant to females at the individual level, but groups of closely related females can collectively hold some level of dominance over males. The position of dominant male within a group is often not stable, and males probably change troops several times during their lives; rank below the dominant male is not consistent or stable either, and males show sophisticated decision-making when it comes to transferring dominance.
Intergroup encounters
Direct encounters between adjacent non-provisioned troops are relatively rare which suggests mutual avoidance.
Interspecific behavior
Interactions have been reported between crab-eating macaques and southern pig-tailed macaques, Colobinae species, proboscis monkeys, gibbons, and orangutans. Dusky leaf monkeys, crab-eating macaques, and white-thighed surilis form tolerant foraging associations, with juveniles playing together. Crab-eating macaques have also been observed grooming Raffles' banded langurs in Malaysia.
Conflict
Group living in all species depends on the tolerance of other group members. In crab-eating macaques, successful social group living requires postconflict resolution. Usually, less dominant individuals lose to a higher-ranking individual when conflict arises. After the conflict has taken place, lower-ranking individuals tend to fear the winner of the conflict to a greater degree. In one study, this was seen in the ability to drink water together: postconflict observations showed a staggered interval between when the dominant individual began to drink and when the subordinate did. Long-term studies reveal the gap in drinking time closes as the conflict moves further into the past.
Grooming and support in conflict among primates is considered to be an act of reciprocal altruism. In crab-eating macaques, an experiment was performed in which individuals were given the opportunity to groom one another under three conditions: after being groomed by the other, after grooming the other, and without prior grooming. After grooming took place, the individual that received the grooming was much more likely to support their groomer than one that had not previously groomed that individual.
Crab-eating macaques demonstrate two of the three forms of suggested postconflict behavior. In both captive and wild studies, they demonstrated reconciliation, or an affiliative interaction between former opponents, and redirection, or acting aggressively towards a third individual. Consolation was not seen in any study performed.
When crab-eating macaques are approached by others while foraging, they tend to move away.
Postconflict anxiety has been reported in crab-eating macaques that have acted as the aggressor. After a conflict within a group, the aggressor appears to scratch itself at a higher rate than before the conflict. Though the scratching behavior cannot definitively be termed anxious behavior, evidence suggests this is the case. An aggressor's scratching decreases significantly after reconciliation, which suggests that reconciliation, rather than some property of the conflict itself, causes the reduction in scratching behavior. Though these results seem counterintuitive, the aggressor's anxiety appears to have a basis in the risk of ruining cooperative relationships with the opponent.
Kin altruism and spite
In a study, a group of crab-eating macaques was given ownership of a food object. Adult females favored their own offspring by passively, yet preferentially, allowing them to feed on the objects they held. When juveniles were in possession of an object, mothers robbed them and acted aggressively at an increased rate towards their own offspring compared to other juveniles. These observations suggest close proximity influences behavior in ownership, as a mother's kin are closer to her on average. When given a nonfood object and two owners, one kin and one not, the rival chooses the older individual to attack, regardless of kinship. Though the hypothesis remains that mother-juvenile relationships may facilitate social learning of ownership, the combined results clearly point to aggression towards the least-threatening individual.
A study was conducted in which food was given to 11 females. They were then given a choice to share the food with kin or nonkin. The kin altruism hypothesis suggests the mothers would preferentially give food to their own offspring. Yet eight of the 11 females did not discriminate between kin and nonkin; the remaining three did, in fact, give more food to their kin. The results suggest it was not kin selection but spite that fueled feeding kin preferentially, because food was given to kin for a significantly longer period of time than needed. The benefit to the mother is decreased due to less food availability for herself, and the cost remains great for nonkin due to not receiving food. If these results are correct, crab-eating macaques are unique in the animal kingdom, as they appear not only to behave according to kin selection theory but also to act spitefully toward one another.
Reproduction
After a gestation period of 162–193 days, the female gives birth to one infant. The infant's weight at birth is about . Infants are born with black fur which will begin to turn to a grey or reddish-brown shade (depending on the subspecies) after about three months of age. This natal coat may indicate to others the status of the infant, and other group members treat infants with care and rush to their defense when distressed. Immigrant males sometimes kill infants not their own in order to shorten interbirth intervals. High-ranking females will sometimes kidnap the infants of lower-ranking females. These kidnappings can result in the death of the infants, as the other female is usually not lactating. A young juvenile stays mainly with its mother and relatives. As male juveniles get older, they become more peripheral to the group. Here they play together, forming crucial bonds that may help them when they leave their natal group. Males that emigrate with a partner are more successful than those that leave alone. Young females, though, stay with the group and become incorporated into the matriline into which they were born.
Male crab-eating macaques groom females to increase the chance of mating. A female is more likely to engage in sexual activity with a male that has recently groomed her than with one that has not.
Studies have found that the dominant male copulates more than other males in the group. DNA tests indicate that dominant males sire most of the offspring in natural crab-eating macaque troops. Reproductive success in females is also linked to dominance: high-ranking females have more offspring over their lifetimes than low-ranking females, reproduce at a younger age, and their offspring have a higher chance of survival.
Diet
Crab-eating macaques are omnivorous frugivores and eat fruits, leaves, flowers, shoots, roots, invertebrates, and small animals in variable quantities. They eat durians, such as Durio graveolens and D. zibethinus, and are a major seed disperser for the latter species.
They exhibit particularly low tolerance for swallowing seeds, but spit seeds out if larger than . This decision to spit seeds is thought to be adaptive; it avoids filling the monkey's stomach with wasteful bulky seeds that cannot be used for energy.
Fruit makes up 40% to over 80% of the diet of wild crab-eating macaques, except in highly provisioned populations or highly disturbed environments.
Crab-eating macaques can become synanthropic, living off human resources by feeding in cropfields on young dry rice, cassava leaves, rubber fruit, taro plants, coconuts, mangos, and other crops, often causing significant losses to local farmers. In villages, towns, and cities, they frequently take food from garbage cans and refuse piles.
In Padangtegal, Bali, 70% of the macaques' diet is provisioned.
They become unafraid of humans in these conditions, which can lead to macaques directly taking food from people, both passively and aggressively.
Tool use
Crab-eating macaques are the only Old World monkey known to use stone tools in their daily foraging. This is mainly observed in populations along the coasts of Thailand and Myanmar (the M. f. aurea subspecies). An 1887 report described observations of tool use in a Myanmar population. Over 100 years later, the first modern report was published in 2007, describing crab-eating macaques in Thailand using axe-shaped stones to crack rock oysters, detached gastropods, bivalves, and swimming crabs. Also in Thailand, crab-eating macaques have been observed using tools to crack open oil palm nuts in abandoned plantations; the rapid uptake of oil palm nutcracking shows the macaques' ability to take advantage of anthropogenic changes, and the recent establishment of this behavior indicates the potential for macaques to exhibit cultural tendencies. Human activities, however, can negatively impact tool-using macaques, disrupting the persistence of these stone tool use traditions.
Another instance of tool use is washing and rubbing foods, such as sweet potatoes, cassava roots, and papaya leaves, before consumption. Crab-eating macaques either soak these foods in water or rub them through their hands as if to clean them. They also peel the sweet potatoes, using their incisors and canine teeth. Adolescents appear to acquire these behaviors by observational learning of older individuals.
Robbing and bartering
Robbing and bartering is a behavioral pattern in which free-ranging nonhuman primates spontaneously steal an object from a human and then hold onto that object until that or another human solicits an exchange by offering food. This behavior is seen in the crab-eating macaque population at Uluwatu in Bali, and is described as a population-specific behavioral practice, prevalent and persistent across generations and characterized by marked intergroup variation. Synchronized expression of robbing and bartering was socially influenced and, more specifically, explained by response facilitation. This result further supports the cultural nature of robbing and bartering.
Token-robbing and token/reward-bartering are cognitively challenging tasks for the Uluwatu macaques that revealed unprecedented economic decision-making processes, i.e., value-based token selection and payoff maximization. This spontaneous, population-specific, prevalent, cross-generational, learned and socially influenced practice may be the first example of a culturally maintained token economy in free-ranging animals.
Threats
The crab-eating macaque has been categorized as Endangered on the IUCN Red List; it is threatened by habitat loss due to rapid land use changes in the landscapes of Southeast Asia and the surging demand by the medical industry during the COVID-19 pandemic. A 2008 review of population trends suggested a need for better monitoring of populations due to increased wild trade and rising levels of human-macaque conflict, which continue to decrease overall population levels despite the species' wide distribution.
Each subspecies faces differing levels of threats, and too little information is available on some subspecies to assess their conditions. M. f. umbrosa is likely of important biological significance and has been recommended as a candidate for protection in the Nicobar Islands, where its small, native population has been seriously fragmented. It is listed as vulnerable on the IUCN Red List. The Philippine long-tailed macaque (M. f. philippensis) is listed as near threatened, and M. f. condorensis is vulnerable. All other subspecies are listed as data deficient and need further study; although recent work is showing M. f. aurea and M. f. karimondjawae need increased protection.
Trade
The crab-eating macaque is one of the most widely traded species of mammal listed on the CITES appendices. The international trade in crab-eating macaques is a multibillion-dollar industry. Crab-eating macaques are sold for up to $20,000 to $24,000, and prices rise when supply falls. International crab-eating macaque trade does not appear to follow a particular trend but continues to change over time, although peak exports often correlate with declarations of public health emergencies.
In the 1970s, India was the largest supplier of macaques, mostly rhesus macaques, but banned exports when it became apparent that the monkeys were being used to test military weapons. After this ban, crab-eating macaques began to be used more in biomedical research. Imports of crab-eating macaques into the US and elsewhere began to increase during the worldwide reduction, and subsequent ban, of rhesus macaque exports from India.
In the 1980s, crab-eating macaques were introduced to China and began being bred in captive facilities. Since then, captive macaques have been favored in biomedical trade.
In the 1990s, four major commercial monkey farms operated by Chinese entrepreneurs began exporting wild caught macaques as captive bred, and monkeys smuggled from Laos and Cambodia were likely part of these transactions.
By 2001, China was exporting significantly more crab-eating macaques than rhesus macaques. Cambodia grants harvest permits to five monkey farms to breed crab-eating macaques for export. Crab-eating macaque harvesting began to accelerate as farms and holding areas were established near protected areas. At this time, international trade of crab-eating macaques expanded rapidly.
Between 2000 and 2018, the US was the largest importer of crab-eating macaques, accounting for 41.7% to 70.1% of imports. Other major importers included France (up to 17.1%), Great Britain (up to 15.9%), Japan (up to 37.9%), and China (up to 33.5%). During this time, China was the largest exporter of crab-eating macaques. Other exporters include Mauritius, Laos, Cambodia, Thailand, Indonesia, and Vietnam. Between 2008 and 2019, at least 450,000 live crab-eating macaques and over 700,000 specimens were traded, with over 50,000 identified as wild-caught.
After 2018, Cambodia became the largest exporter of crab-eating macaques, contributing 59% of all macaques traded in 2019 and 2020. Between 2019 and 2020, Chinese crab-eating macaque trade decreased by 96%. China banned animal trade in January 2020 due to concerns over COVID-19, yet this cannot account for the significant decrease in crab-eating macaque exports in 2019; the drivers of this decline are still unclear.
Crab-eating macaques are one of the most commonly internationally traded mammals and are also the most common primates in domestic trade, most often for pets or food. Macaques are regularly sold and kept as pets in China, Vietnam, and Indonesia. In Indonesia, pet macaques are usually taken from the wild, which had been illegal since 2009, but in 2021 the Indonesian government lifted the harvest ban and reinstated a harvest quota. In Indonesia, crab-eating macaques and pig-tailed macaques are the only primates not included in the list of protected species. Often, infants and juveniles are caught and sold in wildlife markets.
Laundering ring
In November 2022, following a five-year investigation by the DoJ and the US Fish and Wildlife Service, the DoJ indicted Cambodian government officials and the Cambodian owner and staff of Vanny Bio Research Corporation Ltd, a macaque breeding center in Cambodia, for their alleged involvement in laundering wild-caught monkeys as captive-bred. Charles River Laboratories is also under investigation. The crab-eating macaques involved in the Cambodian smuggling ring imported by Charles River are in limbo – they are ineligible for research, but they cannot go back to the wild either. This laundering is a sophisticated trans-border wildlife trafficking network: crab-eating macaques are harvested in places like Cambodia, Laos, and Myanmar, then laundered through Vietnam and illegally smuggled to places like China.
Conservation
The crab-eating macaque is listed on CITES Appendix II.
Its IUCN Red List status was uplisted in 2020 and again in 2022 from the 2008 classification of Least Concern, as a result of a declining population driven by hunting and troublesome interactions with humans, despite the species' wide range and ability to adapt to different habitats. These interactions include the skyrocketing demand for crab-eating macaques by the medical industry during the COVID-19 pandemic and the rapid development of the landscape in Southeast Asia.
The Long-Tailed Macaque Project and The Macaque Coalition are engaged in conservation of the crab-eating macaque through research and public engagement.
Relationship with humans
Crab-eating macaques extensively overlap with humans across their range in Southeast Asia. Consequently, they live together in many locations. Some of these areas are associated with religious sites and local customs, such as the monkey forests and temples of Bali in Indonesia, Thailand, and Cambodia, while other areas are characterized by conflict as a result of habitat loss and competition over food and space. Humans and crab-eating macaques have shared environments since prehistoric times, and both tend to frequent forest and river edge habitats. Crab-eating macaques are occasionally used as a food source for some indigenous forest-dwelling peoples. In Mauritius, they are captured and sold to the pharmaceutical industry, and in Angaur island in Palau, they are sold as pets. Macaques feed on sugarcane and other crops, affecting agriculture and livelihoods, and can be aggressive towards humans. Macaques may carry potentially fatal human diseases, including herpes B virus. In Singapore, they have adapted into the urban environment.
In places like Thailand and Singapore, human-macaque conflict task forces have been created to try to resolve some of these conflicts.
In scientific research
M. fascicularis is also used extensively in medical experiments, in particular those connected with neuroscience and disease. Due to their close physiology, they can share infections with humans. Some cases of concern have been an isolated event of Reston ebolavirus found in a captive-bred population shipped to the US from the Philippines, which was later found to be a strain of Ebola that has no known pathological consequences in humans, unlike the African strains. Furthermore, they are a known carrier of monkey B virus (Herpesvirus simiae), a virus which has produced disease in some lab workers working mainly with rhesus macaques (M. mulatta). Plasmodium knowlesi, which causes malaria in M. fascicularis, can also infect humans. A few cases have been documented in humans, but for how long humans have been getting infections of this malarial strain is unknown. It is, therefore, not possible to assess if this is a newly emerging health threat, or if just newly discovered due to improved malarial detection techniques. Given the long history of humans and macaques living together in Southeast Asia, it is likely the latter.
Crab-eating macaques are one of the most popular species used for scientific research. They are used primarily by the biotechnology and pharmaceutical industry in the evaluation of pharmacokinetics, pharmacodynamics, efficacy, and safety of new biologics and drugs; they are also used in infectious disease, TB, HIV/AIDS, and neuroscience studies.
The use of crab-eating macaques and other nonhuman primates in experimentation is controversial with critics charging that the experiments are cruel, unnecessary and lead to dubious findings. One of the most well known examples of experiments on crab-eating macaques is the 1981 Silver Spring monkeys case.
In 2014, 21,768 crab-eating macaques were imported in the United States to be used in experimentation.
Clones
On 24 January 2018, scientists in China reported in the journal Cell the creation of two crab-eating macaque clones, named Zhong Zhong and Hua Hua, using the complex DNA transfer method that produced Dolly the sheep.
Abuse scandal
In June 2023, the BBC exposed a global online network of sadists who shared videos of baby long-tailed macaques being tortured by caretakers in Indonesia. There were many torture methods, from teasing the primates with baby bottles to killing them in blenders, sawing them in half, or cutting off their tails and limbs. Enthusiasts would pay the caretakers to film videos torturing the macaques. The investigation led to prison sentences and police searches in both Indonesia and the United States, where many of the torture enthusiasts were located.
See also
Maggie the Macaque
Prostitution among animals
References
External links
Bonadio, C. 2000. "Macaca fascicularis" (On-line), Animal Diversity Web. Accessed March 10, 2006.
Primate Info Net Macaca fascicularis Factsheet
ISSG Database: Ecology of Macaca fascicularis
Primate Info Net: Macaca fascicularis
"Conditions at Nafovanny", video produced by the British Union for the Abolition of Vivisection following an undercover investigation at a captive-breeding facility for long-tailed macaques in Vietnam.
Primates of Southeast Asia
Mammals of Oceania
Crab-eating macaque
Mammals of Bangladesh
Primates of Borneo
Mammals of Brunei
Mammals of Myanmar
Mammals of Cambodia
Mammals of Timor
Mammals of Indonesia
Mammals of Laos
Mammals of Malaysia
Mammals of the Philippines
Mammals of Singapore
Mammals of Thailand
Mammals of Vietnam
Mammals of Fiji
Mammals of Samoa
Mammals of Tonga
Articles containing video clips | Crab-eating macaque | Biology | 8,444 |
2,952,577 | https://en.wikipedia.org/wiki/Safety%20integrity%20level | In functional safety, safety integrity level (SIL) is defined as the relative level of risk-reduction provided by a safety instrumented function (SIF), i.e. the measurement of the performance required of the SIF.
In the functional safety standards based on the IEC 61508 standard, four SILs are defined, with SIL4 being the most dependable and SIL1 the least. The applicable SIL is determined based on a number of quantitative factors in combination with qualitative factors, such as risk assessments and safety lifecycle management. Other standards, however, may have different SIL number definitions.
SIL allocation
Assignment, or allocation, of SIL is an exercise in risk analysis in which the risk associated with a specific hazard, which is intended to be protected against by a SIF, is calculated without the beneficial risk reduction effect of the SIF. That unmitigated risk is then compared against a tolerable risk target. If the unmitigated risk is higher than tolerable, the difference must be addressed through risk reduction provided by the SIF. This amount of required risk reduction is correlated with the SIL target. In essence, each order of magnitude of risk reduction that is required correlates with an increase in SIL, up to a maximum of SIL 4. Should the risk assessment establish that the required SIL cannot be achieved by a SIL 4 SIF, then alternative arrangements must be designed, such as non-instrumented safeguards (e.g., a pressure relief valve).
There are several methods used to assign a SIL. These are normally used in combination, and may include:
Risk matrices
Risk graphs
Layer of protection analysis (LOPA)
Of the methods presented above, LOPA is by far the most commonly used in large industrial facilities, such as chemical process plants.
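Whichever method is used, the underlying arithmetic compares the unmitigated event frequency with the tolerable frequency. The following sketch illustrates this in Python with purely hypothetical frequencies; a real allocation would follow a documented LOPA or risk-graph procedure and the project's own tolerable-risk criteria.

    # Hypothetical allocation: the frequencies below are illustrative only.
    unmitigated_frequency = 5e-2   # dangerous events per year, without the SIF
    tolerable_frequency = 1e-4     # tolerable event frequency (risk target)

    required_rrf = unmitigated_frequency / tolerable_frequency  # = 500
    max_pfd = 1.0 / required_rrf                                # = 2e-3

    print(f"Required risk reduction factor: {required_rrf:.0f}")
    print(f"Maximum average PFD of the SIF: {max_pfd:.0e}")
    # A PFD of 2e-3 lies in the band 1e-3 <= PFD < 1e-2, i.e. a SIL 2
    # target under the IEC 61508 low-demand bands given below.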
The assignment may be tested using both pragmatic and controllability approaches, applying industry guidance such as the one published by the UK HSE. SIL assignment processes that use the HSE guidance to ratify assignments developed from Risk Matrices have been certified to meet IEC 61508.
Problems
There are several problems inherent in the use of safety integrity levels. These can be summarized as follows:
Poor harmonization of definition across the different standards bodies which utilize SIL.
Process-oriented metrics for derivation of SIL.
Estimation of SIL based on reliability estimates.
System complexity, particularly in software systems, making SIL estimation difficult to impossible.
These lead to such erroneous statements as the tautology "This system is a SIL N system because the process adopted during its development was the standard process for the development of a SIL N system", or use of the SIL concept out of context such as "This is a SIL 3 heat exchanger" or "This software is SIL 2". According to IEC 61508, the SIL concept must be related to the dangerous failure rate of a system, not just its failure rate or the failure rate of a component part, such as the software. Definition of the dangerous failure modes by safety analysis is intrinsic to the proper determination of the failure rate.
SIL types and certification
The International Electrotechnical Commission's (IEC) standard IEC 61508 defines SIL using requirements grouped into two broad categories: hardware safety integrity and systematic safety integrity. A device or system must meet the requirements for both categories to achieve a given SIL.
The SIL requirements for hardware safety integrity are based on a probabilistic analysis of the device. In order to achieve a given SIL, the device must meet targets for the maximum probability of dangerous failure and a minimum safe failure fraction. The concept of 'dangerous failure' must be rigorously defined for the system in question, normally in the form of requirement constraints whose integrity is verified throughout system development. The actual targets required vary depending on the likelihood of a demand, the complexity of the device(s), and types of redundancy used.
PFD (probability of dangerous failure on demand) and RRF (risk reduction factor) of low demand operation for different SILs as defined in IEC EN 61508 are as follows:
SIL 4: PFD ≥ 10^-5 to < 10^-4 (RRF 10,000 to 100,000)
SIL 3: PFD ≥ 10^-4 to < 10^-3 (RRF 1,000 to 10,000)
SIL 2: PFD ≥ 10^-3 to < 10^-2 (RRF 100 to 1,000)
SIL 1: PFD ≥ 10^-2 to < 10^-1 (RRF 10 to 100)
For continuous operation, these change to the following, where PFH is the probability of dangerous failure per hour:
SIL 4: PFH ≥ 10^-9 to < 10^-8
SIL 3: PFH ≥ 10^-8 to < 10^-7
SIL 2: PFH ≥ 10^-7 to < 10^-6
SIL 1: PFH ≥ 10^-6 to < 10^-5
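To make the banding concrete, here is a minimal Python sketch that maps a low-demand average PFD onto its SIL band using the ranges above; note that a PFD below 10^-5 falls outside the defined SIL 1–4 bands.

    def sil_for_pfd(pfd):
        """Return the low-demand SIL band (1-4) for an average PFD under
        IEC 61508, or None if the PFD falls outside the defined bands."""
        bands = [
            (4, 1e-5, 1e-4),  # SIL 4: 1e-5 <= PFD < 1e-4
            (3, 1e-4, 1e-3),  # SIL 3: 1e-4 <= PFD < 1e-3
            (2, 1e-3, 1e-2),  # SIL 2: 1e-3 <= PFD < 1e-2
            (1, 1e-2, 1e-1),  # SIL 1: 1e-2 <= PFD < 1e-1
        ]
        for sil, low, high in bands:
            if low <= pfd < high:
                return sil
        return None  # better than SIL 4, or too unreliable for any SIL

    assert sil_for_pfd(2e-3) == 2
    assert sil_for_pfd(5e-4) == 3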
Hazards of a control system must first be identified, then analysed through risk analysis. Mitigation of these risks continues until their overall contribution to the hazard is considered acceptable. The tolerable level of these risks is specified as a safety requirement in the form of a target 'probability of a dangerous failure' in a given period of time, stated as a discrete SIL.
Certification schemes, such as the CASS Scheme (Conformity Assessment of Safety-related Systems) are used to establish whether a device meets a particular SIL. Third parties that can provide certification include Bureau Veritas, CSA Group, TÜV Rheinland, TÜV SÜD and UL among others. Self-certification is also possible. The requirements of these schemes can be met either by establishing a rigorous development process, or by establishing that the device has sufficient operating history to argue that it has been proven in use. Certification is achieved by proving the functional safety capability (FSC) of the organization, usually by assessment of its functional safety management (FSM) program, and the assessment of the design and life-cycle activities of the product to be certified, which is conducted based on specifications, design documents, test specifications and results, failure rate predictions, FMEAs, etc.
Electric and electronic devices can be certified for use in functional safety applications according to IEC 61508. There are a number of application-specific standards based on or adapted from IEC 61508, such as IEC 61511 for the process industry sector. This standard is used in the petrochemical and hazardous chemical industries, among others.
Standards
The following standards use SIL as a measure of reliability and/or risk reduction.
ANSI/ISA S84 (functional safety of safety instrumented systems for the process industry sector)
IEC 61508 (functional safety of electrical/electronic/programmable electronic safety related systems)
IEC 61511 (implementing IEC 61508 in the process industry sector)
IEC 61513 (implementing IEC 61508 in the nuclear industry)
IEC 62061 (implementing IEC 61508 in the domain of machinery safety)
EN 50128 (railway applications – software for railway control and protection)
EN 50129 (railway applications – safety related electronic systems for signalling)
EN 50657 (railway applications – software on board of rolling stock)
EN 50402 (fixed gas detection systems)
ISO 26262 (automotive industry)
MISRA (guidelines for safety analysis, modelling, and programming in automotive applications)
See also
As low as reasonably practicable (ALARP)
High-integrity pressure protection system (HIPPS)
Reliability engineering
Spurious trip level (STL)
References
Further reading
Hartmann, H.; Thomas, H.; Scharpf, E. (2022). Practical SIL Target Selection – Risk Analysis per the IEC 61511 Safety Lifecycle. Exida.
Houtermans, M.J.M. (2014). SIL and Functional Safety in a Nutshell (2nd ed.). Prime Intelligence. ASIN B00MTWSBG2
Medoff, M.; Faller, R. (2014). Functional Safety – An IEC 61508 SIL 3 Compliant Development Process (3rd ed.). Exida.
Punch, Marcus (2013). Functional Safety for the Mining and Machinery-based Industries (2nd ed.). Tenambit, N.S.W.: Marcus Punch.
External links
61508.org - The 61508 Association
Functional Safety, A Basic Guide
IEC Safety and functional safety - The IEC functional safety site
Safety Integrity Level Manual (Archived) - Pepperl+Fuchs SIL Manual
Process safety
Safety | Safety integrity level | Chemistry,Engineering | 1,641 |
11,407,637 | https://en.wikipedia.org/wiki/Adenocarcinoma%20in%20situ%20of%20the%20lung | Adenocarcinoma in situ (AIS) of the lung – previously included in the category of "bronchioloalveolar carcinoma" (BAC) – is a subtype of lung adenocarcinoma. It tends to arise in the distal bronchioles or alveoli and is defined by a non-invasive growth pattern. This small solitary tumor exhibits pure alveolar distribution (lepidic growth) and lacks any invasion of the surrounding normal lung. If completely removed by surgery, the prognosis is excellent with up to 100% 5-year survival.
Although the entity of AIS was formally defined in 2011, it represents a noninvasive form of pulmonary adenocarcinoma which has been recognized for some time. AIS is not considered to be an invasive tumor by pathologists, but as one form of carcinoma in situ (CIS). Like other forms of CIS, AIS may progress and become overtly invasive, exhibiting malignant, often lethal, behavior. Major surgery, either a lobectomy or a pneumonectomy, is usually required for treatment.
Causes
The genes mutated in AIS differ based on exposure to tobacco smoke. Non-smokers with AIS commonly have mutations in EGFR (a driver) or HER2 (an important oncogene), or a gene fusion with ALK or ROS1 as one of the elements.
Mechanism
Nonmucinous AIS is thought to derive from a transformed cell in the distal airways and terminal respiratory units, and often shows features of club cell or Type II pneumocyte differentiation. Mucinous AIS, in contrast, probably derives from a transformed glandular cell in distal bronchioles.
A multi-step carcinogenesis hypothesis suggests a progression from pulmonary atypical adenomatous hyperplasia (AAH) through AIS to invasive adenocarcinoma (AC), but to date this has not been formally demonstrated.
Type-I cystic adenomatoid malformation (CAM) has recently been identified as a precursor lesion for the development of mucinous AIS, but these cases are rare.
Rarely, AIS may develop a rhabdoid morphology due to the development of dense perinuclear inclusions.
Diagnosis
The criteria for diagnosing pulmonary adenocarcinoma have changed considerably over time. The 2011 IASLC/ATS recommendations, adopted in the 2015 WHO guidelines, use the following criteria for adenocarcinoma in situ:
tumor ≤3 cm
solitary tumor
pure "lepidic" growth*
No stromal, vascular, or pleural invasion
No histologic patterns of invasive adenocarcinoma
No spread through air spaces
Cell type mostly nonmucinous
Minimal/absent nuclear atypia
± septal widening with sclerosis/elastosis
* lepidic = (i.e. scaly covering) growth pattern along pre-existing airway structures
By this standard, AIS cannot be diagnosed from core biopsy or cytology sampling. Recommended practice is to report biopsy findings previously classified as nonmucinous BAC as adenocarcinoma with lepidic pattern, and those previously classified as mucinous BAC as mucinous adenocarcinoma.
Classification
The most recent 2015 World Health Organization (WHO) and 2011 International Association for the Study of Lung Cancer (IASLC) / American Thoracic Society (ATS) guidelines refine pulmonary adenocarcinoma subtypes in order to correspond to advances in personalized cancer treatment.
AIS is considered a pre-invasive malignant lesion that, after further mutation and progression, is thought to progress into an invasive adenocarcinoma. Therefore, it is considered a form of carcinoma in situ (CIS).
There are other classification systems that have been proposed for lung cancers. The Noguchi classification system for small adenocarcinomas has received considerable attention, particularly in Japan, but has not been nearly as widely applied and recognized as the WHO system.
AIS may be further subclassified by histopathology, by which there are two major variants:
mucinous (20–25% of cases)
nonmucinous (75–80% of cases)
Treatment
This information is mostly in reference to the now outdated entity of BAC, which included some invasive forms of disease.
The treatment of choice in any patient with BAC is complete surgical resection, typically via lobectomy or pneumonectomy, with concurrent ipsilateral lymphadenectomy.
Non-mucinous BAC are highly associated with classical EGFR mutations, and thus are often responsive to targeted chemotherapy with erlotinib and gefitinib. K-ras mutations are rare in nm-BAC.
Mucinous BAC, in contrast, is much more highly associated with K-ras mutations and wild-type EGFR, and are thus usually insensitive to the EGFR tyrosine kinase inhibitors. In fact, there is some evidence that suggests that the administration of EGFR-pathway inhibitors to patients with K-ras mutated BAC may even be harmful.
Prognosis
This information is mostly in reference to the now outdated entity of BAC, which included some invasive forms of disease.
Taken as a class, long-term survival rates in BAC tend to be higher than those of other forms of NSCLC. This better prognosis can be partially attributed to the localized presentation of the disease, though other factors may also play a role. The prognosis of BAC depends upon the histological subtype and the extent of disease at presentation, but the prognostic factors are otherwise generally the same as for other NSCLC.
Recent research has made it clear that nonmucinous and mucinous BAC are very different types of lung cancer. Mucinous BAC is much more likely to present with multiple unilateral tumors and/or in a unilateral or bilateral pneumonic form than nonmucinous AIS. The overall prognosis for patients with mucinous AIS is significantly worse than for patients with nonmucinous AIS.
Although data are scarce, some studies suggest that survival rates are even lower in the mixed mucinous/non-mucinous variant than in the monophasic forms.
In non-mucinous BAC, neither club cell nor type II pneumocyte differentiation appears to affect survival or prognosis.
Recurrence
When BAC recurs after surgery, the recurrences are local in about three-quarters of cases, a rate higher than other forms of NSCLC, which tends to recur distantly.
Epidemiology
Information about the epidemiology of AIS is limited, due to changes in the definition of this disease and its separation from the BAC category.
Under the new, more restrictive WHO criteria for lung cancer classification, AIS is now diagnosed much less frequently than it was in the past. Recent studies suggest that AIS comprises between 3% and 5% of all lung carcinomas in the U.S.
Incidence
The incidence of bronchioloalveolar carcinoma has been reported to vary from 4–24% of all lung cancer patients. An analysis of the Surveillance, Epidemiology, and End Results (SEER) registry by Read et al. revealed that, although the incidence of BAC increased over the past two decades, it still constituted less than 4% of NSCLC in every time interval. This variation in reported incidence has been attributed to the complex histopathology of the cancer. While pure BAC is rare, the increase in incidence seen in various studies may be due to unclear histological classification until the WHO issued its classifications in 1999 and 2004.
Another distinguishing feature of BAC is that it afflicts men and women in equal proportions; some recent studies even suggest a slightly higher incidence among women.
History
The criteria for classifying lung cancer have changed considerably over time, becoming progressively more restrictive.
In 2011, the IASLC/ATS/ERS classification recommended discontinuing the BAC classification altogether, as well as the category of mixed subtype adenocarcinoma. This change was made because the term BAC was being broadly applied to small solitary noninvasive tumors, minimally invasive adenocarcinoma, mixed subtype invasive adenocarcinoma, and even widespread disease. In addition to creating the new AIS and minimally-invasive categories, the guidelines recommend new terminology to clearly denote predominantly-noninvasive adenocarcinoma with mild invasion (lepidic predominant adenocarcinoma), as well as invasive mucinous adenocarcinoma in place of mucinous BAC.
Additional images
Mucinous BAC
Non-mucinous BAC
See also
Atypical adenomatous hyperplasia of the lung
Minimally invasive adenocarcinoma of the lung
Adenocarcinoma of the lung
References
External links
Rare cancers
Lung cancer
Histopathology | Adenocarcinoma in situ of the lung | Chemistry | 1,896 |
48,139,617 | https://en.wikipedia.org/wiki/Interstellar%20Probe%20%281999%29 | Interstellar Probe is the name of a 1999 space probe concept by NASA intended to travel out 200 AU in 15 years. This 1999 study by Jet Propulsion Laboratory is noted for its circular 400-meter-diameter solar sail as a propulsion method (1 g/m2) combined with a 0.25 AU flyby of the Sun to achieve higher solar light pressure, after which the sail is jettisoned at 5 AU distance from the Sun.
Solar sail
Solar sails work by transferring the momentum of light to the spacecraft, thus propelling it. Felix Tisserand noted the effect of light pressure on comet tails in the 1800s.
The study by the NASA Jet Propulsion Laboratory proposed using a solar sail to accelerate a spacecraft to reach the interstellar medium. It was planned to reach as far as 200 AU in about 15 years at a speed of 14 AU/year (about 70 km/s) and function up to 400+ AU. A critical technology for the mission is a large 1 g/m2 solar sail.
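As a sanity check on these figures, the quoted cruise speed and travel time follow from simple unit conversions (the constants below are the standard kilometre length of an astronomical unit and the seconds in a Julian year):

    AU_KM = 1.495978707e8         # kilometres per astronomical unit
    YEAR_S = 365.25 * 24 * 3600   # seconds per Julian year

    speed_km_s = 14 * AU_KM / YEAR_S
    print(f"{speed_km_s:.0f} km/s")        # ~66 km/s, i.e. roughly 70 km/s

    years_to_200_au = 200 / 14
    print(f"{years_to_200_au:.1f} years")  # ~14.3 years, consistent with ~15 years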
In the following years there were additional studies, including the Innovative Interstellar Explorer (published 2003), which focused on a design using RTGs powering an ion engine rather than a solar sail. Another project in this field for advanced spaceflight during this period was the Breakthrough Propulsion Physics Program which ran from 1996 through 2002.
Later examples of solar sail-propelled spacecraft include IKAROS, NanoSail-D2, and LightSail. Near-Earth Asteroid Scout is a planned light sail-propelled mission. For comparison, the LightSail spacecraft uses a sail 5 microns in thickness, whereas the study predicted a sail 1 micron thick would be needed for interstellar travel.
Other design features
The probe would use an advanced radioisotope thermoelectric generator (RTG) for electrical power, Ka band radio for communication with Earth, a Delta 2 rocket for Earth launch, and a 25 kg instrument package using 20 watts.
Objectives
Historical view of region
See also
Interstellar probe (generic)
Breakthrough Starshot, a fleet of small light sail spacecraft
TAU (spacecraft) (1980s era interstellar precursor and astrometry probe)
Stardust (spacecraft) (Believed to have collected some interstellar micro-dust)
Interstellar Boundary Explorer (Space observatory that detects neutral atoms from beyond)
References
External links
NASA - Interstellar Probe
Proposed space probes
Interstellar travel
1999 in science | Interstellar Probe (1999) | Astronomy | 485 |
12,737,059 | https://en.wikipedia.org/wiki/Mansfield%20and%20Sutton%20Astronomical%20Society | Mansfield and Sutton Astronomical Society (MSAS) is an amateur astronomical society in the East Midlands of England. It was formed in 1969. It is based at Sherwood Observatory, which houses a 61 cm reflecting telescope that the society owns and operates. The observatory lies 4 km south-west of the centre of Mansfield, on one of the highest points in the county of Nottinghamshire.
The society is a member of The Federation of Astronomical Societies.
Aims
The aims of the society are to:
further the interests of Astronomy and related subjects within the local community
introduce the public to the subject of Astronomy
provide a forum for education in Astronomy and observational techniques through a collaboration with the University of Nottingham
provide members with the best observational equipment possible.
Meetings
The society holds monthly members-only lecture meetings at the observatory, along with observing and training evenings for members.
Outreach
The society runs a night school for those who wish to learn about astronomy and the universe. These are usually held at the observatory on Friday evenings.
Funding
MSAS is a registered charity. It is funded through member subscriptions, fund raising events, public open evenings held at the society's Observatory, charitable donations and grants.
Patrons
The patrons of the society are:
Professor Sir Francis Graham Smith, 13th Astronomer Royal (1982-1990).
Professor Michael R. Merrifield, School of Physics and Astronomy, University of Nottingham.
See also
List of astronomical societies
References
External links
Mansfield and Sutton Astronomical Society Official website.
Mansfield & Sutton Astronomical Society Yahoo Distribution Group.
Federation of Astronomical Societies (FAS).
Amateur astronomy organizations
Ashfield District
British astronomy organisations
Mansfield District
Organisations based in Nottinghamshire
Science and technology in Nottinghamshire | Mansfield and Sutton Astronomical Society | Astronomy | 326 |
28,037,308 | https://en.wikipedia.org/wiki/Newman%E2%80%93Kwart%20rearrangement | The Newman–Kwart rearrangement is a type of rearrangement reaction in which the aryl group of an O-aryl thiocarbamate, ArOC(=S)NMe2, migrates from the oxygen atom to the sulfur atom, forming an S-aryl thiocarbamate, ArSC(=O)NMe2. The reaction is named after its discoverers, Melvin Spencer Newman and Harold Kwart. The reaction is a manifestation of the double bond rule. The Newman–Kwart reaction represents a useful synthetic tool for the preparation of thiophenol derivatives.
Mechanism
The Newman–Kwart rearrangement is intramolecular. It is generally believed to be a concerted process, proceeding via a four-membered cyclic transition state (rather than a two-step process passing through a discrete reactive intermediate). The enthalpy of activation for this transition state is generally quite high for typical substrates (ΔH‡ ~ 30 to 40 kcal/mol), necessitating high reaction temperatures (200 to 300 °C, with Ph2O as solvent or neat).
A Pd-catalyzed process and conditions under photoredox catalysis (both proceeding through complex multistep mechanisms) are known. These catalytic processes allow for much milder reaction conditions to be used (100 °C for Pd catalysis, ambient temperature for photoredox).
Use for preparation of thiophenols
The Newman–Kwart rearrangement is an important prelude to the synthesis of thiophenols. A phenol (1) is deprotonated with a base, followed by treatment with a thiocarbamoyl chloride (2), to form an O-aryl thiocarbamate (3). Heating 3 to around 250 °C causes it to undergo the Newman–Kwart rearrangement to an S-aryl thiocarbamate (4). Alkaline hydrolysis or similar cleavage yields a thiophenol (5).
See also
Smiles rearrangement
Chapman rearrangement
References
Rearrangement reactions
Name reactions | Newman–Kwart rearrangement | Chemistry | 435 |
5,739,723 | https://en.wikipedia.org/wiki/Critical%20group | In mathematics, in the realm of group theory, a group is said to be critical if it is not in the variety generated by all its proper subquotients, which includes all its subgroups and all its quotients.
Any finite monolithic A-group is critical. This result is due to Kovacs and Newman. But not every monolithic group is critical.
The variety generated by a finite group has a finite number of nonisomorphic critical groups.
References
Properties of groups | Critical group | Mathematics | 102 |
15,559,385 | https://en.wikipedia.org/wiki/Tactile%20discrimination | Tactile discrimination is the ability to differentiate information through the sense of touch. The somatosensory system is the nervous system pathway that is responsible for this essential survival ability used in adaptation. There are various types of tactile discrimination. One of the most well known and most researched is two-point discrimination, the ability to differentiate between two different tactile stimuli which are relatively close together. Other types of discrimination like graphesthesia and spatial discrimination also exist but are not as extensively researched. Tactile discrimination is something that can be stronger or weaker in different people and two major conditions, chronic pain and blindness, can affect it greatly. Blindness increases tactile discrimination abilities which is extremely helpful for tasks like reading braille. In contrast, chronic pain conditions, like arthritis, decrease a person's tactile discrimination. One other major application of tactile discrimination is in new prosthetics and robotics which attempt to mimic the abilities of the human hand. In this case tactile sensors function similarly to mechanoreceptors in a human hand to differentiate tactile stimuli.
Pathways
Somatosensory system
The somatosensory system includes multiple types of sensations from the body, including light touch, pain, pressure, temperature, and joint/muscle sense. These are categorized into three different areas: discriminative touch, pain and temperature, and proprioception. Discriminative touch includes touch, pressure, the ability to recognize vibrations, etc. Pain and temperature includes the perception of pain, the amount of pain, and the severity of temperatures; this category also includes itching and tickling. Proprioception includes receptors for everything that occurs below the surface of the skin, including sensations from various muscles, joints, and tendons. Each of these three categories has its own types of pathways and receptors. These pathways target the cerebellum in the brain, which tracks what the muscles are doing at all times, so any damage to this area can greatly affect one's senses.
Within each somatosensory pathway there are three types of neurons: pseudounipolar (primary afferent) neurons, secondary afferent neurons, and tertiary afferent neurons. There are also slowly adapting receptors, which sense sustained indentations made on the skin, and rapidly adapting receptors. An example of a slowly adapting receptor in use is when a person breaks an arm: the arm is immobilized until it is healed, and the person does not want to forget that it is broken and do something that could worsen the damage. An example of a rapidly adapting receptor in use is putting on clothes. Initially you feel the clothes being worn, but after a while you forget you are wearing them; it is not at the forefront of the brain to focus on the feeling of the clothes on your body, but if you were to concentrate on that feeling, you could instantly feel the contact between your skin and the clothing.
Discriminative touch system
The discriminative touch system deals with everything from the toes to the neck through the spinal cord. The sensation experienced enters the periphery via sensory axons. The signal passes along the axon from the distal to the proximal process; the proximal end of the axon leads into the dorsal half of the spinal cord and then on toward the brain. The axons carrying the signal toward the spinal cord and brain are classified as primary afferents, which makes sense, as 'afferent' means conducting toward something: these neurons are sending signals toward the brain. The neurons that receive their synapses are classified as secondary afferents. These neurons go to the thalamus and then synapse onto another set of neurons that project to the cerebral cortex.
Types of receptors
There are many types of receptors in the somatosensory pathway including:
Peripheral mechanoreceptors - Activation of these receptors is the initial step in recognizing a stimulus. An indentation, as stated before, becomes an electrical signal in the peripheral process of a primary afferent neuron. This creates a depolarization across the membrane of the neuron, leading to an action potential that travels to the cerebellum of the brain to initiate an action.
Merkel's disks - Slowly adapting receptors located in the upper part of the dermis, found on the fingertips as well as the eyelids.
Meissner's corpuscles - Rapidly adapting receptors, also located in the upper part of the dermis, found on hairless skin including the lips and the eyelids.
Thermoreceptors - Receptors that detect temperature. Mammals have two types: one identifies temperatures higher than body temperature, and the other identifies temperatures lower than it.
Types of tactile discrimination
Stereognosis
Stereognosis (tactile gnosis) is defined as the ability to distinguish and identify objects via touch in the absence of visual or auditory contact. The subject needs to be able to recognize temperature, spatial properties, texture, and size to reach an accurate conclusion about what the object is. This type of tactile discrimination gives an indication of the status of the parietal lobe of the brain. When conducting this test, common objects that the subject is familiar with are used in order to ensure an accurate reading and consistency across multiple tests with multiple subjects. By utilizing this form of tactile discrimination, practitioners are able to detect and track the presence or effects of neurodegenerative diseases such as Alzheimer's disease through astereognosis, which is the failure to recognize objects via touch without visual recognition.
Graphesthesia
Graphesthesia is the ability of a person to recognize a number or letter written on the skin. Like other tactile discrimination tests, the test for this is a measurement of the patient's sense of touch and requires that the patient perform the test voluntarily and without visual contact. The purpose of this form of tactile discrimination is to detect any defects in the central nervous system, such as lesions in the brainstem, spinal cord, thalamus, or sensory cortex. In order for this test to be carried out successfully, it is imperative that the subject's primary sensations be fully functional; a severe lesion in the central nervous system would suggest a loss of primary sensation. It is also important that the practitioner and the patient communicate ahead of time about the orientation of the characters, as well as where on the body the figures are to be drawn (usually on the palms of the hand). To make this tactile discrimination more flexible, the patient may select the correct answer from a series of images in lieu of communicating verbally if the patient suffers from a speech or language impairment. The graphesthesia test is also more versatile than the stereognosis test, since it does not require the patient to be able to grasp an object.
Two-point discrimination
Two-point discrimination (2PD) is a neurological examination in which two sharp points are applied to the surface of a part of the body in order to see if the patient recognizes them as two discrete sensations. The two-point threshold is the smallest distance between the two points that the patient can recognize. By conducting this form of tactile discrimination, it is believed that practitioners will be able to discern the relative amount of nerves in the tested location. When conducting the procedure on the desired part of the body, the practitioner may apply both points simultaneously or with just one point. The practitioner may switch between the two at random. In order for the examination to be conducted in the most proper fashion, it is imperative that there be clear and open communication between the subject and the practitioner with the subject being fully conscious and not under any sort of influence while at the same time not making visual contact with the device. The efficacy of Two-point discrimination has come under scrutiny from many researchers despite being commonly used to this day in a clinical setting. Research studies have shown that the 2PD test does a poor job of determining the degree to which the nerves regain their function after damage, as well as determining the sensory failures in the first place, owing to this form of tactile discrimination's simplicity, crudeness, and dependence on anecdotal evidence. The research studies have also shown that there is a discrepancy between the data obtained from 2PD tests and data obtained from other tests used to measure tactile spatial acuity.
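As an illustration only, a two-point threshold can be estimated from trial data by interpolating the separation at which the subject reports "two points" on 50% of trials. The distances and response rates below are hypothetical, and clinical testing typically uses simpler ascending/descending procedures rather than this psychophysical fit.

    # Hypothetical 2PD data: probe separation (mm) -> fraction of trials
    # on which the subject reported feeling two distinct points.
    data = [(1, 0.05), (2, 0.20), (3, 0.45), (4, 0.80), (5, 0.95)]

    def two_point_threshold(data, criterion=0.5):
        # Linearly interpolate the separation at which the "two points"
        # response rate crosses the criterion (conventionally 50%).
        for (d0, p0), (d1, p1) in zip(data, data[1:]):
            if p0 <= criterion <= p1:
                return d0 + (criterion - p0) * (d1 - d0) / (p1 - p0)
        return None

    print(f"{two_point_threshold(data):.2f} mm")  # ~3.14 mm for this data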
Spatial discrimination
Spatial discrimination is another form of two-point discrimination in which the practitioner tests the innervation of the skin with the two blunt points of a compass (drawing tool). Just as with 2PD, the patient must be able to discriminate between the two applied points. All other parameters, methods, and objectives are the same as for the 2PD test.
Applications
Blindness
When a person becomes blind, the other senses become heightened to help them perceive the world. An important sense for the blind is touch, which is used more frequently to help them perceive their surroundings. In people who are blind, the visual cortices become more responsive to auditory and tactile stimulation. Braille allows the blind to use their sense of touch to perceive the roughness and spacing of raised patterns as a form of language. Within the brain, activation of the occipital cortex is functionally relevant for tactile braille reading, as is the somatosensory cortex; each contributes in its own way to how effectively braille is read. People who are blind also rely heavily on tactile gnosis, spatial discrimination, graphesthesia, and two-point discrimination. Essentially, the occipital cortex supports judgements about the distance between braille patterns, which is related to spatial discrimination, while the somatosensory cortex supports judgements about the roughness of braille patterns, which is related to two-point discrimination. The various visual areas of the brain are as essential for a blind person reading braille as they are for a sighted person. Essentially, whether one is blind or not, the perception of objects involving tactile discrimination is not impaired by a lack of sight. Comparing blind and sighted people, the amount of activity in the somatosensory and visual areas does differ: in sighted people, activity in these areas is lower during tactile gnosis and higher for visual stimuli that do not involve touch. Nonetheless, shape discrimination, like tactile gnosis, produces a difference in brain activity between the blind and the sighted. The visual cortices of blind individuals are active during various vision-related tasks including tactile discrimination, and the function of these cortices resembles the activity of sighted adults.
Chronic pain
Some non-neuropathic chronic pain conditions have been shown to decrease tactile acuity, the ability to precisely detect touch. Different chronic pain conditions affect tactile acuity in different ways. One of the conditions with the most profound deficits in tactile acuity is arthritis, which affects tactile acuity both at the site of the pain and at remote locations away from it. This suggests the deficit may be a result of cortical reorganization, or cortical remapping, in the patient's brain. Other conditions, like complex regional pain syndrome and chronic lower back pain, show deficits only at the site of pain. Still others, like burning mouth syndrome, show no deficit in tactile acuity at all. Although there is evidence that some chronic pain conditions cause a decrease in tactile acuity, there is no evidence to suggest when this deficit becomes clinically meaningful and affects the patient's function.
Robotic tactile discrimination
As robots and prosthetic limbs become more complex, the need for sensors capable of detecting touch with high tactile acuity grows. Tactile sensors fall into three types, used for different tasks. The first, single-point sensors, can be compared to a single cell or a whisker and detect very local stimuli. The second type, high-spatial-resolution sensors, can be compared to a human fingertip and is essential for tactile acuity in robotic hands. The third type, low-spatial-resolution sensors, has tactile acuity similar to the skin on one's back or arm. These sensors can be placed meaningfully across the surface of a prosthetic or robot to give it the ability to sense touch in similar, if not better, ways than the human counterpart.
References
Perception | Tactile discrimination | Physics | 2,749 |
2,555,833 | https://en.wikipedia.org/wiki/Crypto%20API%20%28Linux%29 | Crypto API is a cryptography framework in the Linux kernel, for various parts of the kernel that deal with cryptography, such as IPsec and dm-crypt. It was introduced in kernel version 2.5.45 and has since expanded to include essentially all popular block ciphers and hash functions.
Userspace interfaces
Many platforms that provide hardware acceleration of AES encryption expose this to programs through an extension of the instruction set architecture (ISA) of the various chipsets (e.g. AES instruction set for x86). With this sort of implementation, any program (kernel-mode or user-space) may utilize these features directly.
On some platforms, however, such as the ARM-based Kirkwood (used in the SheevaPlug) and AMD Geode processors, the cryptographic acceleration is not implemented as an ISA extension and is only accessible through kernel-mode drivers. For user-mode applications that utilize encryption, such as wolfSSL, OpenSSL or GnuTLS, to take advantage of such acceleration, they must interface with the kernel.
AF_ALG
A socket-based interface that adds an AF_ALG address family; it was merged into version 2.6.38 of the Linux kernel mainline. There was once an OpenSSL plugin supporting AF_ALG, which was submitted for merging, and in version 1.1.0 OpenSSL landed another AF_ALG patch contributed by Intel. wolfSSL can make use of AF_ALG as well.
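AF_ALG is driven through ordinary socket calls: a "transform" socket is bound to an algorithm name, and accept() then yields an operation socket through which data is written and results are read. The following minimal sketch computes a SHA-256 digest this way; it assumes a kernel built with AF_ALG support and abbreviates error handling.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
    /* Describe the requested transform: the "hash" type, "sha256" algorithm. */
    struct sockaddr_alg sa = {
        .salg_family = AF_ALG,
        .salg_type   = "hash",
        .salg_name   = "sha256",
    };
    const char msg[] = "hello";
    unsigned char digest[32];                      /* SHA-256 yields 32 bytes */

    int tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);   /* transform socket  */
    bind(tfm, (struct sockaddr *)&sa, sizeof(sa)); /* select algorithm  */
    int op = accept(tfm, NULL, NULL);              /* operation socket  */

    write(op, msg, strlen(msg));        /* feed data to the kernel      */
    read(op, digest, sizeof(digest));   /* read back the finished hash  */

    for (size_t i = 0; i < sizeof(digest); i++)
        printf("%02x", digest[i]);
    putchar('\n');

    close(op);
    close(tfm);
    return 0;
}
```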
cryptodev
The /dev/crypto interface of the OpenBSD Cryptographic Framework was ported to Linux, but was never merged into the mainline kernel.
See also
Microsoft CryptoAPI
References
Application programming interfaces
Cryptographic software
Linux security software
Linux kernel features | Crypto API (Linux) | Mathematics | 353 |
8,930,340 | https://en.wikipedia.org/wiki/Cacls | In Microsoft Windows, cacls, and its replacement icacls, are native command-line utilities that can display and modify the security descriptors on files and folders. An access-control list is a list of permissions for a securable object, such as a file or folder, that controls who can access it. The cacls command is also available on ReactOS.
cacls
The cacls.exe utility is a deprecated command line editor of directory and file security descriptors in Windows NT 3.5 and later operating systems of the Windows NT family. Microsoft has produced the following newer utilities, some also subsequently deprecated, that offer enhancements to support changes introduced with version 3.0 of the NTFS filesystem:
xcacls.exe is supported by Windows 2000 and later and adds new features like setting Execute, Delete and Take Ownership permissions
xcacls.vbs
fileacl.exe
icacls.exe (included in Windows Server 2003 SP2 and later)
SubInAcl.exe - Resource Kit utility to set and replace permissions on various types of objects, including files, services and registry keys
Windows PowerShell (Get-Acl and Set-Acl cmdlets)
The ReactOS version was developed by Thomas Weidenmueller and is licensed under the GNU Lesser General Public License.
icacls
Stands for Integrity Control Access Control List. Windows Server 2003 Service Pack 2 and later include icacls, an in-box command-line utility that can display, modify, back up and restore ACLs for files and folders, as well as set integrity levels and ownership in Vista and later versions. It is not a complete replacement for cacls, however. For example, it does not support Security Descriptor Definition Language (SDDL) syntax directly via command-line parameters (only via the /restore option).
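A few representative icacls invocations are shown below. The paths and group name are placeholders; the switches used (/grant, /T, /save, /restore, /setintegritylevel) are documented in Microsoft's icacls reference linked under External links.

```
rem Show the current ACL on a directory
icacls C:\Data

rem Grant the built-in Users group read/execute access that is
rem inherited by subfolders (CI) and files (OI)
icacls C:\Data /grant "Users:(OI)(CI)RX"

rem Save the ACLs of a tree to a file, then restore them later
icacls C:\Data\* /save acls.txt /T
icacls C:\Data\ /restore acls.txt

rem Set the integrity level of a file (Windows Vista and later)
icacls C:\Data\untrusted.txt /setintegritylevel Low
```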
See also
SetACL
chmod
takeown
References
Further reading
The Security Descriptor Definition Language of Love (Part 1)
External links
cacls | Microsoft Docs
icacls | Microsoft Docs
ReactOS commands
fr:Cacls | Cacls | Technology | 446 |
71,815,747 | https://en.wikipedia.org/wiki/Y%20Cygni | Y Cygni is an eclipsing and double-lined spectroscopic binary star system in the constellation of Cygnus. It is located about from Earth. The system was one of the first binaries with a convincing detection of apsidal precession.
The two stars, being O-type main-sequence stars, orbit each other with a period of nearly 3 days.
Observation history
The early type of Y Cyg made it a popular target for astronomers in the past, and spectroscopic orbits have historically been computed numerous times. The first of these studies was published in 1920 by John Stanley Plaskett. Extensive spectroscopic studies of Y Cyg were carried out as early as 1930. Several follow-ups to these were published in 1959, 1971, and 1980. The last of these contained an estimate of the period of apsidal precession.
References
Cygnus (constellation)
Binary stars
Cygni, Y
198846
102999
O-type main-sequence stars
Algol variables | Y Cygni | Astronomy | 210 |
40,084,830 | https://en.wikipedia.org/wiki/Runoff%20footprint | A runoff footprint is the total surface runoff that a site produces over the course of a year. According to the United States Environmental Protection Agency (EPA), stormwater is "rainwater and melted snow that runs off streets, lawns, and other sites". Urbanized areas with high concentrations of impervious surfaces like buildings, roads, and driveways produce large volumes of runoff, which can lead to flooding, sewer overflows, and poor water quality. Since soil in urban areas can be compacted and have a low infiltration rate, the surface runoff estimated in a runoff footprint comes not just from impervious surfaces but also from pervious areas, including yards. The total runoff is a measure of the site's contribution to stormwater issues in an area, especially in urban areas with sewer overflows. Completing a runoff footprint allows a property owner to understand which areas of the site produce the most runoff and which scenarios of stormwater green solutions, like rain barrels and rain gardens, are most effective in mitigating this runoff and its costs to the community.
Significance
The runoff footprint is the stormwater equivalent to the carbon/energy footprint. When homeowners or business owners complete an energy audit or carbon footprint, they understand how they are consuming energy and learn how this consumption can be reduced through energy efficiency measures. Correspondingly, the runoff footprint allows someone to calculate their baseline annual runoff and assess what the impact of ideal stormwater green solutions would be for their site. Since the passage of the Clean Water Act in 1972, the EPA has monitored and regulated stormwater issues in urban areas. Municipalities across the United States are now required to upgrade sanitary and stormwater systems to meet EPA mandates. The total cost for these upgrades across the United States exceeds $3000 billion. The stormwater runoff from every property in an area can contribute to the overall stormwater issues including overflows and water pollution. Stormwater runoff carries nonpoint source pollution which is a leading cause of water quality issues.
By completing a runoff footprint, homeowners and business owners can consider how stormwater green solutions can reduce runoff on-site. Stormwater green solutions (also called green infrastructure) use "vegetation, soils, and natural processes to manage water and create healthier urban environments. At the scale of a city or county, green infrastructure refers to the patchwork of natural areas that provides habitat, flood protection, cleaner air, and cleaner water. At the scale of a neighborhood or site, green infrastructure refers to stormwater management systems that mimic nature by soaking up and storing water". Stormwater green solutions include bioswales (directional rain gardens), cisterns, green roofs, permeable pavement, rain barrels, and rain gardens. According to the EPA, onsite stormwater green solutions or low-impact developments (LIDs) can significantly reduce runoff and costly stormwater/sewer infrastructure upgrades.
Stormwater green solutions can also reduce energy consumption. Treating and pumping water is an energy-intensive activity. According to the River Network, the U.S. consumes at least 521 million MWh a year for water-related purposes, equivalent to about 13% of the nation's electricity consumption. Potable water must be treated and then pumped to the consumer. Wastewater is treated before being discharged. In areas with combined sewer systems or old separate sewer systems with high inflow and infiltration, stormwater is also treated at the wastewater treatment facilities. By capturing stormwater runoff onsite in rain barrels and cisterns, the consumption of potable water for irrigation and its corresponding energy impact can be reduced. The reduction of runoff from all types of stormwater green solutions reduces the stormwater that may end up at the wastewater treatment facility in areas with combined sewer systems or old separate sewers.
Completing a runoff footprint
There are a number of methods available to complete a runoff footprint. The simplest involve using a runoff coefficient, which according to the State Water Resources Control Board of California is "a dimensionless coefficient relating the amount of runoff to the amount of precipitation received. It is a larger value for areas with low infiltration and high runoff (pavement, steep gradient), and lower for permeable, well vegetated areas (forest, flat land)." The runoff coefficient for each surface type on a site can be multiplied by the area of that surface and by the annual precipitation, and the products summed, to generate a rough runoff footprint; a minimal example of this arithmetic follows. If the runoff coefficient and areas of proposed stormwater green solutions like rain gardens and bioswales for the site are known, the reduction in overall runoff from these improvements can be estimated.
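As a minimal sketch of the coefficient method, the following program sums coefficient × area × precipitation over a site's surfaces. The surface list, coefficients and rainfall figure are made-up examples for illustration, not recommended values.

```c
#include <stdio.h>

/* One surface type on the site: its area and an assumed runoff coefficient. */
struct surface {
    const char *name;
    double area_m2;   /* surface area in square metres            */
    double coeff;     /* dimensionless runoff coefficient, 0 to 1 */
};

int main(void)
{
    /* Illustrative values only -- real coefficients depend on soil,
       slope and local guidance. */
    struct surface site[] = {
        { "roof",     120.0, 0.95 },
        { "driveway",  40.0, 0.90 },
        { "lawn",     300.0, 0.20 },
    };
    double annual_precip_m = 0.9;  /* assumed annual precipitation, metres */
    double total_m3 = 0.0;

    for (size_t i = 0; i < sizeof site / sizeof site[0]; i++) {
        double runoff = site[i].coeff * site[i].area_m2 * annual_precip_m;
        printf("%-8s %7.1f m^3/year\n", site[i].name, runoff);
        total_m3 += runoff;
    }
    printf("Approximate annual runoff footprint: %.1f m^3\n", total_m3);
    return 0;
}
```

Replacing part of the lawn with a rain garden would, in this scheme, simply lower that surface's coefficient, and the reduction in the total shows the benefit of the improvement.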
More accurate runoff footprint tools exist. Computer modeling and detailed weather data make complex runoff footprints easy to produce: the amounts of pollution in the stormwater runoff can be estimated, and the effects of combinations of stormwater green solutions can be assessed. The James River Association of central Virginia provides an online tool where property owners in the James River watershed can generate a site-specific runoff pollution report. MyRunoff.org provides an online runoff footprint calculator for property owners across the United States to estimate their baseline runoff and the reduction from different scenarios of rain barrels and rain gardens. The EPA launched the National Stormwater Calculator in July 2013, a desktop application for Windows allowing users to model the annual impact of a range of stormwater green solutions.
References
External links
MyRunoff.org's Runoff Footprint Calculator
EPA's National Stormwater Calculator
James River Association Runoff Calculator
Water supply
Water pollution
Water and the environment
Environmental engineering
Hydrology and urban planning
Landscape
Sustainable urban planning | Runoff footprint | Chemistry,Engineering,Environmental_science | 1,138 |
2,922,292 | https://en.wikipedia.org/wiki/11%20Canis%20Majoris | 11 Canis Majoris is a single star in the southern constellation of Canis Major, the eleventh entry in John Flamsteed's catalogue of stars in that constellation. It has a blue-white hue and is visible to the naked eye with an apparent visual magnitude of 5.28. The star is approximately 1,010 light years from the Sun, based on parallax, and it is drifting further away with a radial velocity of around +15 km/s. It has an absolute magnitude of −1.63.
This star has a stellar classification of B8/9III, matching a B-type star that is in the giant stage. It has a high rate of spin with a projected rotational velocity of 130 km/s. The star is radiating 485 times the luminosity of the Sun from its photosphere at an effective temperature of 11,540 K.
References
B-type giants
Canis Major
Durchmusterung objects
Canis Majoris, 11
049229
032492
2504 | 11 Canis Majoris | Astronomy | 216 |
26,958,940 | https://en.wikipedia.org/wiki/Daflon | Daflon is an oral micronized purified phlebotonic flavonoid fraction containing 90% diosmin and 10% hesperidin. It is manufactured by Laboratoires Servier and is often used to treat or manage disorders of the blood vessels. Flavonoids are a type of phytochemical that have been associated with various effects on human health and are a component of many different pharmaceutical, nutraceutical, and cosmetic preparations. Diosmin is a flavone glycoside derived from hesperidin, a flavanone glycoside extracted from citrus fruits.
Vein diseases and hemorrhoids
Daflon is not an FDA-approved medication, and therefore it cannot be advertised for treatment of diseases in the United States. Daflon is under preliminary research for its potential use in treating vein diseases, or hemorrhoids. It is sold as a drug in France, Spain, Malaysia and Belgium.
There is moderate-certainty evidence that Daflon slightly reduces oedema compared to placebo in the treatment of chronic venous insufficiency. Little to no difference in quality of life after treatment with Daflon has been found, and there is low-certainty evidence that this class of drugs does not influence ulcer healing. Diosmiplex, a micronized purified flavonoid fraction of daflon with a similar venous-insufficiency indication, is sold as a prescription medical food in the US.
Pharmacological activity
Daflon plays a crucial role in the prevention of perivascular edema formation and the treatment of venous stasis. This activity can be explained by its antagonist activity against prostaglandin E2 (PgE2) and thromboxane (TxA2) biosynthesis, leading to inhibition of the inflammatory process. Moreover, it also contracts the lymphatic vessels, which maximizes lymphatic flow.
Dosage
For venous insufficiency, the dosage is 2 tablets of 500 mg daily. For an acute hemorrhoidal attack, the dosage is 6 tablets daily for 4 days, followed by 4 tablets daily over the next 3 days. For chronic venous disease, the dosage is 2 tablets a day for at least 2 months.
Side effects
Possible side effects include routine gastric disorders and neurovegetative disorders; however, toxicology studies indicate that diosmin is quite safe. Diosmin interacts in an inhibitory manner with some metabolic enzymes, so drug interactions are probable.
References
External links
Official website France
Official website Philippines
Flavonoids
Drugs acting on the cardiovascular system
Drug brand names | Daflon | Chemistry | 564 |
7,640,624 | https://en.wikipedia.org/wiki/Metro%20WSIT | Web Services Interoperability Technology (WSIT) is an open-source project started by Sun Microsystems to develop the next-generation of Web service technologies. It provides interoperability between Java Web Services and Microsoft's Windows Communication Foundation (WCF).
It consists of Java programming language APIs that enable advanced WS-* features to be used in a way that is compatible with Microsoft's Windows Communication Foundation as used by .NET. Interoperability between the two stacks is accomplished by implementing a number of Web Services specifications on top of JAX-WS, the API that underpins Java Web Services.
WSIT is currently under development as part of Eclipse Metro.
WSIT is a series of extensions to the basic SOAP protocol, and so uses JAX-WS and JAXB. It is not a new protocol such as the binary DCOM.
WSIT implements a number of WS-* specifications, including:
Metadata
WS-MetadataExchange
WS-Transfer
WS-Policy
Security
WS-Security
WS-SecureConversation
WS-Trust
WS-SecurityPolicy
Messaging
WS-ReliableMessaging
WS-RMPolicy
Transactions
WS-Coordination
WS-AtomicTransaction
See also
JAX-WS
References
External links
Sun Developer Network's WSIT page
WS-I and WSIT - What's the difference?
java.net project pages
WSIT java.net project page
GlassFish java.net project page
JAX-WS java.net project page
WSIT documentation
WSIT Tutorial
WS-I information
WS-I home page
Specifications
WS-MetadataExchange
WS-Transfer
WS-Security
WS-SecureConversation
WS-SecurityPolicy
WS-Trust
WS-ReliableMessaging
WS-RMPolicy
WS-Coordination
WS-AtomicTransaction
WS-Policy
WS-PolicyAttachment
A general framework, applicable but not limited to Web services, for interoperation of model-based services is described at
Interoperability
Java enterprise platform
Web services | Metro WSIT | Engineering | 439 |
69,233,992 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M21%20engine | The Mercedes-Benz M21 engine is a naturally aspirated, 2.0-liter, straight-6, internal-combustion piston engine designed, developed and produced by Mercedes-Benz between 1933 and 1936.
M21 Engine
The side-valve six-cylinder engine had a capacity of 1,961 cc which produced a claimed maximum output of at 3,200 rpm. The engine shared its piston stroke length with the smaller 6-cylinder unit fitted in the manufacturer's W15 model, but for the W21 the bore was increased by to . The stated top speed was 98 km/h (61 mph) for the standard length and 95 km/h (59 mph) for the long bodied cars. Power from the engine passed to the rear wheels through a four-speed manual transmission in which the top gear was effectively an overdrive ratio. The top two ratios featured synchromesh. The brakes operated on all four wheels via a hydraulic linkage.
During the model's final year, Mercedes-Benz announced, in June 1936, the option of a more powerful 2,229 cc engine, which was seen as a necessary response to criticism of the car's leisurely performance in long bodied form.
Applications
Mercedes-Benz W21
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M21 engine | Technology | 272 |
958,988 | https://en.wikipedia.org/wiki/History%20of%20astrology | Astrological beliefs in a relationship between celestial observations and terrestrial events have influenced various aspects of human history, including world-views, language and many elements of culture. It has been argued that astrology began as a study as soon as human beings made conscious attempts to measure, record, and predict seasonal changes by reference to astronomical cycles.
Early evidence of such practices appears as markings on bones and cave walls, which show that the lunar cycle was being noted as early as 25,000 years ago; the first step towards recording the Moon's influence upon tides and rivers, and towards organizing a communal calendar. With the Neolithic Revolution new needs were also being met by the increasing knowledge of constellations, whose appearances in the night-time sky change with the seasons, thus allowing the rising of particular star-groups to herald annual floods or seasonal activities. By the 3rd millennium BCE, widespread civilisations had developed sophisticated understanding of celestial cycles, and are believed to have consciously oriented their temples to create alignment with the heliacal risings of the stars.
There is scattered evidence to suggest that the oldest known astrological references are copies of texts made during this period, particularly in Mesopotamia. Two, from the Venus tablet of Ammisaduqa (compiled in Babylon around 1700 BC), are reported to have been made during the reign of king Sargon of Akkad (2334–2279 BC). Another, showing an early use of electional astrology, is ascribed to the reign of the Sumerian ruler Gudea of Lagash (c. 2144–2124 BC). However, there is controversy over whether they were genuinely recorded at the time or merely ascribed to ancient rulers by posterity. The oldest undisputed evidence of the use of astrology as an integrated system of knowledge is attributed to records that emerge from the first dynasty of Mesopotamia (1950–1651 BC).
Among West Eurasian peoples, the earliest evidence for astrology dates from the 3rd millennium BC, with roots in calendrical systems used to predict seasonal shifts and to interpret celestial cycles as signs of divine communications. Until the 17th century, astrology was considered a scholarly tradition, and it helped drive the development of astronomy. It was commonly accepted in political and cultural circles, and some of its concepts were used in other traditional studies, such as alchemy, meteorology and medicine. By the end of the 17th century, emerging scientific concepts in astronomy, such as heliocentrism, undermined the theoretical basis of astrology, which subsequently lost its academic standing and became regarded as a pseudoscience. Empirical scientific investigation has shown that predictions based on these systems are not accurate.
In the 20th century, astrology gained broader consumer popularity through the influence of regular mass media products, such as newspaper horoscopes.
Babylonian astrology
Babylonian astrology is the earliest recorded organized system of astrology, arising in the 2nd millennium BC. There is speculation that astrology of some form appeared in the Sumerian period in the 3rd millennium BC, but the isolated references to ancient celestial omens dated to this period are not considered sufficient evidence to demonstrate an integrated theory of astrology. The history of scholarly celestial divination is therefore generally reported to begin with late Old Babylonian texts (), continuing through the Middle Babylonian and Middle Assyrian periods ().
By the 16th century BC the extensive employment of omen-based astrology can be evidenced in the compilation of a comprehensive reference work known as Enuma Anu Enlil. Its contents consisted of 70 cuneiform tablets comprising 7,000 celestial omens. Texts from this time also refer to an oral tradition – the origin and content of which can only be speculated upon. At this time Babylonian astrology was solely mundane, concerned with the prediction of weather and political matters, and prior to the 7th century BC the practitioners' understanding of astronomy was fairly rudimentary. Astrological symbols likely represented seasonal tasks, and were used as a yearly almanac of listed activities to remind a community to do things appropriate to the season or weather (such as symbols representing times for harvesting, gathering shell-fish, fishing by net or line, sowing crops, collecting or managing water reserves, hunting, and seasonal tasks critical in ensuring the survival of children and young animals for the larger group). By the 4th century BC, their mathematical methods had progressed enough to calculate future planetary positions with reasonable accuracy, at which point extensive ephemerides began to appear.
Babylonian astrology developed within the context of divination. A collection of 32 tablets with inscribed liver models, dating from about 1875 BC, are the oldest known detailed texts of Babylonian divination, and these demonstrate the same interpretational format as that employed in celestial omen analysis. Blemishes and marks found on the liver of the sacrificial animal were interpreted as symbolic signs which presented messages from the gods to the king.
The gods were also believed to present themselves in the celestial images of the planets or stars with whom they were associated. Evil celestial omens attached to any particular planet were therefore seen as indications of dissatisfaction or disturbance of the god that planet represented. Such indications were met with attempts to appease the god and find manageable ways by which the god's expression could be realised without significant harm to the king and his nation. An astronomical report to the king Esarhaddon concerning a lunar eclipse of January 673 BC shows how the ritualistic use of substitute kings, or substitute events, combined an unquestioning belief in magic and omens with a purely mechanical view that the astrological event must have some kind of correlate within the natural world.
Ulla Koch-Westenholz, in her 1995 book Mesopotamian Astrology, argues that this ambivalence between a theistic and mechanic worldview defines the Babylonian concept of celestial divination as one which, despite its heavy reliance on magic, remains free of implications of targeted punishment with the purpose of revenge, and so "shares some of the defining traits of modern science: it is objective and value-free, it operates according to known rules, and its data are considered universally valid and can be looked up in written tabulations". Koch-Westenholz also establishes the most important distinction between ancient Babylonian astrology and other divinatory disciplines as being that the former was originally exclusively concerned with mundane astrology, being geographically oriented and specifically applied to countries, cities and nations, and almost wholly concerned with the welfare of the state and the king as the governing head of the nation. Mundane astrology is therefore known to be one of the oldest branches of astrology. It was only with the gradual emergence of horoscopic astrology, from the 6th century BC, that astrology developed the techniques and practice of natal astrology.
Hellenistic Egypt
In 525 BC Egypt was conquered by the Persians so there is likely to have been some Mesopotamian influence on Egyptian astrology. Arguing in favour of this, historian Tamsyn Barton gives an example of what appears to be Mesopotamian influence on the Egyptian zodiac, which shared two signs – the Balance and the Scorpion, as evidenced in the Dendera Zodiac (in the Greek version the Balance was known as the Scorpion's Claws).
After the occupation by Alexander the Great in 332 BC, Egypt came under Hellenistic rule and influence. The city of Alexandria was founded by Alexander after the conquest and during the 3rd and 2nd centuries BC, the Ptolemaic scholars of Alexandria were prolific writers. It was in Ptolemaic Alexandria that Babylonian astrology was mixed with the Egyptian tradition of Decanic astrology to create Horoscopic astrology. This contained the Babylonian zodiac with its system of planetary exaltations, the triplicities of the signs and the importance of eclipses. Along with this it incorporated the Egyptian concept of dividing the zodiac into thirty-six decans of ten degrees each, with an emphasis on the rising decan, the Greek system of planetary Gods, sign rulership and four elements.
The decans were a system of time measurement according to the constellations. They were led by the constellation Sothis or Sirius. The risings of the decans in the night were used to divide the night into 'hours'. The rising of a constellation just before sunrise (its heliacal rising) was considered the last hour of the night. Over the course of the year, each constellation rose just before sunrise for ten days. When they became part of the astrology of the Hellenistic Age, each decan was associated with ten degrees of the zodiac. Texts from the 2nd century BC list predictions relating to the positions of planets in zodiac signs at the time of the rising of certain decans, particularly Sothis. The earliest Zodiac found in Egypt dates to the 1st century BC, the Dendera Zodiac.
Particularly important in the development of horoscopic astrology was the Greco-Roman astrologer and astronomer Ptolemy, who lived in Alexandria during Roman Egypt. Ptolemy's work the Tetrabiblos laid the basis of the Western astrological tradition, and as a source of later reference is said to have "enjoyed almost the authority of a Bible among the astrological writers of a thousand years or more". It was one of the first astrological texts to be circulated in Medieval Europe after being translated from Arabic into Latin by Plato of Tivoli (Tiburtinus) in Spain, 1138.
According to Firmicus Maternus (4th century), the system of horoscopic astrology was given early on to an Egyptian pharaoh named Nechepso and his priest Petosiris. The Hermetic texts were also put together during this period and Clement of Alexandria, writing in the Roman era, demonstrates the degree to which astrologers were expected to have knowledge of the texts in his description of Egyptian sacred rites:
This is principally shown by their sacred ceremonial. For first advances the Singer, bearing some one of the symbols of music. For they say that he must learn two of the books of Hermes, the one of which contains the hymns of the gods, the second the regulations for the king's life. And after the Singer advances the Astrologer, with a horologe in his hand, and a palm, the symbols of astrology. He must have the astrological books of Hermes, which are four in number, always in his mouth.
Greece and Rome
The conquest of Asia by Alexander the Great exposed the Greeks to the cultures and cosmological ideas of Syria, Babylon, Persia and central Asia. Greek overtook cuneiform script as the international language of intellectual communication and part of this process was the transmission of astrology from cuneiform to Greek. Sometime around 280 BC, Berossus, a priest of Bel from Babylon, moved to the Greek island of Kos in order to teach astrology and Babylonian culture to the Greeks. With this, what historian Nicholas Campion calls, "the innovative energy" in astrology moved west to the Hellenistic world of Greece and Egypt.
According to Campion, the astrology that arrived from the Eastern World was marked by its complexity, with different forms of astrology emerging. By the 1st century BC two varieties of astrology were in existence, one that required the reading of horoscopes in order to establish precise details about the past, present and future; the other being theurgic (literally meaning 'god-work'), which emphasised the soul's ascent to the stars. While they were not mutually exclusive, the former sought information about the life, while the latter was concerned with personal transformation, where astrology served as a form of dialogue with the Divine.
As with much else, Greek influence played a crucial role in the transmission of astrological theory to Rome. However, our earliest references to demonstrate its arrival in Rome reveal its initial influence upon the lower orders of society, and display concern about uncritical recourse to the ideas of Babylonian 'star-gazers'. Among the Greeks and Romans, Babylonia (also known as Chaldea) became so identified with astrology that 'Chaldean wisdom' came to be a common synonym for divination using planets and stars.
The first definite reference to astrology comes from the work of the orator Cato, who in 160 BC composed a treatise warning farm overseers against consulting with Chaldeans. The 2nd-century Roman poet Juvenal, in his satirical attack on the habits of Roman women, also complains about the pervasive influence of Chaldeans, despite their lowly social status, saying "Still more trusted are the Chaldaeans; every word uttered by the astrologer they will believe has come from Hammon's fountain, ... nowadays no astrologer has credit unless he has been imprisoned in some distant camp, with chains clanking on either arm".
One of the first astrologers to bring Hermetic astrology to Rome was Thrasyllus, who, in the first century AD, acted as the astrologer for the emperor Tiberius. Tiberius was the first emperor reported to have had a court astrologer, although his predecessor Augustus had also used astrology to help legitimise his Imperial rights. In the second century AD, the astrologer Claudius Ptolemy was so obsessed with getting horoscopes accurate that he began the first attempt to make an accurate world map (maps before this were more relativistic or allegorical) so that he could chart the relationship between the person's birthplace and the heavenly bodies. While doing so, he coined the term "geography".
Even though some use of astrology by the emperors appears to have happened, there was also a prohibition on astrology to a certain extent as well. In the 1st century AD, Publius Rufus Anteius was accused of the crime of funding the banished astrologer Pammenes, and requesting his own horoscope and that of then emperor Nero. For this crime, Nero forced Anteius to commit suicide. At this time, astrology was likely to result in charges of magic and treason.
Cicero's De divinatione (44 BC), which rejects astrology and other allegedly divinatory techniques, is a fruitful historical source for the conception of scientificity in Roman classical Antiquity. The Pyrrhonist philosopher Sextus Empiricus compiled the ancient arguments against astrology in his book Against the Astrologers.
Islamic world
Astrology was taken up enthusiastically by Islamic scholars following the collapse of Alexandria to the Arabs in the 7th century, and the founding of the Abbasid empire in the 8th century. The second Abbasid caliph, Al Mansur (754–775) founded the city of Baghdad to act as a centre of learning, and included in its design a library-translation centre known as Bayt al-Hikma 'Storehouse of Wisdom', which continued to receive development from his heirs and was to provide a major impetus for Arabic translations of Hellenistic astrological texts. The early translators included Mashallah, who helped to elect the time for the foundation of Baghdad, and Sahl ibn Bishr (a.k.a. Zael), whose texts were directly influential upon later European astrologers such as Guido Bonatti in the 13th century, and William Lilly in the 17th century. Knowledge of Arabic texts started to become imported into Europe during the Latin translations of the 12th century.
In the 9th century, the Persian astrologer Albumasar was thought to be one of the greatest astrologers of his time. His practical manuals for training astrologers profoundly influenced Muslim intellectual history and, through translations, that of western Europe and Byzantium in the 10th century. Albumasar's Introductorium in Astronomiam was one of the most important sources for the recovery of Aristotle for medieval European scholars. Another was the Persian mathematician, astronomer, astrologer and geographer Al Khwarizmi. The Arabs greatly increased the knowledge of astronomy, and many of the star names that are commonly known today, such as Aldebaran, Altair, Betelgeuse, Rigel and Vega, retain the legacy of their language. They also developed the list of Hellenistic lots to the extent that they became historically known as Arabic parts, for which reason it is often wrongly claimed that the Arabic astrologers invented their use, whereas they are clearly known to have been an important feature of Hellenistic astrology.
During the advance of Islamic science some of the practices of astrology were refuted on theological grounds by astronomers such as Al-Farabi (Alpharabius), Ibn al-Haytham (Alhazen) and Avicenna. Their criticisms argued that the methods of astrologers were conjectural rather than empirical, and conflicted with orthodox religious views of Islamic scholars through the suggestion that the Will of God can be precisely known and predicted in advance. Such refutations mainly concerned 'judicial branches' (such as horary astrology), rather than the more 'natural branches' such as medical and meteorological astrology, these being seen as part of the natural sciences of the time.
For example, Avicenna's 'Refutation against astrology' Resāla fī ebṭāl aḥkām al-nojūm, argues against the practice of astrology while supporting the principle of planets acting as the agents of divine causation which express God's absolute power over creation. Avicenna considered that the movement of the planets influenced life on earth in a deterministic way, but argued against the capability of determining the exact influence of the stars. In essence, Avicenna did not refute the essential dogma of astrology, but denied our ability to understand it to the extent that precise and fatalistic predictions could be made from it.
Medieval and Renaissance Europe
While astrology in the East flourished following the break up of the Roman world, with Indian, Persian and Islamic influences coming together and undergoing intellectual review through an active investment in translation projects, Western astrology in the same period had become "fragmented and unsophisticated ... partly due to the loss of Greek scientific astronomy and partly due to condemnations by the Church."
Translations of Arabic works into Latin started to make their way to Spain by the late 10th century, and in the 12th century the transmission of astrological works from Arabia to Europe "acquired great impetus".
By the 13th century astrology had become a part of everyday medical practice in Europe. Doctors combined Galenic medicine (inherited from the Greek physician Galen, AD 129–216) with studies of the stars. By the end of the 1500s, physicians across Europe were required by law to calculate the position of the Moon before carrying out complicated medical procedures, such as surgery or bleeding.
Influential works of the 13th century include those of the British monk Johannes de Sacrobosco ( 1195–1256) and the Italian astrologer Guido Bonatti from Forlì (Italy). Bonatti served the communal governments of Florence, Siena and Forlì and acted as advisor to Frederick II, Holy Roman Emperor. His astrological text-book Liber Astronomiae ('Book of Astronomy'), written around 1277, was reputed to be "the most important astrological work produced in Latin in the 13th century". Dante Alighieri immortalised Bonatti in his Divine Comedy (early 14th century) by placing him in the eighth Circle of Hell, a place where those who would divine the future are forced to have their heads turned around (to look backwards instead of forwards).
In medieval Europe, a university education was divided into seven distinct areas, each represented by a particular planet and known as the seven liberal arts. Dante attributed these arts to the planets. As the arts were seen as operating in ascending order, so were the planets in decreasing order of planetary speed: grammar was assigned to the Moon, the quickest moving celestial body, dialectic was assigned to Mercury, rhetoric to Venus, music to the Sun, arithmetic to Mars, geometry to Jupiter and astrology/astronomy to the slowest moving body, Saturn.
Medieval writers used astrological symbolism in their literary themes. For example, Dante's Divine Comedy builds varied references to planetary associations within his described architecture of Hell, Purgatory and Paradise, (such as the seven layers of Purgatory's mountain purging the seven cardinal sins that correspond to astrology's seven classical planets). Similar astrological allegories and planetary themes are pursued through the works of Geoffrey Chaucer.
Chaucer's astrological passages are particularly frequent and knowledge of astrological basics is often assumed through his work. He knew enough of his period's astrology and astronomy to write a Treatise on the Astrolabe for his son. He pinpoints the early spring season of the Canterbury Tales in the opening verses of the prologue by noting that the Sun "hath in the Ram his halfe cours yronne". He makes the Wife of Bath refer to "sturdy hardiness" as an attribute of Mars, and associates Mercury with "clerkes". In the early modern period, astrological references are also to be found in the works of William Shakespeare and John Milton.
One of the earliest English astrologers to leave details of his practice was Richard Trewythian (b. 1393). His notebook demonstrates that he had a wide range of clients, from all walks of life, and indicates that engagement with astrology in 15th-century England was not confined to those within learned, theological or political circles.
During the Renaissance, court astrologers would complement their use of horoscopes with astronomical observations and discoveries. Many individuals now credited with having overturned the old astrological order, such as Tycho Brahe, Galileo Galilei and Johannes Kepler, were themselves practicing astrologers.
At the end of the Renaissance the confidence placed in astrology diminished, with the breakdown of Aristotelian physics and the rejection of the distinction between the celestial and sublunar realms, which had historically acted as the foundation of astrological theory. Keith Thomas writes that although heliocentrism is consistent with astrological theory, 16th- and 17th-century astronomical advances meant that "the world could no longer be envisaged as a compact inter-locking organism; it was now a mechanism of infinite dimensions, from which the hierarchical subordination of earth to heaven had irrefutably disappeared". Initially, amongst the astronomers of the time, "scarcely anyone attempted a serious refutation in the light of the new principles", and in fact astronomers "were reluctant to give up the emotional satisfaction provided by a coherent and interrelated universe". By the 18th century the intellectual investment which had previously maintained astrology's standing was largely abandoned, a shift documented by the historian of science Ann Geneva.
India
The earliest recorded use of astrology in India dates to the Vedic period. Astrology, or jyotiṣa, is listed as a Vedanga, or branch of the Vedas of the Vedic religion. The only work of this class to have survived is the Vedanga Jyotisha, which contains rules for tracking the motions of the sun and the moon in the context of a five-year intercalation cycle. The date of this work is uncertain, as its late style of language and composition, consistent with the last centuries BC, albeit pre-Mauryan, conflicts with some internal evidence of a much earlier date in the 2nd millennium BC. Indian astronomy and astrology developed together. The earliest treatise on Jyotisha, the Bhrigu Samhita, was compiled by the sage Bhrigu during the Vedic era. The sage Bhrigu is also called the 'Father of Hindu Astrology', and is one of the venerated Saptarishi, or seven Vedic sages. The Saptarishis are also symbolized by the seven main stars in the Ursa Major constellation.
The documented history of Jyotisha in the subsequent newer sense of modern horoscopic astrology is associated with the interaction of Indian and Hellenistic cultures through the Greco-Bactrian and Indo-Greek Kingdoms. The oldest surviving treatises, such as the Yavanajataka or the Brihat-Samhita, date to the early centuries AD. The oldest astrological treatise in Sanskrit is the Yavanajataka ("Sayings of the Greeks"), a versification by Sphujidhvaja in 269/270 AD of a now lost translation of a Greek treatise by Yavanesvara during the 2nd century AD under the patronage of the Indo-Scythian king Rudradaman I of the Western Satraps.
Written on pages of tree bark, the Samhita (Compilation) is said to contain five million horoscopes comprising all who have lived in the past or will live in the future. The first named authors writing treatises on astronomy are from the 5th century AD, the date when the classical period of Indian astronomy can be said to begin. Besides the theories of Aryabhata in the Aryabhatiya and the lost Arya-siddhānta, there is the Pancha-Siddhāntika of Varahamihira.
China
The Chinese astrological system is based on native astronomy and calendars, and its significant development is tied to that of native astronomy, which came to flourish during the Han dynasty (2nd century BC – 2nd century AD).
Chinese astrology has a close relation with Chinese philosophy (theory of three harmonies: heaven, earth and water) and uses the principles of yin and yang, and concepts that are not found in Western astrology, such as the wu xing teachings, the 10 Celestial stems, the 12 Earthly Branches, the lunisolar calendar (moon calendar and sun calendar), and the time calculation after year, month, day and shichen (時辰).
Astrology was traditionally regarded highly in China, and Confucius is said to have treated astrology with respect saying: "Heaven sends down its good or evil symbols and wise men act accordingly". The 60-year cycle combining the five elements with the twelve animal signs of the zodiac has been documented in China since at least the time of the Shang (Shing or Yin) dynasty (c. 1766 BC – c. 1050 BC). Oracle bones have been found dating from that period with the date according to the 60-year cycle inscribed on them, along with the name of the diviner and the topic being divined. Astrologer Tsou Yen lived around 300 BC, and wrote: "When some new dynasty is going to arise, heaven exhibits auspicious signs for the people".
There is debate as to whether the Babylonian astrology influenced early development of Chinese astrology. Later in the 6th century, the translation of the Mahāsaṃnipāta Sūtra brought the Babylonian system to China. Though it did not displace Chinese astrology, it was referenced in several poems.
Mesoamerica
The calendars of Pre-Columbian Mesoamerica are based upon a system which had been in common use throughout the region, dating back to at least the 6th century BC. The earliest calendars were employed by peoples such as the Zapotecs and Olmecs, and later by such peoples as the Maya, Mixtec and Aztecs. Although the Mesoamerican calendar did not originate with the Maya, their subsequent extensions and refinements to it were the most sophisticated. Along with those of the Aztecs, the Maya calendars are the best-documented and most completely understood.
The distinctive Mayan calendar used two main systems, one plotting the solar year of 360 days, which governed the planting of crops and other domestic matters; the other called the Tzolkin of 260 days, which governed ritual use. Each was linked to an elaborate astrological system to cover every facet of life. On the fifth day after the birth of a boy, the Mayan astrologer-priests would cast his horoscope to see what his profession was to be: soldier, priest, civil servant or sacrificial victim. A 584-day Venus cycle was also maintained, which tracked the appearance and conjunctions of Venus. Venus was seen as a generally inauspicious and baleful influence, and Mayan rulers often planned the beginning of warfare to coincide with when Venus rose. There is evidence that the Maya also tracked the movements of Mercury, Mars and Jupiter, and possessed a zodiac of some kind. The Mayan name for the constellation Scorpio was also 'scorpion', while the name of the constellation Gemini was 'peccary'. There is some evidence for other constellations being named after various beasts. The most famous Mayan astrological observatory still intact is the Caracol observatory in the ancient Mayan city of Chichen Itza in modern-day Mexico.
The Aztec calendar shares the same basic structure as the Mayan calendar, with two main cycles of 360 days and 260 days. The 260-day calendar was called Tonalpohualli and was used primarily for divinatory purposes. Like the Mayan calendar, these two cycles formed a 52-year 'century', sometimes called the Calendar Round.
See also
Astrology and science
Classical planets in Western alchemy
Jewish views on astrology
List of astrological traditions, types, and systems
Worship of heavenly bodies
Notes
Sources
Nicholas Campion, A History of Western Astrology Vol. 2, The Medieval and Modern Worlds, Continuum 2009. .
.
(PDF version)
Further reading
External links
Astrology
Obsolete scientific theories | History of astrology | Astronomy | 6,047 |
12,337,275 | https://en.wikipedia.org/wiki/Flow-accelerated%20corrosion | Flow-accelerated corrosion (FAC), also known as flow-assisted corrosion, is a corrosion mechanism in which a normally protective oxide layer on a metal surface dissolves in fast-flowing water. The underlying metal corrodes to re-create the oxide, and thus the metal loss continues.
By definition, the rate of FAC depends on the flow velocity. FAC often affects carbon steel piping carrying ultra-pure, deoxygenated water or wet steam. Stainless steel does not suffer from FAC, and FAC of carbon steel halts in the presence of a small amount of oxygen dissolved in the water. FAC rates decrease rapidly with increasing water pH.
FAC has to be distinguished from erosion corrosion because the fundamental mechanisms for the two corrosion modes are different. FAC does not involve impingement of particles, bubbles, or cavitation which cause the mechanical (often crater-like) wear on the surface. By contrast to mechanical erosion, FAC involves dissolution of normally poorly soluble oxide by combined electrochemical, water chemistry and mass-transfer phenomena. Nevertheless, the terms FAC and erosion are sometimes used interchangeably because the actual mechanism may, in some cases, be unclear.
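For illustration, FAC is commonly described as mass-transfer-controlled dissolution of the oxide layer; the sketch below is a generic model of that kind, with assumed symbols and a Chilton–Colburn-style correlation, not a formula taken from this article.

```latex
% Illustrative mass-transfer-limited model of FAC.
%   R       metal-loss rate
%   k_m     mass-transfer coefficient
%   C_eq    iron solubility at the oxide/water interface
%   C_b     iron concentration in the bulk water
%   Sh, Re, Sc   Sherwood, Reynolds and Schmidt numbers
%   D       diffusivity of dissolved iron;  d  pipe diameter
R = k_m \left( C_{\mathrm{eq}} - C_{\mathrm{b}} \right), \qquad
k_m = \frac{\mathrm{Sh}\, D}{d}, \qquad
\mathrm{Sh} \approx 0.023\, \mathrm{Re}^{0.8}\, \mathrm{Sc}^{1/3}
```

Because Re rises with flow velocity, so does k_m, reproducing the velocity dependence noted above; likewise, raising the water pH lowers C_eq and hence the driving force for dissolution, consistent with the rapid decrease of FAC rates at higher pH.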
FAC was the cause of several high-profile accidents in power plants; for example, the rupture of a high-pressure condensate line at Virginia Power's Surry nuclear plant in 1986 resulted in four fatalities and four injuries.
See also
Erosion Corrosion of Copper Water Tubes
Oxygenated treatment
References
Further reading
"Flow Accelerated Corrosion is Still With Us...," 2008 By Dave Daniels, M&M Engineering
"Flow Accelerated Corrosion Evaluation, A Case Study " 2008 By Jon McFarlen, M&M Engineering
Corrosion | Flow-accelerated corrosion | Chemistry,Materials_science | 354 |
1,363,716 | https://en.wikipedia.org/wiki/Allianz%20Arena | Allianz Arena (; known as Munich Football Arena for UEFA competitions) is a football stadium in Munich, Bavaria, Germany, with a 70,000 seating capacity for international matches and 75,000 for domestic matches. Widely known for its exterior of inflated ETFE plastic panels, it is the first stadium in the world with a full colour changing exterior. Located at Werner-Heisenberg-Allee 25 at the northern edge of Munich's Schwabing-Freimann borough on the Fröttmaning Heath, it is the second-largest stadium in Germany behind the Westfalenstadion in Dortmund.
Bayern Munich have played their home games at the Allianz Arena since the start of the 2005–06 season. The club had previously played their home games at the Munich Olympic Stadium since 1972. 1860 Munich previously had a 50 per cent share in the stadium, but, in 2006, sold this to Bayern for €11m to help resolve a serious financial crisis that saw 1860 facing bankruptcy. The arrangement allowed 1860 Munich to play at the stadium while retaining no ownership until 2025. However, in July 2017 Bayern terminated the rental contract with 1860, making themselves the sole tenants of the stadium.
The large locally based financial services provider Allianz purchased the naming rights to the stadium for 30 years. However, this name cannot be used when hosting FIFA and UEFA events, since these governing bodies have policies forbidding corporate sponsorship from companies that are not official tournament partners. During the 2006 FIFA World Cup, the stadium was referred to as FIFA WM-Stadion München (FIFA World Cup Stadium, Munich). In UEFA club, Nations League and international matches, it is known as the Fußball Arena München (Football Arena Munich), and it hosted the 2012 UEFA Champions League Final and will host the 2025 final (moved from 2023), as well as matches during UEFA Euro 2024. Since 2012, the museum of Bayern Munich, FC Bayern Erlebniswelt, has been located inside the Allianz Arena.
In 2022, it hosted the first regular-season National Football League (NFL) American football game played in Germany as part of the NFL International Series.
Design
Capacity
Effective with the city's approval of modifications granted on 16 January 2006, the legal capacity of the stadium increased from 69,000 to 71,000 spectators (including standing room). The lower tier can seat up to 20,000, the middle tier up to 24,000, and the upper tier up to 22,000. 10,400 of the seats in the lower-tier corners can be converted to standing room to allow an additional 3,120 spectators. The total capacity includes 2,000 business seats, 400 seats for the press, 106 luxury boxes with seating for up to 174, and 165 berths for wheelchairs and the like. From the second half of the 2005–06 Bundesliga season, the arena was able to accommodate 69,901 spectators at league and DFB-Pokal games, but because of UEFA regulations, the capacity remained at 66,000 seats for UEFA Champions League and UEFA Cup games. Bayern Munich limited capacity during their league and cup games to 69,000. The partial roof covers all seats, although winds can still blow rain onto some of them. Prior to the 2012–13 season, Bayern Munich announced that capacity had been increased to 71,000 for domestic matches and 68,000 for UEFA matches, with the addition of 2,000 seats in the upper tier of the arena.
Allianz Arena also offers three day-care centres and a fan shop, the FC Bayern Munich Megastore. Merchandise is offered at stands all along the inside of the exterior wall in the area behind the seats. Numerous restaurants and fast-food establishments are also located around the stadium.
There are four team locker rooms (one each for the two home teams and their respective opponents), four coaches' locker rooms, and two locker rooms for referees. Two areas are provided where athletes can warm up (approx. 110 m2 each). There are also 550 toilets and 190 monitors in the arena.
On 28 April 2013, FC Bayern announced it would be selling 300 more tickets in the Südkurve starting with the 2013–14 Bundesliga season.
On 21 January 2014, Karl-Heinz Rummenigge declared that FC Bayern was discussing a further expansion of the Allianz Arena. About 2,000 new seats were to be installed in the upper tier and about 2,000 more tickets in the Nord- and Südkurve. In August 2014, it was reported that the capacity expansion was completed leading to a new maximum capacity of 75,024 in the Bundesliga and 69,334 in international matches. An expansion was approved in January 2015 to expand the stadium's capacity to 75,000 for Bundesliga Games and 70,000 for games in the Champions League.
Construction
The stadium construction began on 21 October 2002 and it was officially opened on 30 May 2005. The primary designers are architects Herzog & de Meuron. The stadium is designed so that the main entrance to the stadium would be from an elevated esplanade separated from the parking space consisting of Europe's biggest underground car park. The roof of the stadium has built-in roller blinds which may be drawn back and forth during games to provide protection from the sun.
Total concrete used during stadium construction: 120,000 m3
Total concrete used for the parking garage: 85,000 m3
Total steel used during stadium construction: 22,000 tonnes
Total steel used for the parking garage: 14,000 tonnes
Luminous exterior
The arena facade is constructed of 2,760 ETFE-foil air panels that are kept inflated with dry air to a differential pressure of 3.5 Pa. The panels appear white from far away, but close up they are covered in small dots; at a distance the eye blends the dots into white, while up close it is possible to see through the foil. The foil has a thickness of 0.2 mm. Each panel can be independently lit with white, red, or blue light. The panels are lit for each game with the colours of the respective home team: red for Bayern Munich, blue for TSV and white for the Germany national team. White is also used when the stadium is a neutral venue, as in the 2012 UEFA Champions League Final. Other colours, multicolour schemes or changing lighting schemes are technically possible, but the Munich Police strongly insist on a single-colour lighting scheme after several car accidents on the nearby A9 Autobahn in which drivers were distracted by the changing lights.
Allianz Arena's innovative stadium-facade lighting concept has since been adopted by other venues, such as MetLife Stadium in New Jersey, which lights up in blue for the National Football League's Giants and green for the Jets. The lighting costs about €50 (US$75) per hour in electricity and emits enough light that, on clear nights, the stadium can easily be spotted from Austrian mountain tops at a distance of 50 miles (80 km).
Transport
Patrons may park their cars in Europe's largest parking structure, comprising four four-storey parking garages with 9,800 parking places. In addition, 1,200 places were built into the first two tiers of the arena, 350 places are available for buses (240 at the north end, and 110 at the south entrance), and 130 more spots are reserved for those with disabilities.
The stadium is located next to the Fröttmaning U-Bahn station. This is on the U6 line of the Munich U-Bahn.
Surroundings
From the subway station just south of the arena, visitors approach the stadium through a park designed to disentangle crowds and guide them to the entrance. An esplanade rises gradually from ground level at the subway station entrance, effectively forming the roof of the parking garage, up to the entrance level of the stadium. On the other side of the Autobahn, the Fröttmaning Hill with its windmill affords a marvellous view of the stadium. Also located there is the Romanesque Heilig-Kreuz-Kirche, the oldest structure in the City of Munich designed to serve religious purposes, together with its copy, an artwork in concrete commemorating the village of Fröttmaning, which disappeared with the construction of the Autobahn.
Owners
The arena was commissioned by the Allianz Arena München Stadion GmbH, founded in 2001, and was initially owned in equal parts by the two football clubs that called it home. The GmbH's CEO was Karl-Heinz Wildmoser Jr. until the stadium corruption affair unravelled (see below); since then, Bernd Rauch, Peter Kerspe, and Walter Leidecker have led the company. In April 2006, FC Bayern Munich bought out TSV 1860 Munich's 50 per cent share in the arena for a reported €11 million. 1860 managing director Stefan Ziffzer stated that the deal prevented insolvency for the club. The terms of the agreement gave 1860 the right to buy back their 50 per cent share for the sale price plus interest at any time before June 2010; in November 2007, 1860 Munich waived that right. Beforehand, the two clubs had shared the income from two friendly matches equally instead of routing that money to the Allianz Arena GmbH. As a result of 1860 Munich's financial troubles, Bayern Munich thus took over all the shares and owns 100 per cent of the Allianz Arena.
Name
Allianz paid a significant sum for the right to attach its name to the stadium for a duration of 30 years. However, Allianz is not a sponsor of UEFA and FIFA competitions, so the logo is covered during Champions League games and was removed during the 2006 FIFA World Cup and UEFA Euro 2024.
Cost
The cost of the construction itself ran to €286 million, but financing costs raised that figure to a total of €340 million. In addition, the city and State incurred approximately €210 million for area development and infrastructure improvements.
History
On 21 October 2001, voters went to the polls to determine whether a new stadium should be built at this location and whether the city of Munich should provide the necessary infrastructure. About two-thirds of the voters decided in favour of the proposition. The alternative to constructing a new arena had been a major reconstruction of the Olympic Stadium, an option refused by its architect Günther Behnisch.
Swiss architect firm Herzog & de Meuron then developed the concept of the stadium with a see-through exterior made of ETFE-foil panels that can be lit from the inside and are self-cleaning. Construction started in late 2002 and was completed by the end of April 2005.
The Fröttmaning and Marienplatz stations of the U6 subway line were expanded and improved in conjunction with the arena's construction. The Fröttmaning station's platforms were moved slightly southwards and expanded from two to four tracks, while the Marienplatz U-Bahn station received additional pedestrian connector tunnels running parallel to the subway tracks and leading towards the S-Bahn portion of the station, easing congestion among passengers connecting to the Munich S-Bahn. To handle the additional traffic load, the A9 Autobahn was widened to three and four lanes in each direction and another exit was added to the A99 north of the arena.
On 19 May 2012, the 2011–12 UEFA Champions League final was held at the Allianz Arena. Bayern Munich, drawn as the home team, faced Chelsea. Chelsea won on penalties after the match had finished 1–1 following regulation and extra time; Bastian Schweinsteiger's penalty hit Petr Čech's left post, and Didier Drogba scored the winning penalty. On 25 May 2012, Bayern opened a museum about its history, the FC Bayern Erlebniswelt, inside the Allianz Arena.
Following TSV 1860 Munich's departure from the stadium after its relegation to the 3. Liga in 2016–17, Bayern Munich gave the Allianz Arena a significant facelift a year later, replacing the old grey seats with new ones in a combination of red and white, the colours of the club. The stands now display the FC Bayern crest, the writing "FC Bayern München" on one side and "Mia San Mia", the club's motto, on the other. Several other modifications were also made, including decorating walls with images from the club's history, adding more red throughout, and opening an FC Bayern store.
On 19 September 2024, it was announced that the stadium's address would be changed to "Franz Beckenbauer Platz 5" in honour of the late German football legend Franz Beckenbauer, who was born in Munich and played for Bayern Munich from 1964 to 1977. The change was to take effect in January 2025, ahead of that year's Champions League final, which the stadium will host.
Stadium corruption affair
Between March 2004 and August 2006, a corruption affair relating to the stadium occupied the football world and German courts. On 9 March 2004, Karl-Heinz Wildmoser Sr., president of TSV 1860 Munich, his son Karl-Heinz Wildmoser Jr., chief executive officer of Allianz Arena München Stadion GmbH, and two others were charged with corruption in connection with the award of arena construction contracts and taken into custody. On 12 March, Wildmoser Sr. struck a plea bargain and was released. As part of the plea bargain, he relinquished the presidency of the club three days later, and on 18 May, the investigation into his conduct was closed.
His son, Karl-Heinz Wildmoser Jr., remained in custody. At a bail hearing on 29 June, the judge refused bail on the grounds of flight risk and danger of obstruction of justice. The District Attorney filed charges on 23 August 2004, accusing him of fraud, corruption and tax evasion. Prosecutors alleged that Wildmoser Jr. had awarded the construction contract at an inflated price, provided the Austrian builder Alpine with inside information that enabled it to win the contract, and received €2.8 million in return.
On 13 May 2005, Karl-Heinz Wildmoser Jr. was convicted and sentenced by a Munich court to four and a half years in prison. He was released on bail pending his appeal. The Federal Court of Justice rejected the appeal in August 2006.
Opening day
On 30 May 2005, 1860 Munich played an exhibition game against 1. FC Nürnberg and won, 3–2. The next day, the record German champions Bayern Munich played a game against the Germany national team. Both games had been sold out since early March 2005. Patrick Milchraum of TSV 1860 scored the first official goal at the stadium.
On 2 June 2005, in response to high demand, the first "arena derby" took place between the two tenants. That game was won by TSV 1860 with the help of a goal by Paul Agostino.
Prior to opening day, the alumni teams of both clubs played each other in an exhibition game in front of a crowd of 30,000. During the game, all stadium functions were thoroughly tested.
The stadium's first goal in a competitive game went to Roy Makaay of FC Bayern in the semi-finals of 2005 DFL-Ligapokal on 26 July 2005. In the same game, Thomas Hitzlsperger of VfB Stuttgart scored the first goal in an official game by a visiting team. The game ended with a 2–1 win for Stuttgart.
The first goal in a league game was scored by Owen Hargreaves of FC Bayern when the home team won 3–0 in its 2005–06 Bundesliga season opener against Borussia Mönchengladbach on 5 August 2005. The first league goal by a visiting team was scored by Dynamo Dresden on 9 September 2005 in the 2. Bundesliga match against 1860 Munich. That game ended 1–2 in front of a full house, which included approximately 20,000–22,000 fans who had traveled to Munich from Dresden for the game. Dresden thus became the first visiting team to win a league game at the Allianz Arena.
The first goal against FC Bayern Munich in a league game at Allianz Arena was scored by Miroslav Klose of Werder Bremen on 5 November 2005 in the first minute of play. This was to remain the visitors' only goal that day, as the game went to the FC Bayern, with a final score of 3–1.
FC Bayern broke its consecutive sell-out record by selling out each of its first ten home games at Allianz Arena.
International tournament matches
UEFA Champions League finals
2006 FIFA World Cup
The stadium was one of the venues for the 2006 FIFA World Cup. However, due to sponsorship contracts, the arena was called FIFA World Cup Stadium Munich during the World Cup.
The following games were played at the stadium during the World Cup of 2006:
UEFA Euro 2020
UEFA Euro 2024
The stadium hosted four group-stage matches (including the opening match), one round-of-16 match and one semi-final at UEFA Euro 2024.
Other uses
American football
On 9 February 2022, it was announced that the Allianz Arena would host a regular-season game between the Seattle Seahawks and the Tampa Bay Buccaneers as part of the NFL International Series. The Buccaneers, the designated home team, defeated the Seahawks 21–16 in front of 69,811 fans in the first regular-season National Football League game played in Germany. The Allianz Arena hosted its second NFL International Series regular-season game on 10 November 2024, when the Carolina Panthers defeated the New York Giants 20–17 in overtime in front of 70,132 fans.
See also
List of stadiums
NFL International Series
References
External links
Official website of the Allianz Arena: its Facts and Figures section provides details such as the amount of concrete used, the composition of the facade, and the facade lighting
Seat Plan of the Allianz Arena
Allianz Arena – video
Allianz Arena Guide and Images
2005 establishments in Germany
Allianz
FC Bayern Munich
Herzog & de Meuron buildings
High-tech architecture
Lattice shell structures
Modernist architecture in Germany
Postmodern architecture
Sports venues completed in 2005
Football venues in Munich
Tourist attractions in Munich
TSV 1860 Munich
National Football League venues
Sports venues in Bavaria | Allianz Arena | Engineering | 3,783 |
56,398,428 | https://en.wikipedia.org/wiki/List%20of%20Olmsted%20works | The landscape architecture firm of Frederick Law Olmsted, and later of his sons John Charles Olmsted and Frederick Law Olmsted Jr. (known as the Olmsted Brothers), produced designs and plans for hundreds of parks, campuses and other projects throughout the United States and Canada. Together, these works totaled 355. This is a non-exhaustive list of those projects.
Frederick Law Olmsted Sr.
Academic campuses
Frederick Law Olmsted Sr. designed numerous school and college campuses between 1857 and 1895. Some of the most famous done while he headed his firm are listed here. Projects continuing past Olmsted's retirement in 1895 were completed by his sons, the Olmsted Brothers.
American University Main Campus, Washington, D.C.
Berwick Academy, South Berwick, Maine (1894)
Bryn Mawr College, Bryn Mawr, Pennsylvania (1885)
Cornell University, Ithaca, New York (1867–1873)
Fairleigh Dickinson University, Madison, New Jersey
Gallaudet University, Washington, D.C. (1866)
Groton School, Groton, Massachusetts (1884–1904)
Lawrenceville School, Lawrenceville, New Jersey (1883–1901)
Manhattanville College, Purchase, New York
Mount Holyoke College, South Hadley, Massachusetts
Noble and Greenough School, Dedham, Massachusetts
Phillips Academy, Andover, Massachusetts (1891–1965)
Pomfret School, Pomfret, Connecticut
St. Albans School (Washington, D.C.)
Smith College, Northampton, Massachusetts (1891–1909)
The Southern Baptist Theological Seminary, Louisville, KY
Stanford University, Palo Alto, California, Main Quad (1887–1906) and campus master plan (1886–1914)
Trinity College, Hartford, Connecticut (1872–1894)
University of California, Berkeley, Berkeley, California, master plan (1865)
University of Chicago, Chicago, Illinois
University of Maine, Orono, Maine
University of Rochester, Rochester, New York
Washington University in St. Louis, St. Louis, Missouri (1865–1899)
Wellesley College, Wellesley, Massachusetts
Yale University, New Haven, Connecticut (1874–1881)
Selected private and civic designs
By Frederick Law Olmsted Sr.:
Olmsted Brothers
After the retirement of Frederick Law Olmsted Sr. in 1895, the firm was managed by John Charles Olmsted and Frederick Law Olmsted Jr., as Olmsted and Olmsted, Olmsted Olmsted and Eliot, and Olmsted Brothers. Works from this period, which spanned from 1895 to 1950, are often misattributed to Frederick Sr. They include:
Academic campuses
Alabama A&M University, Normal, Alabama
Bryn Mawr College, Bryn Mawr, Pennsylvania (1895–1927)
Chatham University, Pittsburgh, Pennsylvania
Denison University, Granville, Ohio (1916)
Eastern Kentucky University, Richmond, Kentucky
Fisk University, Nashville, Tennessee (1929-1933)
Florence State Teachers College, Florence, Alabama (University of North Alabama)
Grove City College, Grove City, Pennsylvania (1929)
Harvard Business School, Allston, Massachusetts (1925–1931)
Haverford College, Haverford, Pennsylvania (1925–1932)
Huntingdon College campus, Montgomery, Alabama
Indiana University, Bloomington, Indiana (1929–1936)
Iowa State University Ames, Iowa (1906)
Johns Hopkins University, Baltimore, Maryland (1903–1919)
Lafayette College, Easton, Pennsylvania (1909)
Lincoln Institute, Lincoln Ridge, Kentucky (1911)
Louisiana State University, Baton Rouge, Louisiana
Morehead State University, Morehead, Kentucky (1923)
Middlesex School, Concord, Massachusetts (1901)
Mount Holyoke College, South Hadley, Massachusetts (1896–1922)
Newton Country Day School, Newton, Massachusetts (1927)
Oberlin College, Oberlin, Ohio (1903)
Ohio State University, Columbus, Ohio (1909)
Oregon State University, Corvallis, Oregon (1909)
Roslyn High School, Roslyn, New York (1920s)
Saint Joseph College, West Hartford, Connecticut
Samford University, Homewood, Alabama
Stanford University, Stanford, California (1886–1914)
Troy University, Troy, Alabama
Tufts University, Medford, Massachusetts (1920)
University of Chicago, Chicago, Illinois (1901–1910)
University of Florida, Gainesville, Florida (1925)
University of Idaho, Moscow, Idaho (1908)
University of Montevallo, Montevallo, Alabama
University of Maine, Orono, Maine (1932)
University of Notre Dame, Notre Dame, Indiana (1929–1932)
University of Rhode Island, Kingston, Rhode Island (1894–1903)
University of Washington, Seattle, Washington (1902–1920)
Vassar College, Poughkeepsie, New York (1896–1932)
Western Michigan University Main Campus, Kalamazoo, Michigan (1904)
Williams College, Williamstown, Massachusetts (1902–1912)
Selected private and civic designs
By Olmsted and Olmsted, Olmsted Olmsted and Eliot, and Olmsted Brothers:
Adair Country Inn gardens, Bethlehem, New Hampshire
Audubon Park, New Orleans, Louisiana
Ashland Park, residential neighborhood built around Ashland, The Henry Clay Estate in Lexington, Kentucky
Bloomfield, Villanova, PA. Private house of George McFadden.
Branch Brook Park, Newark, New Jersey
The British Properties, Vancouver, British Columbia, Canada
Brookdale Park, Bloomfield & Montclair, New Jersey
Cambridge American Cemetery and Memorial a memorial for American World War II servicemen in Cambridgeshire, near Cambridge, England
Caracas Country Club (1928), Alta Florida, Capital District, Caracas, Venezuela
Carroll Park, Baltimore, Maryland
Cedar Brook Park, Shakespeare Garden, Plainfield, New Jersey
Cleveland Metroparks System, in the Greater Cleveland area, Ohio
Craig Colony for Epileptics, Sonyea, New York
Crocker Field, Fitchburg, Massachusetts
Deering Oaks, Portland, Maine
The Gardens at Dey Mansion Washington's Headquarters, Wayne, New Jersey
Druid Hills, Atlanta, Georgia
Dunn Gardens, Seattle, Washington
Eastern Promenade, Portland, Maine
Elm Bank Horticulture Center, Wellesley, Massachusetts
Fairmont Park, Riverside, California
First Presbyterian Church of Far Rockaway, Queens, New York
Fort Tryon Park, New York City
Franklin Delano Roosevelt Park, Philadelphia, Pennsylvania (originally League Island Park)
Fresh Pond, Cambridge, Massachusetts
Garret Mountain Reservation, Woodland Park, New Jersey
Goffle Brook Park, Hawthorne, New Jersey
Grover Cleveland Park, Caldwell, New Jersey
Hermann Dudley Murphy House, Lexington, Massachusetts
High Point Park, Montague, New Jersey
High Rock Reservation, a park in Lynn, Massachusetts
Homelands Neighborhood, Springfield, Massachusetts
"New" Katonah, Katonah, New York
Kentucky State Capitol Grounds, Frankfort, Kentucky
Kohler (Village of), Wisconsin
Kykuit gardens, Rockefeller family estate, Mount Pleasant (from 1897 but largely revised by later architects)
Leimert Park Neighborhood, Los Angeles
Locust Valley Cemetery, Locust Valley, New York
Metro Parks, Summit County, Ohio
Manito Park and Botanical Gardens, Spokane, Washington
Marconi Plaza (originally Oregon Plaza)
Marquette Park, Chicago, Illinois
Memorial Park (Jacksonville), Florida
Memorial Park, Maplewood, New Jersey
Mill Creek Park, Youngstown, Ohio
Munsey Park, New York
North Park, Fall River, Massachusetts, 1901
Otto Kahn Estate, Cold Spring Hills, New York
Oldfields-Lilly House and Gardens, a National Historic Landmark, originally the Hugh Landon estate (Olmsted job #6883, 1920–1927), Indianapolis, Indiana
Passaic County Parks System
Piedmont Park, Atlanta, Georgia
Pittsburgh downtown ("industrial district") and thoroughfares, 1909
Planting Fields, Oyster Bay, Long Island, New York
Pope Park (Hartford, Connecticut)
The Portland park plan, Portland, Oregon
Plan for Los Angeles Region, with Harland Bartholomew & Associates (1930)
Preakness Valley Park, Wayne, New Jersey
Prouty Garden, Boston Children's Hospital, Boston. This garden is at risk of being destroyed for redevelopment.
Pulaski Park, Holyoke, Massachusetts
Rahway River Parkway Union County, New Jersey
Riverside Park, Hartford, Connecticut
Rancho Los Alamitos Gardens, Long Beach, California
Riverbend, Walter J. Kohler, Sr. estate grounds, Kohler, Wisconsin
Seattle Park System
Southern Boulevard Parkway (Philadelphia, Pennsylvania)
South Mountain Reservation, Maplewood, Millburn, South Orange, West Orange, New Jersey
South Park (now Kennedy Park), Fall River, Massachusetts, 1904
Spokane, Washington city parks
Springdale Park, Holyoke, Massachusetts
Thompson Park and roadways, Watertown, New York
Union County, New Jersey park system
Utica, New York Parks and Parkway System (1908–1914)
Landscape of the Town of Vandergrift, Pennsylvania (1895)
Verona Park, Verona, New Jersey
Wade Lagoon, on University Circle, Cleveland
The garden at Welwyn Preserve, Long Island, New York
Warinanco Park, Roselle, New Jersey
Washington State Capitol campus, Olympia, Washington
Watsessing Park, Bloomfield, New Jersey
Weasel Brook Park, Clifton, New Jersey
Weequahic Park, Weequahic section of Newark, New Jersey
The Highlands Neighborhood, Seattle
Barberrys, Nelson Doubleday house, Mill Neck, New York (1919–1924)
"Allgates," Horatio Gates Lloyd house, Coopertown Road, Haverford, Pennsylvania (1911–1915)
References
Olmsted | List of Olmsted works | Engineering | 1,913 |
21,261,020 | https://en.wikipedia.org/wiki/Mushrooms%20Demystified | Mushrooms Demystified: A Comprehensive Guide to the Fleshy Fungi is a mushroom field and identification guide by American mycologist David Arora, first published in 1979 and republished in 1986. All That the Rain Promises and More…: A Hip Pocket Guide to Western Mushrooms, a “field companion” to Mushrooms Demystified with cross-references to that volume, was published in 1991.
References
1979 non-fiction books
Ten Speed Press books | Mushrooms Demystified | Biology | 87 |
11,437,627 | https://en.wikipedia.org/wiki/Davidiella%20tassiana | Davidiella tassiana is a fungal plant pathogen infecting several hosts, including Iris barnumiae subsp. demawendica in Iran.
Infected plant species
Davidiella tassiana has a wide range of host species. These include:
Agrostis canina
Agrostis stolonifera
Anthoxanthum odoratum
Arabis petraea
Bistorta vivipara
Carex bigelowii
Carex capitata
Draba incana
Draba nivalis
Deschampsia caespitosa
Epilobium latifolium
Galium normanii
Gentianella amarella ssp. septentrionalis
Hierochloe odorata
Juncus alpinus
Juncus articulatus
Juncus triglumis
Luzula arcuata
Poa alpina
Poa glauca
Poa nemoralis
Potentilla palustris
Puccinellia distans
Ranunculus glacialis
Rhodiola rosea
Saxifraga caespitosa
Saxifraga hirculus
Thalictrum alpinum
Thymus praecox ssp. arcticus
References
Fungal plant pathogens and diseases
Food plant pathogens and diseases
Davidiellaceae
Fungi described in 1865
Taxa named by Giuseppe De Notaris
Fungus species | Davidiella tassiana | Biology | 260 |
55,466,103 | https://en.wikipedia.org/wiki/Witching%20hour | In folklore, the witching hour or devil's hour is a time of night that is associated with supernatural events, whereby witches, demons and ghosts are thought to appear and be at their most powerful. Definitions vary, and include the hour immediately after midnight and the time between 3:00am and 4:00am. The term now has a widespread colloquial and idiomatic usage that is associated with human physiology and behaviour to more superstitious phenomena, such as luck.
Origins
The phrase "witching hour" began at least as early as 1762, when it appeared in Elizabeth Carolina Keene's Miscellaneous Poems. It alludes to Hamlet's line "Tis now the very witching time of night, When Churchyards yawne, and hell it selfe breakes out Contagion to this world."
Time
There are multiple times that can be considered the witching hour. Some claim the time is between 12:00 am and 1:00 am, while others claim there is increased supernatural activity between sunset and sunrise. The New Zealand Oxford Dictionary identifies midnight as the time when witches are supposedly active.
At the time the term originated, many people's sleep schedules meant they were awake during the middle of the night. Nonetheless, psychological literature suggests that apparitional experiences and sensed presences are most common between 2:00 am and 4:00 am, corresponding with a 3:00 am peak in the amount of melatonin in the body.
Physiology
The idea of the witching hour may stem from the human sleep cycle and circadian rhythm: the body is typically in REM sleep at that time, when the heart rate slows, body temperature falls, and breathing and blood pressure become irregular. Sudden awakening from REM sleep can cause agitation, fear and disorientation.
During REM sleep, which usually occurs within the witching hour, unpleasant and frightful sleep disturbances such as parasomnias can be experienced, which include nightmares, rapid eye movement sleep behavior disorder, night terrors, sleepwalking, homicidal sleepwalking and sleep paralysis.
During the night, and well into the witching hour, symptoms of illnesses and conditions such as lung disease, asthma, influenza and the common cold tend to worsen, because there is less cortisol in the blood late at night and especially during sleep. With less cortisol, the immune system becomes very active: white blood cells fight infections in the body during sleep, thereby intensifying symptoms such as fever, nasal congestion, cough, chills and sweating.
Colloquial usage
The term may be used colloquially to refer to any period of bad luck, or in which something bad is seen as having a greater likelihood of occurring.
In investing, it is the last hour of stock trading between 3:00 pm (when the U.S. bond market closes) and 4:00 pm EST (when the U.S. stock market closes), a period of above-average volatility.
The term can also refer to a phenomenon where infants or young children cry for an extended period of time during the hour (or two) before their bedtime, becoming irritable and unwieldy with no known cause.
To reduce gun violence, curfews have been enforced in Washington, D.C. between 11:00 pm and 12:00 am to lower juvenile gunfire incidents. Influenced by the idea of the "witching hour", the weekday period from 11:00 pm to 11:59 pm is referred to as the "switching hour". Violent crimes such as rape and sexual assault peak around midnight on average, while DUI police incidents tend to cluster around 2:00 am.
See also
Brahmamuhurtha
Canonical hours
Exorcism in Christianity
Sacramentals
Ushi no toki mairi
References
Witchcraft in folklore and mythology
Demons in Christianity
Sleep in mythology and folklore
Canonical hours
Supernatural
Night in culture
English-language idioms
Sleep
Circadian rhythm
Human behavior
Superstitions
Supernatural legends
Satanism in popular culture | Witching hour | Biology | 836 |
1,248,138 | https://en.wikipedia.org/wiki/Manhattan%20wiring | Manhattan wiring (also known as right-angle wiring) is a technique for laying out circuits in computer engineering. Inputs to a circuit (specifically, the interconnects from the inputs) are aligned into a grid, and the circuit "taps" (connects to) them perpendicularly. This may be done either virtually or physically. That is, it may be shown this way only in the documentation and the actual circuit may look nothing like that; or it may be laid out that way on the physical chip. Typically, separate lanes are used for the inverted inputs and are tapped separately.
The name Manhattan wiring derives from its Manhattan geometry: just as the streets of Manhattan, New York criss-cross in a regular grid, circuit diagrams drawn in this style take on a similar grid-like appearance.
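As a brief illustration of the geometry the name refers to, the sketch below computes the length of a wire routed only along horizontal and vertical grid tracks (the Manhattan distance); the function name and example coordinates are hypothetical, not part of any real routing tool.

```python
def manhattan_wire_length(a, b):
    """Length of a wire routed only along horizontal and vertical
    grid tracks between points a and b (the Manhattan distance)."""
    (x1, y1), (x2, y2) = a, b
    return abs(x1 - x2) + abs(y1 - y2)

# A diagonal connection spanning 3 x 4 grid units needs 7 units of
# rectilinear wire, versus 5 units for an unconstrained straight line.
print(manhattan_wire_length((0, 0), (3, 4)))  # 7
```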
Manhattan wiring is often used to represent a programmable logic array.
Alternatives include X-architecture wiring, or 45° wiring, and Y-architecture wiring (using wires running in the 0°, 120°, and 240° directions).
See also
Manhattan metric
References
Electronic circuits | Manhattan wiring | Engineering | 215 |
651,372 | https://en.wikipedia.org/wiki/Evaporative%20cooler | An evaporative cooler (also known as evaporative air conditioner, swamp cooler, swamp box, desert cooler and wet air cooler) is a device that cools air through the evaporation of water. Evaporative cooling differs from other air conditioning systems, which use vapor-compression or absorption refrigeration cycles. Evaporative cooling exploits the fact that water will absorb a relatively large amount of heat in order to evaporate (that is, it has a large enthalpy of vaporization). The temperature of dry air can be dropped significantly through the phase transition of liquid water to water vapor (evaporation). This can cool air using much less energy than refrigeration. In extremely dry climates, evaporative cooling of air has the added benefit of conditioning the air with more moisture for the comfort of building occupants.
The cooling potential for evaporative cooling is dependent on the wet-bulb depression, the difference between dry-bulb temperature and wet-bulb temperature (see relative humidity). In arid climates, evaporative cooling can reduce energy consumption and total equipment for conditioning as an alternative to compressor-based cooling. In climates not considered arid, indirect evaporative cooling can still take advantage of the evaporative cooling process without increasing humidity. Passive evaporative cooling strategies can offer the same benefits as mechanical evaporative cooling systems without the complexity of equipment and ductwork.
History
An earlier form of evaporative cooling, the windcatcher, was first used in ancient Egypt and Persia thousands of years ago in the form of wind shafts on the roof. They caught the wind, passed it over subterranean water in a qanat and discharged the cooled air into the building. Modern Iranians have widely adopted powered evaporative coolers.
The evaporative cooler was the subject of numerous US patents in the 20th century; many of these, starting in 1906, suggested or assumed the use of excelsior (wood wool) pads as the elements to bring a large volume of water in contact with moving air to allow evaporation to occur. A typical design, as shown in a 1945 patent, includes a water reservoir (usually with level controlled by a float valve), a pump to circulate water over the excelsior pads and a centrifugal fan to draw air through the pads and into the house. This design and this material remain dominant in evaporative coolers in the American Southwest, where they are also used to increase humidity. In the United States, the use of the term swamp cooler may be due to the odor of algae produced by early units.
Externally mounted evaporative cooling devices (car coolers) were used in some automobiles to cool interior air—often as aftermarket accessories—until modern vapor-compression air conditioning became widely available.
Passive evaporative cooling techniques in buildings have been a feature of desert architecture for centuries, but Western acceptance, study, innovation, and commercial application are all relatively recent. In 1974, William H. Goettl noticed how evaporative cooling technology works in arid climates, speculated that a combination unit could be more effective, and invented the "High Efficiency Astro Air Piggyback System", a combination refrigeration and evaporative cooling air conditioner. In 1986, University of Arizona researchers built a passive evaporative cooling tower, and performance data from this experimental facility in Tucson, Arizona became the foundation of evaporative cooling tower design guidelines.
Physical principles
Evaporative coolers lower the temperature of air using the principle of evaporative cooling, unlike typical air conditioning systems which use vapor-compression refrigeration or absorption refrigeration. Evaporative cooling is the conversion of liquid water into vapor using the thermal energy in the air, resulting in a lower air temperature. The energy needed to evaporate the water is taken from the air in the form of sensible heat, which affects the temperature of the air, and converted into latent heat, the energy present in the water vapor component of the air, whilst the air remains at a constant enthalpy value. This conversion of sensible heat to latent heat is known as an isenthalpic process because it occurs at a constant enthalpy value. Evaporative cooling therefore causes a drop in the temperature of air proportional to the sensible heat drop and an increase in humidity proportional to the latent heat gain. Evaporative cooling can be visualized using a psychrometric chart by finding the initial air condition and moving along a line of constant enthalpy toward a state of higher humidity.
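To make the isenthalpic trade-off concrete, the sketch below estimates the dry-bulb temperature drop for a given moisture gain; the latent-heat and specific-heat constants are typical textbook values assumed here for illustration, not figures from this article.

```python
H_FG = 2450.0   # latent heat of vaporization near room temperature, kJ/kg (assumed)
CP_AIR = 1.006  # specific heat of air, kJ/(kg*K) (assumed)

def isenthalpic_temp_drop(delta_w):
    """Approximate dry-bulb temperature drop (K) when the humidity ratio
    rises by delta_w (kg of water vapor per kg of dry air) at constant
    enthalpy: sensible heat lost equals latent heat gained."""
    return delta_w * H_FG / CP_AIR

# Evaporating 2 g of water into each kg of dry air cools it by ~4.9 K.
print(round(isenthalpic_temp_drop(0.002), 1))
```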
A simple example of natural evaporative cooling is perspiration, or sweat, secreted by the body, whose evaporation cools the body. The amount of heat transferred depends on the evaporation rate; however, for each kilogram of water vaporized, 2,257 kJ of energy (about 890 BTU per pound of pure water, at 95 °F (35 °C)) are transferred. The evaporation rate depends on the temperature and humidity of the air, which is why sweat accumulates more on humid days: it does not evaporate fast enough.
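Using the latent-heat figure quoted above, a minimal sketch of the cooling power delivered by a given evaporation rate; the 1 kg/h example rate is illustrative.

```python
H_FG_KJ_PER_KG = 2257.0  # latent heat of vaporization quoted above, kJ/kg

def evaporative_cooling_power_watts(kg_water_per_hour):
    """Heat removed (W) when water evaporates at the given rate."""
    return kg_water_per_hour * H_FG_KJ_PER_KG * 1000.0 / 3600.0

# Evaporating 1 kg (about 1 L) of water per hour removes ~627 W of heat.
print(round(evaporative_cooling_power_watts(1.0)))
```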
Vapor-compression refrigeration uses evaporative cooling, but the evaporated vapor is contained within a sealed system and is then compressed, ready to evaporate again, using energy to do so. A simple evaporative cooler's water is instead evaporated into the environment and not recovered. In an interior space-cooling unit, the evaporated water is introduced into the space along with the now-cooled air; in an evaporative tower, the evaporated water is carried off in the airflow exhaust.
Other types of phase-change cooling
A closely related process, sublimation cooling, differs from evaporative cooling in that a phase transition from solid to vapor, rather than liquid to vapor, occurs.
Sublimation cooling has been observed to operate on a planetary scale on the planetoid Pluto, where it has been called an anti-greenhouse effect.
Another application of a phase change to cooling is the "self-refrigerating" beverage can. A separate compartment inside the can contains a desiccant and a liquid. Just before drinking, a tab is pulled so that the desiccant comes into contact with the liquid and dissolves. As it does so, it absorbs an amount of heat energy called the latent heat of fusion. Evaporative cooling works with the phase change of liquid into vapor and the latent heat of vaporization, but the self-cooling can uses a change from solid to liquid, and the latent heat of fusion, to achieve the same result.
Applications
Before the advent of modern refrigeration, evaporative cooling was used for millennia, for instance in qanats, windcatchers, and mashrabiyas. A porous earthenware vessel would cool water by evaporation through its walls; frescoes from about 2500 BCE show slaves fanning jars of water to cool rooms. Alternatively, a bowl filled with milk or butter could be placed in another bowl filled with water, all being covered with a wet cloth resting in the water, to keep the milk or butter as fresh as possible (see zeer, botijo and Coolgardie safe).
Evaporative cooling is a common form of cooling buildings for thermal comfort since it is relatively cheap and requires less energy than other forms of cooling.
The figure showing the Salt Lake City weather data represents the typical summer climate (June to September). The colored lines illustrate the potential of direct and indirect evaporative cooling strategies to expand the comfort range in summer. This is mainly explained by the combination of higher air speed and, where the climate permits a direct evaporative cooling strategy, elevated indoor humidity. Evaporative cooling strategies that involve humidifying the air should be implemented in dry conditions, where the increase in moisture content stays below recommendations for occupants' comfort and indoor air quality. Passive cooling towers lack the control that traditional HVAC systems offer occupants, but the additional air movement they provide can improve occupant comfort.
Evaporative cooling is most effective when the relative humidity is on the low side, limiting its popularity to dry climates. Evaporative cooling raises the internal humidity level significantly, which desert inhabitants may appreciate, as the moist air re-hydrates dry skin and sinuses. Assessing typical climate data is therefore an essential step in determining the potential of evaporative cooling strategies for a building. The three most important climate considerations are the dry-bulb temperature, the wet-bulb temperature, and the wet-bulb depression during a typical summer day; it is important to determine whether the wet-bulb depression can provide sufficient cooling on such a day. By subtracting the wet-bulb depression from the outside dry-bulb temperature, one can estimate the approximate air temperature leaving the evaporative cooler, bearing in mind that how closely the exiting air approaches the wet-bulb temperature depends on the saturation efficiency. A general recommendation for applying direct evaporative cooling is to implement it in places where the wet-bulb temperature of the outdoor air does not exceed . However, in the example of Salt Lake City, the upper limit for direct evaporative cooling on the psychrometric chart is . Despite the lower temperature, evaporative cooling is suitable for climates similar to Salt Lake City's.
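The screening arithmetic just described, removing a fraction of the wet-bulb depression from the dry-bulb temperature, can be sketched as follows; the 80% effectiveness and the example temperatures are assumptions chosen for illustration.

```python
def supply_air_temp(t_db, t_wb, effectiveness=0.8):
    """Estimate an evaporative cooler's leaving air temperature (deg C)
    by removing a fraction of the wet-bulb depression from the dry bulb."""
    return t_db - effectiveness * (t_db - t_wb)

# Hypothetical hot, dry afternoon: 35 C dry bulb, 18 C wet bulb.
# The depression is 17 K, so an 80%-effective cooler delivers ~21.4 C air.
print(round(supply_air_temp(35.0, 18.0), 1))
```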
Evaporative cooling is especially well suited for climates where the air is hot and humidity is low. In the United States, the western and mountain states are good locations, with evaporative coolers prevalent in cities like Albuquerque, Denver, El Paso, Fresno, Salt Lake City, and Tucson. Evaporative air conditioning is also popular and well-suited to the southern (temperate) part of Australia. In dry, arid climates, the installation and operating cost of an evaporative cooler can be much lower than that of refrigerative air conditioning, often by 80% or so. However, evaporative cooling and vapor-compression air conditioning are sometimes used in combination to yield optimal cooling results. Some evaporative coolers may also serve as humidifiers in the heating season. In regions that are mostly arid, short periods of high humidity may prevent evaporative cooling from being an effective cooling strategy. An example of this event is the monsoon season in New Mexico and central and southern Arizona in July and August.
In locations with moderate humidity there are many cost-effective uses for evaporative cooling, in addition to their widespread use in dry climates. For example, industrial plants, commercial kitchens, laundries, dry cleaners, greenhouses, spot cooling (loading docks, warehouses, factories, construction sites, athletic events, workshops, garages, and kennels) and confinement farming (poultry ranches, hog, and dairy) often employ evaporative cooling. In highly humid climates, evaporative cooling may have little thermal comfort benefit beyond the increased ventilation and air movement it provides.
Other examples
Trees transpire large amounts of water through pores in their leaves called stomata, and through this process of evaporative cooling, forests interact with climate at local and global scales.
Simple evaporative cooling devices such as evaporative cooling chambers (ECCs) and clay pot coolers, or pot-in-pot refrigerators, are simple and inexpensive ways to keep vegetables fresh without the use of electricity. Several hot and dry regions throughout the world could potentially benefit from evaporative cooling, including North Africa, the Sahel region of Africa, the Horn of Africa, southern Africa, the Middle East, arid regions of South Asia, and Australia. Benefits of evaporative cooling chambers for many rural communities in these regions include reduced post-harvest loss, less time spent traveling to the market, monetary savings, and increased availability of vegetables for consumption.
Evaporative cooling is commonly used in cryogenic applications. The vapor above a reservoir of cryogenic liquid is pumped away, and the liquid continuously evaporates as long as the liquid's vapor pressure is significant. Evaporative cooling of ordinary helium forms a 1-K pot, which can cool to at least 1.2 K. Evaporative cooling of helium-3 can provide temperatures below 300 mK. These techniques can be used to make cryocoolers, or as components of lower-temperature cryostats such as dilution refrigerators. As the temperature decreases, the vapor pressure of the liquid also falls, and cooling becomes less effective. This sets a lower limit to the temperature attainable with a given liquid.
Evaporative cooling is also the last cooling step used to reach the ultra-low temperatures required for Bose–Einstein condensation (BEC). Here, so-called forced evaporative cooling is used to selectively remove high-energy ("hot") atoms from an atom cloud until the remaining cloud is cooled below the BEC transition temperature. For a cloud of one million alkali atoms, this temperature is about 1 μK.
Although robotic spacecraft use thermal radiation almost exclusively, many crewed spacecraft have short missions that permit open-cycle evaporative cooling. Examples include the Space Shuttle, the Apollo command and service module (CSM), lunar module and portable life support system. The Apollo CSM and the Space Shuttle also had radiators, and the Shuttle could evaporate ammonia as well as water. The Apollo spacecraft used sublimators, compact and largely passive devices that dump waste heat in water vapor (steam) that is vented to space. When liquid water is exposed to vacuum it boils vigorously, carrying away enough heat to freeze the remainder to ice that covers the sublimator and automatically regulates the feedwater flow depending on the heat load. The water expended is often available in surplus from the fuel cells used by many crewed spacecraft to produce electricity.
Designs
Most designs take advantage of the fact that water has one of the highest known enthalpy of vaporization (latent heat of vaporization) values of any common substance. Because of this, evaporative coolers use only a fraction of the energy of vapor-compression or absorption air conditioning systems. Except in very dry climates, the single-stage (direct) cooler can increase relative humidity (RH) to a level that makes occupants uncomfortable. Indirect and two-stage evaporative coolers keep the RH lower.
Direct evaporative cooling
Direct evaporative cooling (open circuit) is used to lower the temperature and increase the humidity of air by using the latent heat of evaporation, changing liquid water to water vapor. In this process, the total energy in the air does not change: warm dry air is changed to cool moist air, as the heat of the outside air is used to evaporate water. The RH increases to 70 to 90%, which reduces the cooling effect of human perspiration. The moist air has to be continually released outside, or else the air becomes saturated and evaporation stops.
A mechanical direct evaporative cooler unit uses a fan to draw air through a wetted membrane, or pad, which provides a large surface area for the evaporation of water into the air. Water is sprayed at the top of the pad so it can drip down and continually keep the membrane saturated. Any excess water that drips from the bottom of the membrane is collected in a pan and recirculated to the top. Single-stage direct evaporative coolers are typically small, as they consist only of the membrane, water pump, and centrifugal fan. The mineral content of the municipal water supply causes scaling on the membrane, which leads to clogging over its life; depending on the mineral content and the evaporation rate, regular cleaning and maintenance are required to ensure optimal performance. Generally, supply air from a single-stage evaporative cooler needs to be exhausted directly (once-through flow). A few designs have been conceived to make further use of the energy in the exhaust air, such as directing it between the panes of double-glazed windows, thereby reducing the solar energy absorbed through the glazing. Compared to the energy required to achieve the equivalent cooling load with a compressor, single-stage evaporative coolers consume less energy.
Passive direct evaporative cooling can occur anywhere that the evaporatively cooled water can cool a space without the assistance of a fan. This can be achieved through the use of fountains or more architectural designs such as the evaporative downdraft cooling tower, also called a "passive cooling tower". The passive cooling tower design allows outside air to flow in through the top of a tower that is constructed within or next to the building. The outside air comes in contact with water inside the tower either through a wetted membrane or a mister. As water evaporates in the outside air, the air becomes cooler and less buoyant and creates a downward flow in the tower. At the bottom of the tower, an outlet allows the cooler air into the interior. Similar to mechanical evaporative coolers, towers can be an attractive low-energy solution for hot and dry climate as they only require a water pump to raise water to the top of the tower.
Energy savings from a passive direct evaporative cooling strategy depend on the climate and the heat load. For arid climates with a large wet-bulb depression, cooling towers can provide enough cooling during summer design conditions to be net zero. For example, a 371 m² (4,000 ft²) retail store in Tucson, Arizona with a sensible heat gain of 29.3 kW (100,000 Btu/h) can be cooled entirely by two passive cooling towers providing 11,890 m³/h (7,000 cfm) each.
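As a rough consistency check on the sizing example above, one can use the common HVAC rule of thumb that sensible load (Btu/h) ≈ 1.08 × airflow (cfm) × ΔT (°F) for standard air; the check itself, and the 1.08 factor, are our illustrative assumptions rather than figures from the source.

```python
SENSIBLE_FACTOR = 1.08  # Btu/h per (cfm * degF), standard-air rule of thumb

def required_delta_t_f(load_btu_h, airflow_cfm):
    """Supply-to-room temperature difference (deg F) needed to meet the
    given sensible load at the given airflow."""
    return load_btu_h / (SENSIBLE_FACTOR * airflow_cfm)

# Two towers at 7,000 cfm each against a 100,000 Btu/h sensible load:
# the towers only need to deliver air ~6.6 degF below room temperature,
# well within reach of evaporative cooling in Tucson's dry climate.
print(round(required_delta_t_f(100_000, 2 * 7_000), 1))
```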
For the Zion National Park visitors' center, which uses two passive cooling towers, the cooling energy intensity was 14.5 MJ/m² (1.28 kBtu/ft²), 77% less than a typical building in the western United States, which uses 62.5 MJ/m² (5.5 kBtu/ft²). A study of field-performance results in Kuwait revealed that the power requirements for an evaporative cooler are approximately 75% less than those of a conventional packaged-unit air conditioner.
Indirect evaporative cooling
Indirect evaporative cooling (closed circuit) is a cooling process that uses direct evaporative cooling together with a heat exchanger to transfer the cooling effect to the supply air. The cooled moist air from the direct evaporative cooling process never comes in direct contact with the conditioned supply air. The moist air stream is released outside or used to cool other external devices, such as solar cells, which are more efficient when kept cool. This avoids adding humidity to enclosed spaces, which can be unsuitable for residential systems.
Maisotsenko cycle
Some indirect cooler manufacturers use the Maisotsenko cycle (M-Cycle), named after its inventor, Professor Valeriy Maisotsenko. It employs an iterative (multi-step) heat exchanger made of a thin recyclable membrane that can reduce the temperature of the product air to below the wet-bulb temperature and approach the dew point. Testing by the US Department of Energy found that a hybrid M-Cycle unit combined with a standard compression refrigeration system significantly improved efficiency, by between 150 and 400%, but only in the dry western half of the US; it was not recommended for the much more humid eastern half. The evaluation found that the system's water consumption of 2–3 gallons per ton of cooling (12,000 BTU/h) was roughly equal to the water consumption of new high-efficiency power plants. This means the higher efficiency can be used to reduce load on the grid without requiring additional water, and may actually reduce water usage if the power source does not have a high-efficiency cooling system.
An M-Cycle-based system built by Coolerado is currently used to cool the data center of NASA's National Snow and Ice Data Center (NSIDC). The facility is air-cooled below 70 degrees Fahrenheit and uses the Coolerado system above that temperature. This is possible because the system's air handler uses fresh outside air, allowing it to draw on cool ambient air automatically when conditions allow and avoiding running the refrigeration system unnecessarily. It is powered by a solar panel array that also serves as backup power in case of mains power loss.
The system has very high efficiency but, like other evaporative cooling systems, is constrained by ambient humidity levels, which has limited its adoption for residential use. It may be used as supplementary cooling during periods of extreme heat without placing significant additional burden on electrical infrastructure. If a location has surplus water or desalination capacity, affordable M-Cycle units can use that water to reduce excessive electrical demand. Given the high cost of conventional air-conditioning units and the severe limitations of many electrical utility systems, M-Cycle units may be the only cooling systems suitable for impoverished areas during periods of extremely high temperature and electrical demand. In developed areas, they may serve as supplemental backup systems in case of electrical overload and can boost the efficiency of existing conventional systems.
The M-Cycle is not limited to cooling systems and can be applied to various technologies from Stirling engines to Atmospheric water generators. For cooling applications it can be used in both cross flow and counterflow configurations. Counterflow was found to obtain lower temperatures more suitable for home cooling, but cross flow was found to have a higher coefficient of performance (COP), and is therefore better for large industrial installations.
Unlike traditional refrigeration techniques, small M-Cycle systems retain a high COP, as they do not require lift pumps or the other equipment needed for cooling towers. A 1.5 ton (5.3 kW) cooling system requires just 200 watts to run its fan, giving a COP of 26.4 and an EER rating of 90. This does not account for the energy required to purify or deliver the water; it is strictly the power needed to run the device once water is supplied. Though desalination also carries a cost, the latent heat of vaporization of water is nearly 100 times higher than the energy required to purify the water itself. Furthermore, the device has a maximum efficiency of 55%, so its actual COP is lower than this calculated value. Regardless of these losses, however, the effective COP remains significantly higher than that of a conventional cooling system, even if the water must first be purified by desalination. In areas where no water is available, the cycle can be paired with a desiccant to recover water using available heat sources, such as solar thermal energy.
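The COP and EER figures quoted above follow directly from the standard conversions 1 ton of refrigeration = 12,000 Btu/h ≈ 3,517 W; the worked check below is our illustration using those standard constants.

```python
TON_TO_BTU_H = 12_000  # Btu/h per ton of refrigeration (standard)
TON_TO_W = 3_517       # watts per ton of refrigeration (standard)

capacity_tons = 1.5
fan_power_w = 200

cop = capacity_tons * TON_TO_W / fan_power_w      # dimensionless, ~26.4
eer = capacity_tons * TON_TO_BTU_H / fan_power_w  # Btu/h per watt, ~90

print(round(cop, 1), round(eer))  # 26.4 90
```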
Theoretical designs
In the newer, yet-to-be-commercialized "cold-SNAP" design from Harvard's Wyss Institute, a 3D-printed ceramic conducts heat but is half-coated with a hydrophobic material that serves as a moisture barrier. While no moisture is added to the incoming air, its relative humidity (RH) does rise slightly, since cooler air holds less water vapor at saturation. Still, the relatively dry air produced by indirect evaporative cooling allows inhabitants' perspiration to evaporate more easily, increasing the relative effectiveness of the technique. Indirect cooling is an effective strategy for hot-humid climates that cannot afford to increase the moisture content of the supply air, owing to indoor air quality and human thermal comfort concerns.
Passive indirect evaporative cooling strategies are rare because they require an architectural element, such as a roof, to act as a heat exchanger. The element is sprayed with water and cooled as that water evaporates. Such strategies are uncommon because of their high water use, which also introduces the risk of water intrusion and damage to the building structure.
Hybrid designs
Two-stage evaporative cooling, or indirect-direct
In the first stage of a two-stage cooler, warm air is pre-cooled indirectly, without added humidity, by passing through a heat exchanger that is cooled by evaporation on its outside. In the direct stage, the pre-cooled air passes through a water-soaked pad and picks up humidity as it cools. Because the air supply is pre-cooled in the first stage, less humidity needs to be added in the direct stage to reach the desired temperature. The result, according to manufacturers, is cool air with an RH between 50 and 70%, depending on the climate, compared with a traditional system that produces about 70–80% relative humidity in the conditioned air.
Evaporative + conventional backup
In another hybrid design, direct or indirect cooling has been combined with vapor-compression or absorption air conditioning to increase the overall efficiency and/or to reduce the temperature below the wet-bulb limit.
Evaporative + passive daytime radiative + thermal insulation
Evaporative cooling can be combined with passive daytime radiative cooling and thermal insulation to enhance cooling power with zero energy use, albeit with an occasional water "re-charge" depending on the climatic zone of the installation. The system, developed by Lu et al. "consists of a solar reflector, a water-rich and IR-emitting evaporative layer, and a vapor-permeable, IR-transparent, and solar-reflecting insulation layer," with the top layer enabling "heat removal through both evaporation and radiation while resisting environmental heating." The system demonstrated 300% higher ambient cooling power than stand-alone passive daytime radiative cooling and could extend the shelf life of food by 40% in cool humid climates and 200% in dry climates without refrigeration.
Membrane dehumidification and evaporative cooling
Conventional evaporative cooling only works with dry air, e.g. when the humidity ratio is below roughly 0.02 kg of water per kg of air, and it requires substantial water input. To remove these limitations, dew-point evaporative cooling can be hybridized with membrane dehumidification, using membranes that pass water vapor but block air. Vapor passing through these membranes can be concentrated with a compressor so it condenses at warmer temperatures. The first configuration of this approach reused the dehumidification water to provide further evaporative cooling. Such an approach can fully supply its own water for evaporative cooling, outperforms a baseline desiccant-wheel system under all conditions, and outperforms vapor compression in dry conditions. It can also allow cooling at higher humidity without refrigerants, many of which have substantial greenhouse-warming potential.
Materials
Traditionally, evaporative cooler pads consist of excelsior (aspen wood fiber) inside a containment net, but more modern materials, such as some plastics and melamine paper, are entering use as cooler-pad media. Modern rigid media, commonly 8" or 12" thick, adds more moisture, and thus cools air more than typically much thinner aspen media. Another material which is sometimes used is corrugated cardboard.
Design considerations
Water use
In arid and semi-arid climates, the scarcity of water makes water consumption a concern in cooling-system design. According to the installed water meters, 420,938 L (111,200 gal) of water were consumed during 2002 by the two passive cooling towers at the Zion National Park visitors' center. Such concerns are qualified by experts who note that electricity generation itself usually requires a large amount of water; since evaporative coolers use far less electricity, their overall water use is comparable to that of chillers, at a lower overall cost.
Shading
Allowing direct solar exposure on any surface that can transfer heat into the air flowing through the unit will raise the temperature of the supply air. If heat is transferred to the air before it flows through the pads, or if sunlight warms the pads themselves, evaporation will increase, but the additional energy driving it is supplied by the sun rather than drawn from the ambient air. The result is not only a higher supply temperature but higher humidity as well, just as raising the inlet air temperature or heating the water before it is distributed over the pads would do. In addition, sunlight may degrade some media and other components of the cooler. Shading is therefore advisable in all circumstances, though the vertical orientation of the pads, together with insulation between the exterior and the upward-facing interior surfaces to minimise heat transfer, will usually suffice.
Mechanical systems
Apart from fans used in mechanical evaporative cooling, pumps are the only other piece of mechanical equipment required for the evaporative cooling process in both mechanical and passive applications. Pumps can be used for either recirculating the water to the wet media pad or providing water at very high pressure to a mister system for a passive cooling tower. Pump specifications will vary depending on evaporation rates and media pad area. The Zion National Park visitors' center uses a 250 W (1/3 HP) pump.
Exhaust
Exhaust ducts and/or open windows must be used at all times to allow air to continually escape the air-conditioned area. Otherwise, pressure develops and the fan or blower in the system is unable to push much air through the media and into the air-conditioned area. The evaporative system cannot function without exhausting the continuous supply of air from the air-conditioned area to the outside. By optimizing the placement of the cooled-air inlet, along with the layout of the house passages, related doors, and room windows, the system can be used most effectively to direct the cooled air to the required areas. A well-designed layout can effectively scavenge and expel the hot air from desired areas without the need for an above-ceiling ducted venting system. Continuous airflow is essential, so the exhaust windows or vents must not restrict the volume and passage of air being introduced by the evaporative cooling machine. One must also be mindful of the outside wind direction, as, for example, a strong hot southerly wind will slow or restrict the exhausted air from a south-facing window. It is always best to have the downwind windows open, while the upwind windows are closed.
Different types of installations
Typical installations
Typically, residential and industrial evaporative coolers use direct evaporation and can be described as an enclosed metal or plastic box with vented sides. Air is moved by a centrifugal fan or blower (usually driven by an electric motor with pulleys, known as "sheaves" in HVAC terminology, or by a direct-driven axial fan), and a water pump wets the evaporative cooling pads. The cooling units can be mounted on the roof (down draft, or downflow) or on exterior walls or windows (side draft, or horizontal flow) of buildings. To cool, the fan draws ambient air through vents on the unit's sides and through the damp pads. Heat in the air evaporates water from the pads, which are constantly re-dampened to continue the cooling process. The cooled, moist air is then delivered into the building via a vent in the roof or wall.
Because the cooling air originates outside the building, one or more large vents must exist to allow air to move from inside to outside. Air should only be allowed to pass once through the system, or the cooling effect will decrease. This is due to the air reaching the saturation point. Often 15 or so air changes per hour (ACHs) occur in spaces served by evaporative coolers, a relatively high rate of air exchange.
Evaporative (wet) cooling towers
Cooling towers are structures for cooling water or other heat transfer media to near-ambient wet-bulb temperature. Wet cooling towers operate on the evaporative cooling principle, but are optimized to cool the water rather than the air. Cooling towers can often be found on large buildings or on industrial sites. They transfer heat to the environment from chillers, industrial processes, or the Rankine power cycle, for example.
Misting systems
Misting systems work by forcing water via a high pressure pump and tubing through a brass and stainless steel mist nozzle that has an orifice of about 5 micrometres, thereby producing a micro-fine mist. The water droplets that create the mist are so small that they instantly flash-evaporate. Flash evaporation can reduce the surrounding air temperature by as much as 35 °F (20 °C) in just seconds. For patio systems, it is ideal to mount the mist line approximately 8 to 10 feet (2.4 to 3.0 m) above the ground for optimum cooling. Misting is used for applications such as flowerbeds, pets, livestock, kennels, insect control, odor control, zoos, veterinary clinics, cooling of produce, and greenhouses.
Misting fans
A misting fan is similar to a humidifier. A fan blows a fine mist of water into the air. If the air is not too humid, the water evaporates, absorbing heat from the air, allowing the misting fan to also work as an air cooler. A misting fan may be used outdoors, especially in a dry climate. It may also be used indoors.
Small portable battery-powered misting fans, consisting of an electric fan and a hand-operated water spray pump, are sold as novelty items. Their effectiveness in everyday use is unclear.
Performance
Understanding evaporative cooling performance requires an understanding of psychrometrics. Evaporative cooling performance is variable due to changes in external temperature and humidity level. A residential cooler should be able to decrease the temperature of air to within 5–7 °F (about 3–4 °C) of the wet-bulb temperature.
It is simple to predict cooler performance from standard weather report information. Because weather reports usually contain the dewpoint and relative humidity, but not the wet-bulb temperature, a psychrometric chart or a simple computer program must be used to compute the wet bulb temperature. Once the wet bulb temperature and the dry bulb temperature are identified, the cooling performance or leaving air temperature of the cooler may be determined.
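Where no psychrometric chart is at hand, the wet-bulb temperature can be estimated in a few lines of code. A minimal sketch in Python, using Stull's (2011) empirical fit, which is valid roughly for relative humidities of 5–99% and temperatures of −20 °C to 50 °C (the function name is illustrative):

    import math

    def wet_bulb_stull(t_c, rh_pct):
        """Approximate wet-bulb temperature (deg C) from dry-bulb temperature
        (deg C) and relative humidity (%), after Stull (2011)."""
        return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
                + math.atan(t_c + rh_pct)
                - math.atan(rh_pct - 1.676331)
                + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
                - 4.686035)

    print(round(wet_bulb_stull(32.0, 50.0), 1))  # about 24.0 deg C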
For direct evaporative cooling, the direct saturation efficiency, ε, measures to what extent the temperature of the air leaving the direct evaporative cooler approaches the wet-bulb temperature of the entering air. The direct saturation efficiency can be determined as follows:

ε = 100 × (T_edb − T_ldb) / (T_edb − T_ewb)

Where:
ε = direct evaporative cooling saturation efficiency (%)
T_edb = entering air dry-bulb temperature (°C)
T_ldb = leaving air dry-bulb temperature (°C)
T_ewb = entering air wet-bulb temperature (°C)
Evaporative media efficiency usually runs between 80% and 90%. The most efficient systems can close 95% of the gap between the dry-bulb and wet-bulb temperatures; the least efficient systems achieve only 50%. The evaporation efficiency drops very little over time.
Typical aspen pads used in residential evaporative coolers offer around 85% efficiency while CELdek type of evaporative media offer efficiencies of >90% depending on air velocity. The CELdek media is more often used in large commercial and industrial installations.
As an example, in Las Vegas, with a typical summer design day of 42 °C (108 °F) dry-bulb and 19 °C (66 °F) wet-bulb temperature, or about 8% relative humidity, the leaving air temperature of a residential cooler with 85% efficiency would be:

T_ldb = 42 °C − [(42 °C − 19 °C) × 85%] = 22.5 °C (72.5 °F)
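The same calculation is easily scripted. A minimal sketch in Python of the saturation-efficiency formula above (the function name is illustrative):

    def leaving_dry_bulb(t_edb, t_ewb, efficiency):
        """Leaving air temperature (deg C) of a direct evaporative cooler,
        given entering dry-bulb and wet-bulb temperatures (deg C) and the
        direct saturation efficiency as a fraction (0 to 1)."""
        return t_edb - (t_edb - t_ewb) * efficiency

    # Las Vegas design day: 42 C dry bulb, 19 C wet bulb, 85% efficient media
    print(leaving_dry_bulb(42.0, 19.0, 0.85))  # 22.45, i.e. about 22.5 deg C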
Alternatively, either of two methods can be used to estimate performance:
Use a psychrometric chart to calculate wet bulb temperature, and then add 5–7 °F as described above.
Use a rule of thumb which estimates that the wet bulb temperature is approximately equal to the ambient temperature, minus one third of the difference between the ambient temperature and the dew point. As before, add 5–7 °F as described above.
Some examples clarify this relationship:
At 32 °C (90 °F) and 15% relative humidity, air may be cooled to nearly 16 °C (60 °F). The dew point for these conditions is about 2 °C (36 °F).
At 32 °C and 50% relative humidity, air may be cooled to about 24 °C (75 °F). The dew point for these conditions is 20 °C (68 °F).
At 41 °C (105 °F) and 15% relative humidity, air may be cooled to nearly 21 °C (70 °F). The dew point for these conditions is about 9 °C (48 °F).
(Cooling examples extracted from the June 25, 2000 University of Idaho publication, "Homewise").
Because evaporative coolers perform best in dry conditions, they are widely used and most effective in arid, desert regions such as the southwestern USA, northern Mexico, and Rajasthan.
The same equation indicates why evaporative coolers are of limited use in highly humid environments: for example, a hot August day in Tokyo may be 30 °C (86 °F) with 85% relative humidity and 1,005 hPa pressure. This gives a dew point of about 27.2 °C (81 °F) and a wet-bulb temperature of about 27.9 °C (82 °F). According to the formula above, at 85% efficiency air may be cooled only down to about 28.2 °C (82.8 °F), which makes it quite impractical.
Comparison to other types of air conditioning
Comparison of evaporative cooling to refrigeration-based air conditioning:
Advantages
Less expensive to install and operate
Estimated cost for professional installation is about half or less that of central refrigerated air conditioning.
Estimated cost of operation is 1/8 that of refrigerated air conditioning.
No power spike when turned on due to lack of a compressor.
Power consumption is limited to the fan and water pump, which have a relatively low current draw at start-up.
The working fluid is water. No special refrigerants, such as ammonia or CFCs, are used that could be toxic, expensive to replace, contribute to ozone depletion and/or be subject to stringent licensing and environmental regulations.
Newer air coolers can be operated through remote control.
Ease of installation and maintenance
Equipment can be installed by mechanically-inclined users at drastically lower cost than refrigeration equipment which requires specialized skills and professional installation.
The only two mechanical parts in most basic evaporative coolers are the fan motor and the water pump, both of which can be repaired or replaced at low cost and often by a mechanically inclined user, eliminating costly service calls to HVAC contractors.
Ventilation air
The frequent air changes and high volumetric flow rate of air traveling through the building reduce the "age of air" in the building dramatically.
Evaporative cooling increases humidity. In dry climates, this may improve comfort and decrease static electricity problems.
The pad itself acts as a rather effective air filter when properly maintained; it is capable of removing a variety of contaminants from the air, including urban ozone caused by pollution, even in very dry weather. Refrigeration-based cooling systems lose this ability whenever there is not enough humidity in the air to keep the evaporator wet while providing a frequent trickle of condensation that washes out dissolved impurities removed from the air.
Disadvantages
Performance
Most evaporative coolers are unable to reach as low a temperature as refrigerated air conditioning systems.
High dewpoint (humidity) conditions decrease the cooling capability of the evaporative cooler.
No dehumidification: evaporative cooling adds moisture rather than removing it. Traditional air conditioners remove moisture from the air, except in very dry locations where recirculation can lead to a buildup of humidity. In humid climates, drier air improves thermal comfort at higher temperatures.
Comfort
The air supplied by the evaporative cooler is generally 80–90% relative humidity and can cause interior humidity levels as high as 65%; very humid air reduces the evaporation rate of moisture from the skin, nose, lungs, and eyes.
High humidity in air accelerates corrosion, particularly in the presence of dust. This can considerably reduce the life of electronics and other equipment.
High humidity in air may cause condensation of water. This can be a problem for some situations (e.g., electrical equipment, computers, paper, books, old wood).
Odors and other outdoor contaminants may be blown into the building unless sufficient filtering is in place.
Water use
Evaporative coolers require a constant supply of water.
Water high in mineral content (hard water) will leave mineral deposits on the pads and interior of the cooler. Depending on the type and concentration of minerals, possible safety hazards during the replacement and waste removal of the pads could be present. Bleed-off and refill (purge pump) systems can reduce but not eliminate this problem. Installation of an inline water filter (refrigerator drinking water/ice maker type) will drastically reduce the mineral deposits.
Maintenance frequency
Any mechanical components that can rust or corrode need regular cleaning or replacement due to the environment of high moisture and potentially heavy mineral deposits in areas with hard water.
Evaporative media must be replaced on a regular basis to maintain cooling performance. Wood wool pads are inexpensive but require replacement every few months. Higher-efficiency rigid media is much more expensive but will last for a number of years, depending on the hardness of the water; in areas with very hard water, rigid media may only last for two years before mineral scale build-up unacceptably degrades performance.
In areas with cold winters, evaporative coolers must be drained and winterized to protect the water line and cooler from freeze damage and then de-winterized prior to the cooling season.
Health hazards
An evaporative cooler is a common breeding site for mosquitoes. Numerous authorities consider an improperly maintained cooler to be a threat to public health.
Mold and bacteria may be dispersed into interior air from improperly maintained or defective systems, causing sick building syndrome and adverse effects for asthma and allergy sufferers. This can also cause a foul odor.
Wood wool of dry cooler pads can catch fire even from small sparks.
See also
Architectural engineering
Botijo
Building engineering
Coolgardie safe
Cooling tower
Dehumidifier
Humidifier
HVAC
Legionnaire's disease
Pot-in-pot refrigerator
Yakhchāl
References
External links
Psychrometrics
Cooling technology
Heating, ventilation, and air conditioning
Evaporators | Evaporative cooler | Chemistry,Engineering | 8,637 |
1,977,935 | https://en.wikipedia.org/wiki/Tyan | Tyan Computer Corporation (泰安電腦科技股份有限公司; also known as Tyan Business Unit, or TBU) is a subsidiary of MiTAC International, and a manufacturer of computer motherboards, including models for both AMD and Intel processors. They develop and produce high-end server, SMP, and desktop barebones systems as well as provide design and production services to tier 1 global OEMs, and a number of other regional OEMs.
Founding
The company was founded in 1989 by Dr. T. Symon Chang, a veteran of IBM and Intel. At that time, Dr. Chang saw a gap in the market, with no strong players in the SMP server segment, and founded Tyan to develop, produce and deliver such products, starting with a dual Intel Pentium-series motherboard as well as a number of other single-processor motherboards, all geared towards server applications.
Since then, Tyan has produced a number of single- and multi-processor (as well as multi-core) products using technology from many well-known companies (e.g. Intel, AMD, NVIDIA, Broadcom and many more). Notable design wins include supplying Dawning Corporation for the fastest supercomputer (twice); being first to market with a dual AMD Athlon MP server platform; winning the Maximum PC Kick-Ass Award (twice) for contributions to the Dream Machine (most recently, the 2005 edition); and being first to market with an eight-GPU server platform (the FT72-B7015).
Later company history
Tyan is headquartered in Taipei, Taiwan, spread across three buildings in the Nei-Hu industrial district. All three buildings belong to the parent company, MiTAC. The North American headquarters is in Newark, California, shared with MiTAC's North American headquarters.
Tyan's merger with MiTAC, a Taiwanese OEM which develops and produces a range of products (including servers, notebooks, consumer electronics products, networking and educational products, as well as providing contract manufacturing services), was announced in March 2007 and completed on October 1 of that year. Under the umbrella of MiTAC, Tyan acts as the brand leader and core engineering and marketing arm for delivery of server and workstation products to the distribution and reseller channel, and continues to act as a design and production services house for OEM customers.
MiTAC International Corp. spun off the Cloud Computing Business Group to the newly incorporated MiTAC Computing Technology Corporation on 1 September 2014. TYAN is a leading server brand of MiTAC Computing Technology Corp. under the MiTAC Group.
TYAN launched the first OpenPOWER reference system based on the IBM POWER8 architecture in Oct 2014. TYAN is one of the founding members of the OpenPOWER Foundation, which was established in 2013.
External links
Tyan Computer Corp.
MiTAC.com , Tyan's parent company
Tyan's Chinese website
1989 establishments in Taiwan
Computer companies of Taiwan
Companies established in 1989
Computer hardware companies
Computer systems companies
Motherboard companies
Companies based in Taipei
Electronics companies of Taiwan
Taiwanese brands
Data centers | Tyan | Technology | 657 |
75,254,121 | https://en.wikipedia.org/wiki/UHZ1 | UHZ1 is a background galaxy containing a quasar. At a redshift of approximately 10.1, UHZ1 is at a distance of 13.2 billion light-years, seen when our universe was about 3 percent of its current age. This redshift made it the most distant, and therefore earliest, known quasar in the observable universe as of 2023. To detect this object, astronomers working with the Chandra X-ray Observatory used the mass of the galaxy cluster Abell 2744 as a gravitational lens in order to magnify distant objects directly behind it. At the time of discovery, it exceeded the distance record of QSO J0313−1806.
The discovery of this object has led astronomers to suggest the seeds of the first quasars may have been direct-collapse black holes, from the collapse of supermassive primordial stars at the beginning of our universe.
Impact on astronomical research
The Chandra-JWST discovery of a quasar with a redshift of ≈ 10.1 at the center of UHZ1 reveals that accreting supermassive black holes (SMBHs) already existed at about 470 million years after the Big Bang. The detection of early black holes as they transition from "seeds" to supermassive black holes provides valuable sources at high redshift, facilitating tests of seeding and growth models for BHs.
One of the open questions about the formation of supermassive BHs is whether they originate from stellar-mass black holes, remnants of the death of massive stars, or whether some mechanism operates to form heavier initial seeds. UHZ1's data show that reaching its black-hole mass requires either continuous growth exceeding the Eddington limit for more than 200 Myr or a massive initial seed; the data thus provide a clue to the seeding mechanism and support the heavy-seed scenario.
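The timescale argument can be made concrete with a short calculation. A minimal sketch in Python, assuming Eddington-limited accretion with a radiative efficiency of 0.1, which gives an e-folding (Salpeter) time of roughly 45 Myr; the seed masses are illustrative:

    import math

    def eddington_growth(m_seed_msun, t_myr, t_salpeter_myr=45.0):
        """Black-hole mass (solar masses) after t_myr of continuous
        Eddington-limited accretion from a seed of m_seed_msun."""
        return m_seed_msun * math.exp(t_myr / t_salpeter_myr)

    # A light (stellar-remnant) seed of 100 Msun after 200 Myr:
    print(f"{eddington_growth(100, 200):.2e}")  # ~8.5e+03 Msun, far short of ~4e7
    # A heavy (direct-collapse) seed of 1e5 Msun after 200 Myr:
    print(f"{eddington_growth(1e5, 200):.2e}")  # ~8.5e+06 Msun, much closer to UHZ1's mass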
UHZ1 as a potential first OBG candidate
The Chandra X-ray source detected in UHZ1 is Compton-thick. It has a bolometric luminosity of Lbol ≈ 5 × 10^45 erg s^−1, corresponding to an estimated BH mass of ~4 × 10^7 M⊙.
The data collected from UHZ1 and its quasar are in agreement with prior theoretical predictions by astronomers for a unique class of transient, high-redshift objects known as Overmassive (or Outsize) Black Hole Galaxies (OBGs). OBGs are galaxies hosting overmassive black holes grown from heavy initial seeds, which likely formed from the direct collapse of gas clouds. Due to the agreement between the multi-wavelength properties of UHZ1 and the theoretical model templates, some astronomers suggest UHZ1 is the first detected OBG candidate.
Footnotes
References
External links
Sculptor (constellation)
Quasars | UHZ1 | Astronomy | 570 |
48,510 | https://en.wikipedia.org/wiki/Terrestrial%20planet | A terrestrial planet, tellurian planet, telluric planet, or rocky planet, is a planet that is composed primarily of silicate rocks or metals. Within the Solar System, the terrestrial planets accepted by the IAU are the inner planets closest to the Sun: Mercury, Venus, Earth and Mars. Among astronomers who use the geophysical definition of a planet, two or three planetary-mass satellites – Earth's Moon, Io, and sometimes Europa – may also be considered terrestrial planets. The large rocky asteroids Pallas and Vesta are sometimes included as well, albeit rarely. The terms "terrestrial planet" and "telluric planet" are derived from Latin words for Earth (Terra and Tellus), as these planets are, in terms of structure, Earth-like. Terrestrial planets are generally studied by geologists, astronomers, and geophysicists.
Terrestrial planets have a solid planetary surface, making them substantially different from larger gaseous planets, which are composed mostly of some combination of hydrogen, helium, and water existing in various physical states.
Structure
All terrestrial planets in the Solar System have the same basic structure, such as a central metallic core (mostly iron) with a surrounding silicate mantle.
The large rocky asteroid 4 Vesta has a similar structure; possibly so does the smaller one 21 Lutetia. Another rocky asteroid 2 Pallas is about the same size as Vesta, but is significantly less dense; it appears to have never differentiated a core and a mantle. The Earth's Moon and Jupiter's moon Io have similar structures to terrestrial planets, but Earth's Moon has a much smaller iron core. Another Jovian moon Europa has a similar density but has a significant ice layer on the surface: for this reason, it is sometimes considered an icy planet instead.
Terrestrial planets can have surface structures such as canyons, craters, mountains, volcanoes, and others, depending on the presence at any time of an erosive liquid or tectonic activity or both.
Terrestrial planets have secondary atmospheres, generated by volcanic out-gassing or from comet impact debris. This contrasts with the outer, giant planets, whose atmospheres are primary; primary atmospheres were captured directly from the original solar nebula.
Terrestrial planets within the Solar System
The Solar System has four terrestrial planets under the dynamical definition: Mercury, Venus, Earth and Mars. The Earth's Moon as well as Jupiter's moons Io and Europa would also count geophysically, as well as perhaps the large protoplanet-asteroids Pallas and Vesta (though those are borderline cases). Among these bodies, only the Earth has an active surface hydrosphere. Europa is believed to have an active hydrosphere under its ice layer.
During the formation of the Solar System, there were many terrestrial planetesimals and proto-planets, but most merged with or were ejected by the four terrestrial planets, leaving only Pallas and Vesta to survive more or less intact. These two were likely both dwarf planets in the past, but have been battered out of equilibrium shapes by impacts. Some other protoplanets began to accrete and differentiate but suffered catastrophic collisions that left only a metallic or rocky core, like 16 Psyche or 8 Flora respectively. Many S-type and M-type asteroids may be such fragments.
The other round bodies from the asteroid belt outward are geophysically icy planets. They are similar to terrestrial planets in that they have a solid surface, but are composed of ice and rock rather than of rock and metal. These include the dwarf planets, such as Ceres, Pluto and Eris, which are found today only in the regions beyond the formation snow line where water ice was stable under direct sunlight in the early Solar System. It also includes the other round moons, which are ice-rock (e.g. Ganymede, Callisto, Titan, and Triton) or even almost pure (at least 99%) ice (Tethys and Iapetus). Some of these bodies are known to have subsurface hydrospheres (Ganymede, Callisto, Enceladus, and Titan), like Europa, and it is also possible for some others (e.g. Ceres, Mimas, Dione, Miranda, Ariel, Triton, and Pluto). Titan even has surface bodies of liquid, albeit liquid methane rather than water. Jupiter's Ganymede, though icy, does have a metallic core like the Moon, Io, Europa, and the terrestrial planets.
The name Terran world has been suggested to define all solid worlds (bodies assuming a rounded shape), without regard to their composition. It would thus include both terrestrial and icy planets.
Density trends
The uncompressed density of a terrestrial planet is the average density its materials would have at zero pressure. A greater uncompressed density indicates a greater metal content. Uncompressed density differs from the true average density (also often called "bulk" density) because compression within planet cores increases their density; the average density depends on planet size, temperature distribution, and material stiffness as well as composition.
Calculations to estimate uncompressed density inherently require a model of the planet's structure. Where there have been landers or multiple orbiting spacecraft, these models are constrained by seismological data and also moment of inertia data derived from the spacecraft's orbits. Where such data is not available, uncertainties are inevitably higher.
The uncompressed densities of the rounded terrestrial bodies directly orbiting the Sun trend towards lower values as the distance from the Sun increases, consistent with the temperature gradient that would have existed within the primordial solar nebula. The Galilean satellites show a similar trend going outwards from Jupiter; however, no such trend is observable for the icy satellites of Saturn or Uranus. The icy worlds typically have densities less than 2 g·cm−3. Eris is significantly denser (about 2.5 g·cm−3), and may be mostly rocky with some surface ice, like Europa. It is unknown whether extrasolar terrestrial planets in general will follow such a trend.
The data in the tables below are mostly taken from a list of gravitationally rounded objects of the Solar System and planetary-mass moon. All distances from the Sun are averages.
Extrasolar terrestrial planets
Most of the planets discovered outside the Solar System are giant planets, because they are more easily detectable. But since 2005, hundreds of potentially terrestrial extrasolar planets have also been found, with several being confirmed as terrestrial. Most of these are super-Earths, i.e. planets with masses between Earth's and Neptune's; super-Earths may be gas planets or terrestrial, depending on their mass and other parameters.
During the early 1990s, the first extrasolar planets were discovered by pulsar timing, orbiting the pulsar PSR B1257+12, with masses of 0.02, 4.3, and 3.9 times that of Earth.
When 51 Pegasi b, the first planet found around a star still undergoing fusion, was discovered, many astronomers assumed it to be a gigantic terrestrial, because it was assumed no gas giant could exist as close to its star (0.052 AU) as 51 Pegasi b did. It was later found to be a gas giant.
In 2005, the first planets orbiting a main-sequence star and which showed signs of being terrestrial planets were found: Gliese 876 d and OGLE-2005-BLG-390Lb. Gliese 876 d orbits the red dwarf Gliese 876, 15 light years from Earth, and has a mass seven to nine times that of Earth and an orbital period of just two Earth days. OGLE-2005-BLG-390Lb has about 5.5 times the mass of Earth and orbits a star about 21,000 light-years away in the constellation Scorpius.
From 2007 to 2010, three (possibly four) potential terrestrial planets were found orbiting within the Gliese 581 planetary system. The smallest, Gliese 581e, is only about 1.9 Earth masses, but orbits very close to the star. Two others, Gliese 581c and the disputed Gliese 581d, are more-massive super-Earths orbiting in or close to the habitable zone of the star, so they could potentially be habitable, with Earth-like temperatures.
Another possibly terrestrial planet, HD 85512 b, was discovered in 2011; it has at least 3.6 times the mass of Earth.
The radius and composition of all these planets are unknown.
The first confirmed terrestrial exoplanet, Kepler-10b, was found in 2011 by the Kepler space telescope, specifically designed to discover Earth-size planets around other stars using the transit method.
In the same year, the Kepler space telescope mission team released a list of 1235 extrasolar planet candidates, including six that are "Earth-size" or "super-Earth-size" (i.e. they have a radius less than twice that of the Earth) and in the habitable zone of their star.
Since then, Kepler has discovered hundreds of planets ranging from Moon-sized to super-Earths, with many more candidates in this size range (see image).
In 2016, statistical modeling of the relationship between a planet's mass and radius using a broken power law appeared to suggest that the transition point between rocky, terrestrial worlds and mini-Neptunes without a defined surface was in fact very close to the sizes of Earth and Venus, suggesting that rocky worlds much larger than our own are in fact quite rare. This led some to advocate retiring the term "super-Earth" as scientifically misleading. Since 2016 the catalog of known exoplanets has increased significantly, and there have been several published refinements of the mass-radius model. As of 2024, the expected transition point between rocky and intermediate-mass planets sits at roughly 4.4 Earth masses and roughly 1.6 Earth radii.
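A broken power law of this kind is straightforward to express in code. A minimal sketch in Python; the exponents below are illustrative placeholders rather than the fitted values of any particular study, with the break placed at the ~4.4 Earth-mass transition quoted above:

    def mass_to_radius(m_earth, alpha_rocky=0.28, alpha_volatile=0.59, m_break=4.4):
        """Toy broken power-law mass-radius relation, in Earth units."""
        if m_earth <= m_break:
            return m_earth ** alpha_rocky
        # Join the two branches at the break so the relation is continuous.
        r_break = m_break ** alpha_rocky
        return r_break * (m_earth / m_break) ** alpha_volatile

    print(round(mass_to_radius(1.0), 2))   # 1.0: an Earth-mass rocky planet
    print(round(mass_to_radius(10.0), 2))  # ~2.5: inflated, likely a mini-Neptune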
In September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet (named OGLE-2016-BLG-1928) unbounded by any star, and free-floating in the Milky Way galaxy.
List of terrestrial exoplanets
The following exoplanets have a density of at least 5 g/cm3 and a mass below Neptune's and are thus very likely terrestrial:
Kepler-10b, Kepler-20b, Kepler-36b, Kepler-48d, Kepler 68c, Kepler-78b, Kepler-89b, Kepler-93b, Kepler-97b, Kepler-99b, Kepler-100b, Kepler-101c, Kepler-102b, Kepler-102d, Kepler-113b, Kepler-131b, Kepler-131c, Kepler-138c, Kepler-406b, Kepler-406c, Kepler-409b.
Frequency
In 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth- and super-Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way. Eleven billion of these estimated planets may be orbiting Sun-like stars. The nearest such planet may be 12 light-years away, according to the scientists. However, this does not give estimates for the number of extrasolar terrestrial planets, because there are planets as small as Earth that have been shown to be gas planets (see Kepler-138d).
Estimates show that about 80% of potentially habitable worlds are covered by land, and about 20% are ocean planets. Planets with ratios more like those of Earth, which is about 30% land and 70% ocean, make up only 1% of these worlds.
Types
Several possible classifications for solid planets have been proposed.
Silicate planet
A solid planet like Venus, Earth, or Mars, made primarily of a silicon-based rocky mantle with a metallic (iron) core.
Carbon planet (also called "diamond planet")
A theoretical class of planets, composed of a metal core surrounded by primarily carbon-based minerals. They may be considered a type of terrestrial planet if the metal content dominates. The Solar System contains no carbon planets but does have carbonaceous asteroids, such as Ceres and Hygiea. It is unknown if Ceres has a rocky or metallic core.
Iron planet
A theoretical type of solid planet that consists almost entirely of iron and therefore has a greater density and a smaller radius than other solid planets of comparable mass. Mercury in the Solar System has a metallic core equal to 60–70% of its planetary mass, and is sometimes called an iron planet, though its surface is made of silicates and is iron-poor. Iron planets are thought to form in the high-temperature regions close to a star, like Mercury, and if the protoplanetary disk is rich in iron.
Icy planet
A type of solid planet with an icy surface of volatiles. In the Solar System, most planetary-mass moons (such as Titan, Triton, and Enceladus) and many dwarf planets (such as Pluto and Eris) have such a composition. Europa is sometimes considered an icy planet due to its surface ice, but its higher density indicates that its interior is mostly rocky. Such planets can have internal saltwater oceans and cryovolcanoes erupting liquid water (i.e. an internal hydrosphere, like Europa or Enceladus); they can have an atmosphere and hydrosphere made from methane or nitrogen (like Titan). A metallic core is possible, as exists on Ganymede.
Coreless planet
A theoretical type of solid planet that consists of silicate rock but has no metallic core, i.e. the opposite of an iron planet. Although the Solar System contains no coreless planets, chondrite asteroids and meteorites are common in the Solar System. Ceres and Pallas have mineral compositions similar to carbonaceous chondrites, though Pallas is significantly less hydrated. Coreless planets are thought to form farther from the star where volatile oxidizing material is more common.
See also
Chthonian planet
Earth analog
List of potentially habitable exoplanets
Planetary habitability
Venus zone
List of gravitationally rounded objects of the Solar System
References
Types of planet
Solar System | Terrestrial planet | Astronomy | 2,952 |
72,202,102 | https://en.wikipedia.org/wiki/DisCoCat | DisCoCat (Categorical Compositional Distributional) is a mathematical framework for natural language processing which uses category theory to unify distributional semantics with the principle of compositionality. The grammatical derivations in a categorial grammar (usually a pregroup grammar) are interpreted as linear maps acting on the tensor product of word vectors to produce the meaning of a sentence or a piece of text. String diagrams are used to visualise information flow and reason about natural language semantics.
History
The framework was first introduced by Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark as an application of categorical quantum mechanics to natural language processing. It started with the observation that pregroup grammars and quantum processes shared a common mathematical structure: they both form a rigid category (also known as a non-symmetric compact closed category). As such, they both benefit from a graphical calculus, which allows a purely diagrammatic reasoning. Although the analogy with quantum mechanics was kept informal at first, it eventually led to the development of quantum natural language processing.
Definition
There are multiple definitions of DisCoCat in the literature, depending on the choice made for the compositional aspect of the model. The common denominator between all the existent versions, however, always involves a categorical definition of DisCoCat as a structure-preserving functor from a category of grammar to a category of semantics, which usually encodes the distributional hypothesis.
The original paper used the categorical product of FinVect with a pregroup seen as a posetal category. This approach has some shortcomings: all parallel arrows of a posetal category are equal, which means that pregroups cannot distinguish between different grammatical derivations for the same syntactically ambiguous sentence. A more intuitive manner of saying the same is that one works with diagrams rather than with partial orders when describing grammar.
This problem is overcome when one considers the free rigid category generated by the pregroup grammar. That is, the free category has a generating object for each word and each basic type of the grammar, and a generating arrow for each dictionary entry assigning a pregroup type t to a word w. The arrows are grammatical derivations for sentences, which can be represented as string diagrams with cups and caps, i.e. adjunction units and counits.
With this definition of pregroup grammars as free rigid categories, DisCoCat models can be defined as strong monoidal functors from the grammar category to FinVect. Spelling things out in detail, such a functor F assigns a finite-dimensional vector space F(x) to each basic type x, and a vector in the appropriate tensor product space to each dictionary entry w → t (the objects for words themselves are sent to the monoidal unit, i.e. F(w) = I). The meaning of a sentence is then given by a vector, which can be computed as the contraction of a tensor network.
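This contraction can be illustrated directly. A minimal sketch in Python with NumPy, in which the noun space, the one-dimensional sentence space and all tensor entries are made-up toy values:

    import numpy as np

    # Toy noun space N of dimension 2; a transitive verb of pregroup type
    # n^r s n^l lives in N (x) S (x) N (here dim(S) = 1 for simplicity).
    alice = np.array([1.0, 0.0])   # noun vector for "Alice"
    bob = np.array([0.0, 1.0])     # noun vector for "Bob"
    loves = np.zeros((2, 1, 2))    # verb tensor in N (x) S (x) N
    loves[0, 0, 1] = 1.0           # component relating Alice (subject) to Bob (object)

    # The grammatical reduction n (n^r s n^l) n -> s becomes two contractions,
    # the "cups" of the string diagram: subject against the verb's first index,
    # object against its last.
    sentence = np.einsum('i,isj,j->s', alice, loves, bob)
    print(sentence)  # [1.] : the meaning vector of "Alice loves Bob" in S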
The reason behind the choice of FinVect as the category of semantics is that vector spaces are the usual setting of the distributional reading in computational linguistics and natural language processing. The underlying idea of the distributional hypothesis, "a word is characterized by the company it keeps", is particularly relevant when assigning meaning to words like adjectives or verbs, whose semantic connotation is strongly dependent on context.
Variations
Variations of DisCoCat have been proposed with a different choice for the grammar category. The main motivation behind this lies in the fact that pregroup grammars have been proved to be weakly equivalent to context-free grammars. One such variation chooses combinatory categorial grammar as the grammar category.
List of linguistic phenomena
The DisCoCat framework has been used to study the following phenomena from linguistics.
Entailment
Coordination
Hyponymy and hypernymy
Ambiguity with density matrices
Discourse analysis
Anaphora and ellipsis
Language evolution
Applications in NLP
The DisCoCat framework has been applied to solve the following tasks in natural language processing.
Word-sense disambiguation
Semantic similarity
Question answering
Machine translation
Anaphora resolution
See also
Lambek calculus
Pregroup grammar
Distributional semantics
Principle of compositionality
String diagram
Categorical quantum mechanics
Quantum natural language processing
External links
DisCoPy, a Python toolkit for computing with string diagrams
lambeq, a Python library for quantum natural language processing
References
Computational linguistics
Category theory | DisCoCat | Mathematics,Technology | 835 |
3,181,893 | https://en.wikipedia.org/wiki/Mantrap%20%28snare%29 | A mantrap is a mechanical physical security device for catching poachers, art thieves and other trespassers. They have taken many forms, the most usual being similar to a large foothold trap, the steel springs being armed with teeth which meet in the victim's leg. In 1827, they were made illegal in England, except in houses between sunset and sunrise as a defence against burglars.
Other traps such as special snares, trap netting, trapping pits, fluidizing solid matter traps and cage traps could be used.
Mantraps that use deadly force are illegal in the United States, and in notable tort law cases the trespasser has successfully sued the property owner for damages caused by the mantrap. There is also the possibility that such traps could endanger emergency service personnel such as firefighters who must forcefully enter such buildings during emergencies. As noted in the important American court case of Katko v. Briney, "the law has always placed a higher value upon human safety than upon mere rights of property".
See also
Animal trapping
Amappo
Spring-gun
Mantrap (access control)
References
External links
Area denial weapons
Hunting equipment | Mantrap (snare) | Engineering | 236 |
64,355,593 | https://en.wikipedia.org/wiki/Sherman%20function | The Sherman function describes the dependence of electron-atom scattering events on the spin of the scattered electrons. It was first evaluated theoretically by the physicist Noah Sherman, and it allows the measurement of the polarization of an electron beam by Mott scattering experiments. A correct evaluation of the Sherman function associated with a particular experimental setup is of vital importance in experiments of spin-polarized photoemission spectroscopy, an experimental technique which provides information about the magnetic behaviour of a sample.
Background
Polarization and spin-orbit coupling
When an electron beam is polarized, an imbalance between the number of spin-up electrons, N↑, and spin-down electrons, N↓, exists. The imbalance can be evaluated through the polarization, defined as

P = (N↑ − N↓) / (N↑ + N↓).
It is known that, when an electron collides with a nucleus, the scattering event is governed by the Coulomb interaction. This is the leading term in the Hamiltonian, but a correction due to spin-orbit coupling can be taken into account, and its effect on the Hamiltonian can be evaluated with perturbation theory. Spin-orbit interaction can be evaluated, in the rest reference frame of the electron, as the result of the interaction of the spin magnetic moment of the electron

μ_s = −(g μ_B / ħ) S

with the magnetic field that the electron sees, due to its orbital motion around the nucleus, whose expression in the non-relativistic limit is:

B = (1 / (m_e e c²)) (1/r) (dU/dr) L

In these expressions S is the spin angular momentum, μ_B is the Bohr magneton, g is the g-factor, ħ is the reduced Planck constant, m_e is the electron mass, e is the elementary charge, c is the speed of light, U is the potential energy of the electron and L is the angular momentum.
Due to spin-orbit coupling, a new term will appear in the Hamiltonian, whose expression is

H_SO = −μ_s · B = (g μ_B / (ħ m_e e c²)) (1/r) (dU/dr) L · S.
Due to this effect, electrons will be scattered with different probabilities at different angles. Since the spin-orbit coupling is enhanced when the involved nuclei possess a high atomic number Z, the target is usually made of heavy metals, such as mercury, gold and thorium.
Asymmetry
If we place two detectors at the same angle from the target, one on the right and one on the left, they will generally measure different numbers of electrons, N_R and N_L. Consequently it is possible to define the asymmetry A as

A = (N_L − N_R) / (N_L + N_R).
The Sherman function S(θ) is a measure of the probability of a spin-up electron being scattered, at a specific angle θ, to the right or to the left of the target, due to spin-orbit coupling. It can assume values ranging from −1 (the spin-up electron is scattered with 100% probability to the left of the target) to +1 (the spin-up electron is scattered with 100% probability to the right of the target). The value of the Sherman function depends on the energy of the incoming electron. When S(θ) = 0, spin-up electrons are scattered with the same probability to the right and to the left of the target.
Then it is possible to write

N_L = N₀ I(θ) [1 + P S(θ)]
N_R = N₀ I(θ) [1 − P S(θ)]

where I(θ) is the spin-averaged scattering intensity. Plugging these formulas into the definition of asymmetry, one obtains a simple expression for the evaluation of the asymmetry at a specific angle θ, i.e.:

A = P S(θ).
Theoretical calculations are available for different atomic targets and for a specific target, as a function of the angle.
Application
To measure the polarization of an electron beam, a Mott detector is required. In order to maximize the spin-orbit coupling, the electrons must arrive close to the nuclei of the target. To achieve this condition, a system of electron optics is usually present, accelerating the beam up to keV or even MeV energies. Since standard electron detectors count electrons regardless of their spin, after the scattering with the target any information about the original polarization of the beam is lost. Nevertheless, by measuring the difference in the counts of the two detectors, the asymmetry can be evaluated and, if the Sherman function is known from previous calibration, the polarization can be calculated by inverting the last formula.
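Inverting A = P S(θ) is a one-line computation. A minimal sketch in Python (the detector counts and the calibrated Sherman-function value are made-up numbers, chosen to match the worked example below):

    def polarization(n_left, n_right, sherman):
        """Beam polarization from left/right detector counts and a calibrated
        Sherman function value, inverting A = P * S."""
        asymmetry = (n_left - n_right) / (n_left + n_right)
        return asymmetry / sherman

    print(polarization(5000, 3000, 0.5))  # 0.5, i.e. a 3:1 spin-up/spin-down beam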
In order to characterize completely the in-plane polarization, setups are available, with four channeltrons, two devoted to the left-right measure and two devoted to the up-right measure.
Example
In the panel an example of the working principle of a Mott detector is shown, supposing a value of S = 0.5 for the Sherman function. If an electron beam with a 3:1 ratio of spin-up over spin-down electrons collides with the target, it will be split in a 5:3 ratio, in accordance with the previous equation, with an asymmetry of 25%.
See also
Spin–orbit interaction
Mott scattering
Photoemission spectroscopy
References
Electron beam
Foundational quantum physics
Scattering | Sherman function | Physics,Chemistry,Materials_science | 949 |
446,223 | https://en.wikipedia.org/wiki/Gene%20knockdown | Gene knockdown is an experimental technique by which the expression of one or more of an organism's genes is reduced. The reduction can occur either through genetic modification or by treatment with a reagent such as a short DNA or RNA oligonucleotide that has a sequence complementary to either the gene or an mRNA transcript.
Versus transient knockdown
If the DNA of an organism is genetically modified, the resulting organism is called a "knockdown organism." If the change in gene expression is caused by an oligonucleotide binding to an mRNA or temporarily binding to a gene, this leads to a temporary change in gene expression that does not modify the chromosomal DNA, and the result is referred to as a "transient knockdown".
In a transient knockdown, the binding of this oligonucleotide to the active gene or its transcripts causes decreased expression through a variety of processes. Binding can occur either through the blocking of transcription (in the case of gene-binding), the degradation of the mRNA transcript (e.g. by small interfering RNA (siRNA)) or RNase-H dependent antisense, or through the blocking of either mRNA translation, pre-mRNA splicing sites, or nuclease cleavage sites used for maturation of other functional RNAs, including miRNA (e.g. by morpholino oligos or other RNase-H independent antisense).
The most direct use of transient knockdowns is for learning about a gene that has been sequenced, but has an unknown or incompletely known function. This experimental approach is known as reverse genetics. Researchers draw inferences from how the knockdown differs from individuals in which the gene of interest is operational. Transient knockdowns are often used in developmental biology because oligos can be injected into single-celled zygotes and will be present in the daughter cells of the injected cell through embryonic development. The term gene knockdown first appeared in the literature in 1994.
RNA interference
RNA interference (RNAi) is a means of silencing genes by way of mRNA degradation. Gene knockdown by this method is achieved by introducing small double-stranded interfering RNAs (siRNA) into the cytoplasm. Small interfering RNAs can originate from inside the cell or can be exogenously introduced into the cell. Once introduced into the cell, exogenous siRNAs are processed by the RNA-induced silencing complex (RISC). The siRNA is complementary to the target mRNA to be silenced, and the RISC uses the siRNA as a template for locating the target mRNA. After the RISC localizes to the target mRNA, the RNA is cleaved by a ribonuclease.
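Because this silencing hinges on sequence complementarity, the guide (antisense) strand of an siRNA is simply the reverse complement of its target site. A minimal sketch in Python (the 19-nt target sequence is a made-up example):

    # Watson-Crick pairing for RNA bases.
    COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

    def sirna_guide(target_mrna):
        """Return the guide strand complementary to a target mRNA site."""
        return "".join(COMPLEMENT[base] for base in reversed(target_mrna))

    target = "AUGGCUUCAGAAUCGGUAA"  # hypothetical 19-nt target site
    print(sirna_guide(target))      # UUACCGAUUCUGAAGCCAU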
RNAi is widely used as a laboratory technique for genetic functional analysis. RNAi in organisms such as C. elegans and Drosophila melanogaster provides a quick and inexpensive means of investigating gene function. In C. elegans research, the availability of tools such as the Ahringer RNAi Library give laboratories a way of testing many genes in a variety of experimental backgrounds. Insights gained from experimental RNAi use may be useful in identifying potential therapeutic targets, drug development, or other applications. RNA interference is a very useful research tool, allowing investigators to carry out large genetic screens in an effort to identify targets for further research related to a particular pathway, drug, or phenotype.
CRISPRs
A different means of silencing exogenous DNA that has been discovered in prokaryotes is a mechanism involving loci called 'Clustered Regularly Interspaced Short Palindromic Repeats', or CRISPRs. CRISPR-associated (cas) genes encode cellular machinery that cuts exogenous DNA into small fragments and inserts them into a CRISPR repeat locus. When this CRISPR region of DNA is expressed by the cell, the small RNAs produced from the exogenous DNA inserts serve as a template sequence that other Cas proteins use to silence this same exogenous sequence. The transcripts of the short exogenous sequences are used as a guide to silence these foreign DNA when they are present in the cell. This serves as a kind of acquired immunity, and this process is like a prokaryotic RNA interference mechanism. The CRISPR repeats are conserved amongst many species and have been demonstrated to be usable in human cells, bacteria, C. elegans, zebrafish, and other organisms for effective genome manipulation. The use of CRISPRs as a versatile research tool can be illustrated by many studies making use of it to generate organisms with genome alterations.
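The sequence-guided targeting described above can also be illustrated in code: for the widely used SpCas9, candidate target sites are 20-nt protospacers immediately followed by an NGG PAM. A minimal sketch in Python (the input sequence is a made-up example):

    import re

    def find_spcas9_targets(dna):
        """Yield (position, protospacer) pairs where a 20-nt protospacer
        is immediately followed by an NGG PAM, as SpCas9 requires; other
        Cas proteins recognise different PAMs."""
        for match in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna.upper()):
            yield match.start(), match.group(1)

    seq = "TTGACCTGAAGCTTGGCATCAGTACGGATCGATAGG"  # hypothetical sequence
    print(list(find_spcas9_targets(seq)))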
TALENs
Another technology made possible by prokaryotic genome manipulation is the use of transcription activator-like effector nucleases (TALENs) to target specific genes. TALENs are nucleases that have two important functional components: a DNA binding domain and a DNA cleaving domain. The DNA binding domain is a sequence-specific transcription activator-like effector sequence while the DNA cleaving domain originates from a bacterial endonuclease and is non-specific. TALENs can be designed to cleave a sequence specified by the sequence of the transcription activator-like effector portion of the construct. Once designed, a TALEN is introduced into a cell as a plasmid or mRNA. The TALEN is expressed, localizes to its target sequence, and cleaves a specific site. After cleavage of the target DNA sequence by the TALEN, the cell uses non-homologous end joining as a DNA repair mechanism to correct the cleavage. The cell's attempt at repairing the cleaved sequence can render the encoded protein non-functional, as this repair mechanism introduces insertion or deletion errors at the repaired site.
Commercialization
So far, knockdown organisms with permanent alterations in their DNA have been engineered chiefly for research purposes. Also known simply as knockdowns, these organisms are most commonly used for reverse genetics, especially in species such as mice or rats for which transient knockdown technologies cannot easily be applied.
There are several companies that offer commercial services related to gene knockdown treatments.
See also
Gene knockout
References
External links
Genetically modified organisms
Genetics techniques | Gene knockdown | Engineering,Biology | 1,276 |
42,683,170 | https://en.wikipedia.org/wiki/Play%20drive | Play drive is a philosophical concept developed by Friedrich Schiller. It is a conjoining, through contradiction, of the human experience of the infinite and finite, of freedom and time, of sense and reason, and of life and form.
The object of the play drive is the living form. In contemplation of the beautiful, it allows man and woman to become most human.
To understand how Schiller reaches this conclusion, one must trace the origins of life and form, as a function of the two drives that the play drive mediates:
the form drive and
the sense drive.
These two drives are themselves functions of a human being's person and condition, which Schiller initially describes in terms of the absolute and time.
Schiller's view of the human condition
Person
In Schiller's thought, the sense and the form drive arise out of a human being's existence as a "person", which endures, and their "condition", the determining attributes that change. He describes the person as that which is unchangeable and eternal, enduring through change. "We pass from rest to activity, from passion to indifference, from agreement to contradiction; but we remain, and what proceeds directly from us remains too". This personhood is grounded in itself, and not in the contradictory state of the condition.
Schiller argues that because humans are finite, condition and person have to be separate, and cannot be grounded in each other. If they were, either change would have to persist or the person would have to change. "And so we would, in the first place, have the idea of absolute being grounded upon itself, that is to say freedom". Therefore, the person is grounded in itself, and this grounding is responsible for the human idea of freedom: freedom, defined as absolute being grounded in itself.
Condition
While the person can be grounded in itself, Schiller's concept of condition cannot. It is already established that condition cannot be grounded in person, and must therefore proceed from something else. This "proceeding" grounds the condition in contingency, which is humanity's experience of time. "For man is not just a person situated in a particular condition. Every condition, however, every determinate existence, has its origins in time; and so man, as a phenomenal being, must also have a beginning, although the pure intelligence within him is eternal".
Humanity receives reality from the outside, as something changing within time. This changing perception is accompanied by the eternal "I" – the person – which organizes the change and variety into a unity. "The reality which the supreme intelligence creates out of itself, man has first to receive, and he does in fact receive it, by way of perception, as something existing outside of him in space, and as something changing within him in time". The perfect human being, according to Schiller, would be a constant unity amongst constant change. These seemingly contradictory forces of freedom through person and time through condition, manifest themselves in humans as the form and the sense drive. These drives, and consequently humanity's experience of freedom and time, are mediated by the play drive.
Sense and Form Drives
Sense Drive
The sense drive, in Schiller's thought, is a function of man's condition. It comes from the human being's physical existence, and the whole of their phenomenal existence stems from it. In their sensuous existence, human beings are set within the limits of time, within their condition, and become matter. "By matter in this context we understand nothing more than change, or reality that occupies time. Consequently this drive demands that there shall be change, that time shall have content". Therefore, the sensation of the sense drive is time occupied by content.
Form Drive
The form drive, in Schiller's view, is a function of the person grounded in itself. This drive is humanity's rational nature, their "absolute existence", and its goal is to give them freedom, so they can bring harmony to the variety of things in the world. Because the form drive insists on the absolute, "It wants the real to be necessary and eternal, and the eternal and necessary to be real. In other words, it insists on truth and on the right". The sense drive and the form drive are in competition, and overpower one another in the person.
In competition
If the sense drive overcomes the form drive, according to Schiller, it reduces human beings to matter, but leaves them without the ability to bring this matter into unity. "As long as he merely feels, merely desires and acts upon desire, he is as yet nothing but world, if by this term we understand nothing but the formless content of time" (117). In order to not be just "world", people must exercise their form drive upon matter, and "give reality the predisposition he carries within him". When the form drive overpowers the sense drive, Schiller says we experience "the greatest enlargement of being", meaning that because it is a drive toward the absolute, all limitations disappear, and instead of seeing the world finitely, as he does through the sensuous drive, "man has raised himself to a unity of ideas embracing the whole realm of phenomena".
Since the sense drive places us in time, indulging in the formal drive removes us out of time, and in doing so "We are no longer individuals; we are species". While this seems like a perfected state, it is only a point on the path of a human being reaching their maximum potential.
In equilibrium
To maximize potential of the two drives, Schiller argues, one cannot suffocate or limit the other. The perfection of the sense drive would consist in maximizing changeability and maximizing extensity. This is the development of receptivity, through which human beings present more "surface" to phenomena of the world. "The more facets his receptivity develops, the more labile it is, and the more surface it presents to phenomena, so much more world does man apprehend, and all the more potentialities does he develop in himself".
To suppress this faculty would not achieve the perfection of the form drive, rather the opposite. The perfection of the form drive is accomplished in its ability to oppose the sense drive through its endurance against change. "The more power and depth the personality achieves, and the more freedom reason attains, so much more world does man comprehend, and all the more form does he create outside of himself". Therefore, the form drive's autonomy and intensity are maximized as a response to the maximization of the sense drive. "Where both these aptitudes are conjoined, man will combine the greatest fullness of existence with the highest autonomy and freedom and instead of losing himself to the world, will rather draw the latter into himself in all its infinitude of phenomena, and subject it to the unity of his reason". This "conjoining" of the two faculties is actually a mediation by the third fundamental drive, the play drive.
Play Drive
The play drive mediates the demands of the sense and the form drive. "The sense drive demands that there shall be change and that time shall have a content; the form drive demands that time shall be annulled and that there shall be no change. That drive, therefore, in which both others work in concert is the play drive, reconciling becoming with absolute being and change with identity".
For the play drive to successfully mediate the two drives, human beings must learn passivity, to exercise their sense drive and become receptive of the world. They must also learn activity, to free their reason, as much as possible from the receptive. Accomplishing both, human beings are able to have a twofold experience simultaneously, "in which he were to be at once conscious of his freedom and sensible of his existence, were at one and the same time, to feel himself matter and come to know himself as mind".
Therefore, by maximizing the constraint of the absolute and the contingency of the material, the play drive negates the demands of both drives and sets people free both physically and morally. To exist in this paradoxical state, would mean to have a "complete intuition of his human nature". Furthermore, "the object that afforded him this vision would become for him the symbol of his accomplished destiny" and this would serve him as a finite embodiment of the infinite. Schiller names this object of the play drive 'living form'.
Living Form
The living form comes from a mediation of the "objects" of the sense and the form drives. The object of the sense drive, Schiller calls life. This concept designates all material being and all that is immediately present to the senses. It is a function of the human condition.
The object of the form drive, Schiller simply calls form. This concept includes all the formal qualities of things and their relationship to our reason. Schiller argues for the necessity of the interplay of the two objects in creation of a sculpture from a block of marble. "As long as we merely think about his form, it is lifeless, a mere abstraction; as long as we merely feel his life, it is formless, a mere expression. Only when his form lives in our feeling and his life takes on form in our understanding, does he become a living form". This experience of the living form, as a mediation of life and form by the play drive, is what Schiller calls the experience of beauty. "Beauty results from the reciprocal action of two opposed drives and from the uniting of two opposed principles. The highest ideal of beauty, therefore, to be sought in the most perfect possible union and equilibrium of reality and form". Therefore, in contemplation of the beautiful, people are exercising the play drive, and are fully human.
See also
G. Stanley Hall
References
Sources
Friedrich Schiller
Play (activity)
Concepts in aesthetics | Play drive | Biology | 2,065 |
53,869,659 | https://en.wikipedia.org/wiki/Injection%20of%20vinylite%20and%20corrosion | Injection of vinylite and corrosion is an anatomical technique used to visualize the branching and pathways of the circulatory system. It consists of filling the circulatory system of the specimen with vinyl acetate and then applying a corrosion technique to remove the overlying organic matter. The technique of vinylite injection followed by corrosion, besides having low cost, provides a long period of conservation, meeting the needs of undergraduate students in the study of anatomy.
The technique of filling with vinylite is considered an angiotechnique, that is, a method for the study of blood vessels. It is used to mark the circulatory system (arterial and venous) by filling the vessels of the part to be studied with pre-pigmented vinyl acetate, so that the ducts and filled systems can be visualized. For corrosion or semi-corrosion, hydrochloric acid is the most viable substance used to obtain casts of the vascularization of organs or parts.
Gallery
References
Anatomy
Anatomical preservation | Injection of vinylite and corrosion | Biology | 214 |
15,068,921 | https://en.wikipedia.org/wiki/Ktetor | Ktetor or ktitor, meaning 'founder', is a title given in the Middle Ages to the provider of funds for the construction or reconstruction of an Eastern Orthodox church or monastery, or for the addition of icons, frescos, and other works of art. It was used in the Byzantine sphere. A Catholic equivalent of the term is donator. At the time of founding, the ktetor often issued typika, and was illustrated on frescoes ("ktetor portrait"). The female form is ktetorissa or ktitoritsa.
Sources
History of Eastern Orthodoxy
Philanthropy
Culture of the Byzantine Empire
Greek words and phrases | Ktetor | Biology | 138 |
57,204,113 | https://en.wikipedia.org/wiki/NGC%203794 | NGC 3794, also cataloged in the New General Catalogue as NGC 3804, is a low-surface-brightness galaxy in the constellation Ursa Major. It is very far from Earth, with a distance of about . It was discovered on April 14, 1789, by the astronomer William Herschel.
References
External links
Intermediate spiral galaxies
Ursa Major
Low surface brightness galaxies
3794
036238 | NGC 3794 | Astronomy | 84 |
10,498,281 | https://en.wikipedia.org/wiki/NGC%205822 | NGC 5822 is an open cluster of stars in the southern constellation of Lupus. It was discovered by English Astronomer John Herschel on July 3, 1836, and lies close to another cluster, NGC 5823, which suggests there may be a physical association.
NGC 5822 is an intermediate age cluster, estimated at around 900 million years old, and it is located nearby at a distance of 2,700 light years. The Trumpler class of this cluster is III 2m. It is richly populated with half the cluster members lying within an angular radius of . The cluster is considered low mass at ~1,700 times the mass of the Sun. It has a core radius of and a limiting radius of .
Measuring the abundances of a set of F-type stars that are probable members demonstrates that the cluster metallicity is very similar to that of the Sun. It displays an extended main-sequence turnoff on the Hertzsprung–Russell diagram, most likely due to differences in stellar rotation. Two barium stars have been identified in NGC 5822, making it only the second cluster shown to host these objects as of 2013.
Gallery
References
External links
Open clusters
Lupus (constellation)
5822 | NGC 5822 | Astronomy | 240 |
24,651,576 | https://en.wikipedia.org/wiki/Event%20%28relativity%29 | In relativity, an event is anything that happens that has a specific time and place in spacetime. For example, a glass breaking on the floor is an event; it occurs at a unique place and a unique time. Strictly speaking, the notion of an event is an idealization, in the sense that it specifies a definite time and place, whereas any actual event is bound to have a finite extent, both in time and in space.
An event in the universe is caused by the set of events in its causal past. An event contributes to the occurrence of events in its causal future.
Upon choosing a frame of reference, one can assign coordinates to the event: three spatial coordinates to describe the location and one time coordinate to specify the moment at which the event occurs. These four coordinates together form a four-vector associated with the event.
One of the goals of relativity is to specify the possibility of one event influencing another. This is done by means of the metric tensor, which allows for determining the causal structure of spacetime. The difference (or interval) between two events can be classified into spacelike, lightlike and timelike separations. Only if two events are separated by a lightlike or timelike interval can one influence the other.
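With the common (−, +, +, +) sign convention of special relativity, this classification can be stated explicitly. The following formula is standard and not specific to any one source:

```latex
% Spacetime interval between two events in flat spacetime (sign convention -+++):
\Delta s^2 = -c^2\,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2
% \Delta s^2 < 0 : timelike separation  (causal influence possible)
% \Delta s^2 = 0 : lightlike (null) separation (connected by a light signal)
% \Delta s^2 > 0 : spacelike separation (no causal influence possible)
```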
Uncertainty principle
The idealization of an event as a point in spacetime of arbitrarily precise location and time breaks down when the uncertainty principle is taken into account, since the principle forbids measurements of arbitrary precision. This has practical consequences, for example, near a black hole. Everywhere in spacetime, virtual (i.e., not directly measurable) particle-antiparticle pairs spontaneously appear and then disappear, as allowed by the uncertainty principle. Directly outside a black hole's event horizon, one member of such a pair (either the particle or the antiparticle) may fall into the black hole, leaving the other to escape into spacetime; this is the source of Hawking radiation.
P. W. Bridgman found the event concept insufficient for operational physics in his book The Logic of Modern Physics.
See also
Relativity of simultaneity
References
zh-yue:事件 (相對論)
Theory of relativity | Event (relativity) | Physics,Mathematics | 474 |
47,182,594 | https://en.wikipedia.org/wiki/Billion | Billion is a word for a large number, and it has two distinct definitions:
1,000,000,000, i.e. one thousand million, or 10⁹ (ten to the ninth power), as defined on the short scale. This is now the most common sense of the word in all varieties of English; it has long been established in American English and has since become common in Britain and other English-speaking countries as well.
1,000,000,000,000, i.e. one million million, or 10¹² (ten to the twelfth power), as defined on the long scale. This number is the historical sense of the word and remains the established sense of the word in other European languages. Though displaced by the short scale definition relatively early in US English, it remained the most common sense of the word in Britain until the 1950s and still remains in occasional use there.
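The arithmetic relation between the two scales can be made concrete. The following Python sketch (function names are illustrative, not from any source) computes the value of the n-th name after "million" on each scale:

```python
# Short scale: each new name (billion, trillion, ...) is 1,000 times the last.
# Long scale:  each new name is 1,000,000 times the last.

def short_scale(n: int) -> int:
    """Value of the n-th name after 'million' (billion = 1, trillion = 2, ...)."""
    return 10 ** (6 + 3 * n)

def long_scale(n: int) -> int:
    return 10 ** (6 + 6 * n)

assert short_scale(1) == 10 ** 9    # short-scale billion: one thousand million
assert long_scale(1) == 10 ** 12    # long-scale billion: one million million
assert short_scale(2) == 10 ** 12   # short-scale trillion
assert long_scale(2) == 10 ** 18    # long-scale trillion
```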
American English adopted the short scale definition from the French (it enjoyed usage in France at the time, alongside the long-scale definition). The United Kingdom used the long scale billion until 1974, when the government officially switched to the short scale, but since the 1950s the short scale had already been increasingly used in technical writing and journalism. Moreover, even in 1941, Churchill remarked "For all practical financial purposes a billion represents one thousand millions...".
Other countries use the word billion (or words cognate to it) to denote either the long scale or short scale billion.
Milliard, another term for one thousand million, is extremely rare in English, but words similar to it are very common in other European languages. For example, Bulgarian, Catalan, Czech, Danish, Dutch, Finnish, French, Georgian, German, Hebrew (Asia), Hungarian, Italian, Kazakh, Kyrgyz, Kurdish, Lithuanian, Luxembourgish, Norwegian, Persian, Polish, Portuguese (although the expression mil milhões — a thousand million — is far more common), Romanian, Russian, Serbo-Croatian, Slovak, Slovene, Spanish (although the expression mil millones — a thousand million — is far more common), Swedish, Tajik, Turkish, Ukrainian and Uzbek — use milliard, or a related word, for the short scale billion, and billion (or a related word) for the long scale billion. Thus for these languages billion is a thousand times as large as the modern English billion.
History
According to the Oxford English Dictionary, the word billion was formed in the 16th century (from million and the prefix bi-, "two"), meaning the second power of a million (1,000,000² = 10¹²). This long scale definition was similarly applied to trillion, quadrillion and so on. The words were originally Latin, and entered English around the end of the 17th century. Later, French arithmeticians changed the words' meanings, adopting the short scale definition whereby three zeros rather than six were added at each step, so a billion came to denote a thousand million (10⁹), a trillion became a million million (10¹²), and so on. This new convention was adopted in the United States in the 19th century, but Britain retained the original long scale use. France, in turn, reverted to the long scale in 1948.
In Britain, however, under the influence of American usage, the short scale came to be increasingly used. In 1974, Prime Minister Harold Wilson confirmed that the government would use the word billion only in its short scale meaning (one thousand million). In a written answer to Robin Maxwell-Hyslop MP, who asked whether official usage would conform to the traditional British meaning of a million million, Wilson stated: "No. The word 'billion' is now used internationally to mean 1,000 million and it would be confusing if British Ministers were to use it in any other sense. I accept that it could still be interpreted in this country as 1 million million and I shall ask my colleagues to ensure that, if they do use it, there should be no ambiguity as to its meaning."
See also
Names of large numbers
References
Large numbers | Billion | Mathematics | 828 |
22,409 | https://en.wikipedia.org/wiki/OS/2 | OS/2 is a proprietary computer operating system for x86 and PowerPC based personal computers. It was created and initially developed jointly by IBM and Microsoft, under the leadership of IBM software designer Ed Iacobucci, intended as a replacement for DOS. The first version was released in 1987. A feud between the two companies beginning in 1990 led to Microsoft's leaving development solely to IBM, which continued development on its own. OS/2 Warp 4 in 1996 was the last major upgrade, after which IBM gradually wound the product down as it failed to compete against Microsoft's Windows; updated versions of OS/2 were released by IBM until 2001.
The name stands for "Operating System/2", because it was introduced as part of the same generation change release as IBM's "Personal System/2 (PS/2)" line of second-generation PCs. OS/2 was intended as a protected-mode successor of PC DOS targeting the Intel 80286 processor. Notably, basic system calls were modelled after MS-DOS calls; their names even started with "Dos" and it was possible to create "Family Mode" applications – text mode applications that could work on both systems. Because of this heritage, OS/2 shares similarities with Unix, Xenix, and Windows NT. OS/2 sales were largely concentrated in networked computing used by corporate professionals.
OS/2 2.0 was released in 1992 as the first 32-bit version as well as the first to be entirely developed by IBM, after Microsoft severed ties over a dispute over how to position OS/2 relative to Microsoft's new Windows 3.1 operating environment. With OS/2 Warp 3 in 1994, IBM attempted to also target home consumers through a multi-million dollar advertising campaign. However it continued to struggle in the marketplace, partly due to strategic business measures imposed by Microsoft in the industry that have been considered anti-competitive. Following the failure of IBM's Workplace OS project, OS/2 Warp 4 became the final major release in 1996; IBM discontinued its support for OS/2 on December 31, 2006. Since then, OS/2 has been developed, supported and sold by two different third-party vendors under license from IBM – first by Serenity Systems as eComStation from 2001 to 2011, and later by Arca Noae LLC as ArcaOS since 2017.
Development
1985–1990: Joint IBM–Microsoft development
The development of OS/2 began when IBM and Microsoft signed the "Joint Development Agreement" in August 1985. It was code-named "CP/DOS" and it took two years for the first product to be delivered.
OS/2 1.0 (1987)
OS/2 1.0 was announced in April 1987 and released in December. The original release only ran in text mode, and a GUI was introduced with OS/2 1.1 about a year later. OS/2 features an API for controlling the video display (VIO) and handling keyboard and mouse events so that programmers writing for protected mode need not call the BIOS or access hardware directly. Other development tools included a subset of the video and keyboard APIs as linkable libraries so that family mode programs are able to run under MS-DOS, and, in the OS/2 Extended Edition v1.0, a database engine called Database Manager or DBM (this was related to DB2, and should not be confused with the DBM family of database engines for Unix and Unix-like operating systems). A task-switcher named Program Selector was available through the Ctrl-Esc hotkey combination, allowing the user to select among multitasked text-mode sessions (or screen groups; each can run multiple programs).
Communications and database-oriented extensions were delivered in 1988, as part of OS/2 1.0 Extended Edition: SNA, X.25/APPC/LU 6.2, LAN Manager, Query Manager, SQL.
OS/2 1.1 (1988)
The promised user interface, Presentation Manager, was introduced with OS/2 1.1 in October 1988. It had a similar user interface to Windows 2.1, which was released in May of that year. (The interface was replaced in versions 1.2 and 1.3 by a look closer in appearance to Windows 3.0.)
The Extended Edition of 1.1, sold only through IBM sales channels, introduced distributed database support to IBM database systems and SNA communications support to IBM mainframe networks.
OS/2 1.2 (1989)
In 1989, Version 1.2 introduced Installable Filesystems and, notably, the HPFS filesystem. HPFS provided a number of improvements over the older FAT file system, including long filenames and a form of alternate data streams called Extended Attributes. In addition, extended attributes were also added to the FAT file system.
The Extended Edition of 1.2 introduced TCP/IP and Ethernet support.
OS/2- and Windows-related books of the late 1980s acknowledged the existence of both systems and promoted OS/2 as the system of the future.
1990: Breakup
OS/2 1.3 (1990)
The collaboration between IBM and Microsoft unravelled in 1990, between the releases of Windows 3.0 and OS/2 1.3. During this time, Windows 3.0 became a tremendous success, selling millions of copies in its first year. Much of its success was because Windows 3.0 (along with MS-DOS) was bundled with most new computers. OS/2, on the other hand, was available only as an additional stand-alone software package. In addition, OS/2 lacked device drivers for many common devices such as printers, particularly non-IBM hardware. Windows, on the other hand, supported a much larger variety of hardware. The increasing popularity of Windows prompted Microsoft to shift its development focus from cooperating on OS/2 with IBM to building its own business based on Windows.
Several technical and practical reasons contributed to this breakup. The two companies had significant differences in culture and vision. Microsoft favored the open hardware system approach that contributed to its success on the PC. IBM sought to use OS/2 to drive sales of its own hardware, and urged Microsoft to drop features, such as fonts, that IBM's hardware did not support. Microsoft programmers also became frustrated with IBM's bureaucracy and its use of lines of code to measure programmer productivity. IBM developers complained about the terseness and lack of comments in Microsoft's code, while Microsoft developers complained that IBM's code was bloated.
The two products have significant differences in API. OS/2 was announced when Windows 2.0 was near completion, and the Windows API already defined. However, IBM requested that this API be significantly changed for OS/2. Therefore, issues surrounding application compatibility appeared immediately. OS/2 designers hoped for source code conversion tools, allowing complete migration of Windows application source code to OS/2 at some point. However, OS/2 1.x did not gain enough momentum to allow vendors to avoid developing for both OS/2 and Windows in parallel.
OS/2 1.x targets the Intel 80286 processor and DOS fundamentally does not. IBM insisted on supporting the 80286 processor, with its 16-bit segmented memory mode, because of commitments made to customers who had purchased many 80286-based PS/2s as a result of IBM's promises surrounding OS/2. Until release 2.0 in April 1992, OS/2 ran in 16-bit protected mode and therefore could not benefit from the Intel 80386's much simpler 32-bit flat memory model and virtual 8086 mode features. This was especially painful in providing support for DOS applications. While, in 1988, Windows/386 2.1 could run several cooperatively multitasked DOS applications, including expanded memory (EMS) emulation, OS/2 1.3, released in 1991, was still limited to one "DOS box".
Given these issues, Microsoft started to work in parallel on a version of Windows which was more future-oriented and more portable. The hiring of Dave Cutler, former VAX/VMS architect, in 1988 created an immediate competition with the OS/2 team, as Cutler did not think much of the OS/2 technology and wanted to build on his work on the MICA project at Digital rather than creating a "DOS plus". His NT OS/2 was a completely new architecture.
IBM grew concerned about the delays in development of OS/2 2.0. Initially, the companies agreed that IBM would take over maintenance of OS/2 1.0 and development of OS/2 2.0, while Microsoft would continue development of OS/2 3.0. In the end, Microsoft decided to recast NT OS/2 3.0 as Windows NT, leaving all future OS/2 development to IBM. From a business perspective, it was logical to concentrate on a consumer line of operating systems based on DOS and Windows, and to prepare a new high-end system in such a way as to keep good compatibility with existing Windows applications. While it waited for this new high-end system to develop, Microsoft would still receive licensing money from Xenix and OS/2 sales. Windows NT's OS/2 heritage can be seen in its initial support for the HPFS filesystem, text mode OS/2 1.x applications, and OS/2 LAN Manager network support. Some early NT materials even included OS/2 copyright notices embedded in the software.
One example of NT OS/2 1.x support is in the WIN2K resource kit. Windows NT could also support OS/2 1.x Presentation Manager and AVIO applications with the addition of the Windows NT Add-On Subsystem for Presentation Manager.
1990–1996: Post-breakup
OS/2 2.0 and DOS compatibility (1992)
OS/2 2.0 was released in April 1992. At the time, the suggested retail price was , while Windows retailed for .
OS/2 2.0 provided a 32-bit API for native programs, though the OS itself still contained some 16-bit code and drivers. It also included a new OOUI (object-oriented user interface) called the Workplace Shell. This was a fully object-oriented interface that was a significant departure from the previous GUI. Rather than merely providing an environment for program windows (such as the Program Manager), the Workplace Shell provided an environment in which the user could manage programs, files and devices by manipulating objects on the screen. With the Workplace Shell, everything in the system is an "object" to be manipulated.
OS/2 2.0 was touted by IBM as "a better DOS than DOS and a better Windows than Windows". It managed this by including the fully-licensed MS-DOS 5.0, which had been patched and improved upon. For the first time, OS/2 was able to run more than one DOS application at a time. This was so effective that it allowed OS/2 to run a modified copy of Windows 3.0, itself a DOS extender, including Windows 3.0 applications.
Because of the limitations of the Intel 80286 processor, OS/2 1.x could run only one DOS program at a time, and did this in a way that allowed the DOS program to have total control over the computer. A problem in DOS mode could crash the entire computer. In contrast, OS/2 2.0 could leverage the virtual 8086 mode of the Intel 80386 processor to create a much safer virtual machine in which to run DOS programs. This included an extensive set of configuration options to optimize the performance and capabilities given to each DOS program. Any real-mode operating system (such as 8086 Xenix) could also be made to run using OS/2's virtual machine capabilities, subject to certain direct hardware access limitations.
Like most 32-bit environments, OS/2 could not run protected-mode DOS programs using the older VCPI interface, unlike the Standard mode of Windows 3.1; it only supported programs written according to DPMI. (Microsoft discouraged the use of VCPI under Windows 3.1, however, due to performance degradation.)
Unlike Windows NT, OS/2 always allowed DOS programs the possibility of masking real hardware interrupts, so any DOS program could deadlock the machine in this way. OS/2 could, however, use a hardware watchdog on selected machines (notably IBM machines) to break out of such a deadlock. Later, release 3.0 leveraged the enhancements of newer Intel 80486 and Intel Pentium processors—the Virtual Interrupt Flag (VIF), which was part of the Virtual Mode Extensions (VME)—to solve this problem.
OS/2 2.1 and Windows compatibility (1993)
OS/2 2.1 was released in 1993. This version of OS/2 achieved compatibility with Windows 3.0 (and later Windows 3.1) by adapting Windows user-mode code components to run inside a virtual DOS machine (VDM). Originally, a nearly complete version of Windows code was included with OS/2 itself: Windows 3.0 in OS/2 2.0, and Windows 3.1 in OS/2 2.1. Later, IBM developed versions of OS/2 that would use whatever Windows version the user had installed previously, patching it on the fly, and sparing the cost of an additional Windows license. It could either run full-screen, using its own set of video drivers, or "seamlessly," where Windows programs would appear directly on the OS/2 desktop. The process containing Windows was given fairly extensive access to hardware, especially video, and the result was that switching between a full-screen WinOS/2 session and the Workplace Shell could occasionally cause issues.
Because OS/2 only runs the user-mode system components of Windows, it is incompatible with Windows device drivers (VxDs) and applications that require them.
Multiple Windows applications run by default in a single Windows session – multitasking cooperatively and without memory protection – just as they would under native Windows 3.x. However, to achieve true isolation between Windows 3.x programs, OS/2 can also run multiple copies of Windows in parallel, with each copy residing in a separate VDM. The user can then optionally place each program either in its own Windows session – with preemptive multitasking and full memory protection between sessions, though not within them – or allow some applications to run together cooperatively in a shared Windows session while isolating other applications in one or more separate Windows sessions. At the cost of additional hardware resources, this approach can protect each program in any given Windows session (and each instance of Windows itself) from every other program running in any separate Windows session (though not from other programs running in the same Windows session).
Whether Windows applications are running in full-screen or windowed mode, and in one Windows session or several, it is possible to use DDE between OS/2 and Windows applications, and OLE between Windows applications only.
IBM's OS/2 for Windows product (codename Ferengi), also known as "OS/2, Special Edition", was interpreted as a deliberate strategy "of cashing in on the pervasive success of the Microsoft platform" but risked confusing consumers with the notion that the product was a mere accessory or utility running on Windows such as Norton Desktop for Windows when, in fact, it was "a complete, modern, multi-tasking, pre-emptive operating system", itself hosting Windows instead of running on it. Available on CD-ROM or 18 floppy disks, the product documentation reportedly suggested Windows as a prerequisite for installing the product, also being confined to its original FAT partition, whereas the product apparently supported the later installation of Windows running from an HPFS partition, particularly beneficial for users of larger hard drives. Windows compatibility, relying on patching specific memory locations, was reportedly broken by the release of Windows 3.11, prompting accusations of arbitrary changes to Windows in order to perpetrate "a deliberate act of Microsoft sabotage" against IBM's product.
OS/2 Warp 3 (1994)
Released in 1994, OS/2 version 3.0 was labelled as OS/2 Warp to highlight the new performance benefits, and generally to freshen the product image. "Warp" had originally been the internal IBM name for the release: IBM claimed that it had used Star Trek terms as internal names for prior OS/2 releases, and that this one seemed appropriate for external use as well. At the launch of OS/2 Warp in 1994, Patrick Stewart was to be the Master of Ceremonies; however Kate Mulgrew of the then-upcoming series Star Trek: Voyager substituted for him at the last minute.
OS/2 Warp offers a host of benefits over OS/2 2.1, notably broader hardware support, greater multimedia capabilities, Internet-compatible networking, and it includes a basic office application suite known as IBM Works. It was released in two versions: the less expensive "Red Spine" and the more expensive "Blue Spine" (named for the color of their boxes). "Red Spine" was designed to support Microsoft Windows applications by utilizing any existing installation of Windows on the computer's hard drive. "Blue Spine" includes Windows support in its own installation, and so can support Windows applications without a Windows installation. As most computers were sold with Microsoft Windows pre-installed and the price was less, "Red Spine" was the more popular product. OS/2 Warp Connect—which has full LAN client support built-in—followed in mid-1995. Warp Connect was nicknamed "Grape".
In OS/2 2.0, most performance-sensitive subsystems, including the graphics (Gre) and multimedia (MMPM/2) systems, were updated to 32-bit code in a fixpack, and included as part of OS/2 2.1. Warp 3 brought about a fully 32-bit windowing system, while Warp 4 introduced the object-oriented 32-bit GRADD display driver model.
Workplace OS (1995)
In 1991, IBM started development on an intended replacement for OS/2 called Workplace OS. This was an entirely new product, brand new code, that borrowed only a few sections of code from both the existing OS/2 and AIX products. It used an entirely new microkernel code base, intended (eventually) to host several of IBM's operating systems (including OS/2) as microkernel "personalities". It also included major new architectural features including a system registry, JFS, support for UNIX graphics libraries, and a new driver model.
Workplace OS was developed solely for POWER platforms, and IBM intended to market a full line of PowerPCs in an effort to take over the market from Intel. A mission was formed to create prototypes of these machines and they were disclosed to several corporate customers, all of whom raised issues with the idea of dropping Intel.
Advanced plans for the new code base would eventually include replacement of the OS/400 operating system by Workplace OS, as well as a microkernel product that would have been used in industries such as telecommunications and set-top television receivers.
A partially functional pre-alpha version of Workplace OS was demonstrated at Comdex, where a bemused Bill Gates stopped by the booth. The second and last time it would be shown in public was at an OS/2 user group in Phoenix, Arizona; the pre-alpha code refused to boot.
Workplace OS was released in 1995. But with $990 million being spent per year on development of OS/2 as well as Workplace OS, and with no prospect of profit or widespread adoption, the end of the entire Workplace OS and OS/2 product line was near.
OS/2 Warp 4 (1996)
In 1996, Warp 4 added Java and speech recognition software. IBM also released server editions of Warp 3 and Warp 4 which bundled IBM's LAN Server product directly into the operating system installation. A personal version of Lotus Notes was also included, with a number of template databases for contact management, brainstorming, and so forth. The UK-distributed free demo CD-ROM of OS/2 Warp essentially contained the entire OS and was easily, even accidentally, cracked, meaning that even people who liked it did not have to buy it. This was seen as a backdoor tactic to increase the number of OS/2 users, in the belief that this would increase sales and demand for third-party applications, and thus strengthen OS/2's desktop numbers. This suggestion was bolstered by the fact that this demo version had replaced another which was not so easily cracked, but which had been released with trial versions of various applications. In 2000, the July edition of Australian Personal Computer magazine bundled a software CD-ROM that included a full version of Warp 4 requiring no activation; it was essentially a free release. Special versions of OS/2 2.11 and Warp 4 also included symmetric multiprocessing (SMP) support.
OS/2 sales were largely concentrated in networked computing used by corporate professionals; however, by the early 1990s, it was overtaken by Microsoft Windows NT. While OS/2 was arguably technically superior to Microsoft Windows 95, OS/2 failed to develop much penetration in the consumer and stand-alone desktop PC segments; there were reports that it could not be installed properly on IBM's own Aptiva series of home PCs. Microsoft made an offer in 1994 where IBM would receive the same terms as Compaq (the largest PC manufacturer at the time) for a license of Windows 95, if IBM ended development of OS/2 completely. IBM refused and instead went with an "IBM First" strategy of promoting OS/2 Warp and disparaging Windows, as IBM aimed to drive sales of its own software as well as hardware. By 1995, Windows 95 negotiations between IBM and Microsoft, which were already difficult, stalled when IBM purchased Lotus SmartSuite, which would have directly competed with Microsoft Office. As a result of the dispute, IBM signed the license agreement 15 minutes before Microsoft's Windows 95 launch event, which was later than their competitors and this badly hurt sales of IBM PCs. IBM officials later conceded that OS/2 would not have been a viable operating system to keep them in the PC business.
1996–2001: Downsizing
A project was launched internally by IBM to evaluate the looming competitive situation with Microsoft Windows 95. Primary concerns included the major code quality issues in the existing OS/2 product (resulting in over 20 service packs, each requiring more diskettes than the original installation), and the ineffective and heavily matrixed development organization in Boca Raton (where the consultants reported that "basically, everybody reports to everybody") and Austin.
That study, tightly classified as "Registered Confidential" and printed only in numbered copies, identified untenable weaknesses and failures across the board in the Personal Systems Division as well as across IBM as a whole. This resulted in a decision being made at a level above the Division to cut over 95% of the overall budget for the entire product line, end all new development (including Workplace OS), eliminate the Boca Raton development lab, end all sales and marketing efforts of the product, and lay off over 1,300 development individuals (as well as sales and support personnel). $990 million had been spent in the last full year. Warp 4 became the last distributed version of OS/2.
2001–2006: Discontinuation and end-of-life
Although a small and dedicated community remains faithful to OS/2, OS/2 failed to catch on in the mass market and is little used outside certain niches where IBM traditionally had a stronghold. For example, many bank installations, especially automated teller machines, run OS/2 with a customized user interface; French SNCF national railways used OS/2 1.x in thousands of ticket selling machines. Telecom companies such as Nortel used OS/2 in some voicemail systems. Also, OS/2 was used for the host PC used to control the Satellite Operations Support System equipment installed at NPR member stations from 1994 to 2007, and used to receive the network's programming via satellite.
Although IBM began indicating shortly after the release of Warp 4 that OS/2 would eventually be withdrawn, the company did not end support until December 31, 2006, with sales of OS/2 stopping on December 23, 2005. The latest IBM OS/2 Warp version is 4.52, which was released for both desktop and server systems in December 2001.
IBM is still delivering defect support for a fee. IBM urges customers to migrate their often highly complex applications to e-business technologies such as Java in a platform-neutral manner. Once application migration is completed, IBM recommends migration to a different operating system, suggesting Linux as an alternative.
2001–present: Third-party development
After IBM discontinued development of OS/2, various third parties approached IBM to take over future development of the operating system. The OS/2 software vendor Stardock made such a proposal to IBM in 1999, but it was not followed through by the company. Serenity Systems succeeded in negotiating an agreement with IBM, and began reselling OS/2 as eComStation in 2001. eComStation is now sold by XEU.com, the most recent version (2.1) was released in 2011. In 2015, Arca Noae, LLC announced that they had secured an agreement with IBM to resell OS/2. They released the first version of their OS/2-based operating system in 2017 as ArcaOS. As of 2023, there have been multiple releases of ArcaOS, and it remains under active development.
Petitions for open source
Many people hoped that IBM would release OS/2 or a significant part of it as open source. Petitions were held in 2005 and 2007, but IBM refused them, citing legal and technical reasons. It is unlikely that the entire OS will be open at any point in the future because it contains third-party code to which IBM does not have copyright, and much of this code is from Microsoft. IBM also once engaged in a technology transfer with Commodore, licensing Amiga technology for OS/2 2.0 and above, in exchange for the REXX scripting language. This means that OS/2 may have some code that was not written by IBM, which may therefore prevent the OS from being released as open source in the future. On the other hand, IBM donated Object REXX for Windows and OS/2 to the Open Object REXX project maintained by the REXX Language Association on SourceForge.
There was a petition, arranged by OS2World, to open parts of the OS. Open source operating systems such as Linux have already profited from OS/2 indirectly through IBM's release of the improved JFS file system, which was ported from the OS/2 code base. As IBM didn't release the source of the OS/2 JFS driver, developers ported the Linux driver back to eComStation and added the functionality to boot from a JFS partition. This new JFS driver has been integrated into eComStation v2.0, and later into ArcaOS 5.0.
Summary of releases
Release dates refer to the US English editions unless otherwise noted.
Features and technology
User interface
The graphic system has a layer named Presentation Manager that manages windows, fonts, and icons. This is similar in functionality to a non-networked version of X11 or the Windows GDI. On top of this lies the Workplace Shell (WPS) introduced in OS/2 2.0. WPS is an object-oriented shell allowing the user to perform traditional computing tasks such as accessing files, printers, launching legacy programs, and advanced object oriented tasks using built-in and third-party application objects that extended the shell in an integrated fashion not available on any other mainstream operating system. WPS follows IBM's Common User Access user interface standards.
WPS represents objects such as disks, folders, files, program objects, and printers using the System Object Model (SOM), which allows code to be shared among applications, possibly written in different programming languages. A distributed version called DSOM allowed objects on different computers to communicate. DSOM is based on CORBA. The object oriented aspect of SOM is similar to, and a direct competitor to, Microsoft's Component Object Model, though it is implemented in a radically different manner; for instance, one of the most notable differences between SOM and COM is SOM's support for inheritance (one of the most fundamental concepts of OO programming)—COM does not have such support. SOM and DSOM are no longer being developed.
The multimedia capabilities of OS/2 are accessible through Media Control Interface commands.
The last update (bundled with the IBM version of Netscape Navigator plugins) added support for MPEG files. Support for newer formats such as PNG, progressive JPEG, DivX, Ogg, and MP3 comes from third parties. Sometimes it is integrated with the multimedia system, but in other cases it comes as standalone applications.
Commands
The following list of commands is supported by cmd.exe on OS/2.
ansi
append
assign
attrib
backup
boot
break
cache
call
cd
chcp
chdir
chkdsk
cls
cmd
codepage
command
comp
copy
createdd
date
ddinstal
debug
del
detach
dir
diskcomp
diskcopy
doskey
dpath
eautil
echo
endlocal
erase
exit
extproc
fdisk
fdiskpm
find
for
format
fsaccess
goto
graftabl
help
if
join
keyb
keys
label
makeini
md
mem
mkdir
mode
more
move
patch
path
pause
picview
pmrexx
print
prompt
pstat
rd
recover
rem
ren
rename
replace
restore
rmdir
set
setboot
setcom40
setlocal
share
shift
sort
spool
start
subst
syslevel
syslog
time
trace
tracebuf
tracefmt
tree
type
undelete
unpack
ver
verify
view
vmdisk
vol
xcopy
Networking
The TCP/IP stack is based on the open-source BSD stack, as can be seen with tools compatible with the SCCS what command. IBM included tools such as ftp and telnet, and even servers for both commands. IBM sold several networking extensions, including NFS support and an X11 server.
Drivers
Hardware vendors were reluctant to support device drivers for alternative operating systems including OS/2, leaving users with few choices from a select few vendors. To relieve this issue for video cards, IBM licensed a reduced version of the Scitech display drivers, allowing users to choose from a wide selection of cards supported through Scitech's modular driver design.
Virtualization
OS/2 has historically been more difficult to run in a virtual machine than most other legacy x86 operating systems because of its extensive reliance on the full set of features of the x86 CPU; in particular, OS/2's use of ring 2 prevented it from running in early versions of VMware. Newer versions of VMware provide official support for OS/2, specifically for eComStation.
VirtualPC from Microsoft (originally Connectix) has been able to run OS/2 without hardware virtualization support for many years. It also provided "additions" code which greatly improves host–guest OS interactions in OS/2. The additions are not provided with the current version of VirtualPC, but those last included with an earlier release can still be used with current releases. At one point, OS/2 was a supported host for VirtualPC in addition to a guest. Note that OS/2 runs only as a guest on those versions of VirtualPC that use virtualization (x86-based hosts) and not those doing full emulation (VirtualPC for Mac).
VirtualBox from Oracle Corporation (originally InnoTek, later Sun) supports OS/2 1.x, Warp 3 through 4.5, and eComStation, as well as "Other OS/2", as guests. However, attempting to run OS/2 and eComStation can still be difficult, if not impossible, because of the strict requirement for VT-x/AMD-V hardware virtualization, and only ACP2/MCP2 is reported to work reliably.
ArcaOS supports being run as a virtual machine guest inside VirtualBox, VMware ESXi and VMWare Workstation. It ships with VirtualBox Guest Additions, and driver improvements to improve performance as a guest operating system.
The difficulties in efficiently running OS/2 have, at least once, created an opportunity for a new virtualization company. A large bank in Moscow needed a way to use OS/2 on newer hardware that OS/2 did not support. As virtualization software is an easy way around this, the bank wanted to run OS/2 under a hypervisor. Once it was determined that VMware was not a possibility, the bank hired a group of Russian software developers to write a host-based hypervisor that would officially support OS/2. Thus the company Parallels, Inc. and its Parallels Workstation product were born.
Security niche
OS/2 has few native computer viruses; while it is not invulnerable by design, its reduced market share appears to have discouraged virus writers. There are, however, OS/2-based antivirus programs, dealing with DOS viruses and Windows viruses that could pass through an OS/2 server.
Problems
Some problems were classic subjects of comparison with other operating systems:
Synchronous input queue (SIQ): if a GUI application was not servicing its window messages, the entire GUI system could get stuck and a reboot was required. This problem was considerably reduced with later Warp 3 fixpacks and further refined in Warp 4, which took control away from an application once it had failed to respond for several seconds.
No unified object handles (OS/2 v2.11 and earlier): the availability of threads probably led system designers to overlook mechanisms that would allow a single thread to wait for different types of asynchronous events at the same time, for example the keyboard and the mouse in a "console" program. Even though select was added later, it only worked on network sockets. In the case of a console program, dedicating a separate thread to waiting on each source of events made it difficult to properly release all the input devices before starting other programs in the same "session". As a result, console programs usually polled the keyboard and the mouse alternately (as in the sketch after this list), which resulted in wasted CPU time and a characteristically "jerky" responsiveness to user input. In OS/2 3.0 IBM introduced a new call for this specific problem.
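The alternating-poll pattern described above can be sketched in a few lines. The following Python sketch (all function names are hypothetical stand-ins, since the original programs were native OS/2 code) shows why the pattern forces a trade-off between wasted CPU time and input latency:

```python
import time

def poll_keyboard():
    """Hypothetical non-blocking keyboard check; returns an event or None."""
    return None

def poll_mouse():
    """Hypothetical non-blocking mouse check; returns an event or None."""
    return None

def handle(event):
    """Hypothetical application dispatch."""
    print(event)

# With no way to block on several event sources at once, the program
# alternates non-blocking polls. A short sleep bounds the CPU waste but
# adds latency -- the characteristic "jerky" responsiveness noted above.
def event_loop():
    while True:
        event = poll_keyboard() or poll_mouse()
        if event is not None:
            handle(event)
        else:
            time.sleep(0.05)  # trade CPU usage against responsiveness
```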
Historical uses
OS/2 was widely used by Iran Export Bank (Bank Saderat Iran) in its teller machines, ATMs, and local servers (over 35,000 workstations). In 2011, the bank moved to virtualize and renew its infrastructure by moving OS/2 to virtual machines running over Windows.
OS/2 was widely used by Brazilian banks. Banco do Brasil had a peak of 10,000 machines running OS/2 Warp in the 1990s. OS/2 was used in automated teller machines until 2006. The workstations, automated teller machines, and attendant computers have since been migrated to Linux.
OS/2 has been used in the banking industry. Suncorp bank in Australia still ran its ATM network on OS/2 as late as 2002. ATMs at Perisher Blue used OS/2 as late as 2009, and even into the turn of the decade.
OS/2 was widely adopted by accounting professionals and auditing companies. By the mid-1990s, native 32-bit accounting software was well developed and serving corporate markets.
OS/2 ran the faulty baggage handling system at Denver International Airport. The software written for the system led to massive delays in the opening of the new airport; the OS itself was not at fault, but the application software running on it was. The baggage handling system was eventually scrapped.
OS/2 was used by radio personality Howard Stern. He once had a 10-minute on-air rant about OS/2 versus Windows 95 and recommended OS/2. He also used OS/2 on his IBM 760CD laptop.
OS/2 was used as part of the Satellite Operations Support System (SOSS) for NPR's Public Radio Satellite System. SOSS was a computer-controlled system using OS/2 that NPR member stations used to receive programming feeds via satellite. SOSS was introduced in 1994 using OS/2 3.0, and was retired in 2007, when NPR switched over to its successor, the ContentDepot.
OS/2 was used to control the SkyTrain automated light rail system in Vancouver, Canada until the late 2000s when it was replaced by Windows XP.
OS/2 was used in the London Underground Jubilee Line Extension Signals Control System (JLESCS) in London, England. This control system, delivered by Alcatel, was in use from 1999 to 2011, that is, between the abandonment (before the line opened) of the line's unimplemented original automatic train control system and the introduction of the present SelTrac system. JLESCS did not provide automatic train operation, only manual train supervision. Six OS/2 local site computers were distributed along the railway between Stratford and Westminster and at the shunting tower at Stratford Market Depot, and several more formed the central equipment located at Neasden Depot. The system was once intended to cover the rest of the line between Green Park and Stanmore, but this was never introduced.
OS/2 has been used by The Co-operative Bank in the UK for its domestic call centre staff, using a bespoke program created to access customer accounts which cannot easily be migrated to Windows.
OS/2 has been used by the Stop & Shop supermarket chain (and has been installed in new stores as recently as March 2010).
OS/2 has been used on ticket machines for Tramlink in outer London.
OS/2 has been used in New York City's subway system for MetroCards. Rather than interfacing with the user, it connects simple computers with the mainframes. When the NYC MTA finishes its transition to contactless payment, OS/2 will be removed.
OS/2 was used in checkout systems at Safeway supermarkets.
OS/2 was used by Trenitalia up to 2011, both for the desktops at ticket counters and for the automatic ticket machines. Incidentally, the OS/2-based automatic ticket machines were more reliable than the current ones running a flavor of Windows.
OS/2 was used as the main operating system for Abbey National General Insurance motor and home direct call centre products using the PMSC Series III insurance platform on DB2.2 from 1996 to 2001.
Awards
BYTE in 1989 listed OS/2 as among the "Excellence" winners of the BYTE Awards, stating that it "is today where the Macintosh was in 1984: It's a development platform in search of developers". The magazine predicted that "When it's complete and bug-free, when it can really use the 80386, and when more desktops sport OS/2-capable PCs, OS/2 will—deservedly—supersede DOS. But even as it stands, OS/2 is a milestone product".
In March 1995, OS/2 won seven awards:
InfoWorld Product of the Year.
Five awards at CeBIT:
PC Professional Magazine - Innovation of the Year award.
CHIP Magazine named OS/2 Warp the Operating System of the Year.
DOS International named OS/2 Warp the Operating System of the Year.
1+1 Magazine awarded it with the Software Marketing Quality award.
Industrie Forum awarded it with its Design Excellence.
SPA Best Business Software Award.
IBM products using OS/2
IBM has used OS/2 in a wide variety of hardware products, effectively as a form of embedded operating system.
See also
History of the graphical user interface
Multiple Virtual DOS Machine (MVDM) – OS/2 virtual DOS machine and seamless Windows integration
Team OS/2
Windows Libraries for OS/2
LAN Manager
References
Further reading
—Necasek discusses an aborted port to PowerPC machines.
External links
os2world.com – Community of OS/2 users
ecomstation.ru – Community of eComStation and OS/2 users
netlabs.org – OpenSource Software for OS/2 and eCS
OS/2 FAQ
hobbes.nmsu.edu – The OS/2 software repository
EDM/2 – The source for OS/2 developers
eCSoft/2 – The OS/2 and eComstation software guide
osFree an open source project to build an OS/2 clone operating system
Voyager Project, a defunct project to reimplement OS/2 on modern technology
OS/2 to Linux API porting project
Open Source OS/2 API implementation for Windows
Microsoft documentation of OS/2 API compatibility with Windows NT
The History of OS/2
Technical details of OS/2
OS/2 Warp 4 Installation and Update Manual; with boot disks and many links
1987 software
Discontinued operating systems
IBM operating systems
Legacy systems
X86 operating systems | OS/2 | Technology | 8,428 |
9,977,718 | https://en.wikipedia.org/wiki/CCL9 | Chemokine (C-C motif) ligand 9 (CCL9) is a small cytokine belonging to the CC chemokine family; it has been described in rodents. It is also called macrophage inflammatory protein-1 gamma (MIP-1γ), macrophage inflammatory protein-related protein-2 (MRP-2), and CCF18. CCL9 has also been previously designated CCL10, although this name is no longer in use. It is secreted by follicle-associated epithelium (FAE) such as that found around Peyer's patches, and attracts dendritic cells that possess the cell surface molecule CD11b and the chemokine receptor CCR1. CCL9 can activate osteoclasts through its receptor CCR1 (the most abundant chemokine receptor found on osteoclasts), suggesting an important role for CCL9 in bone resorption. CCL9 is constitutively expressed in macrophages and myeloid cells. The gene for CCL9 is located on chromosome 11 in mice.
CCL9 is a chemokine involved in signaling an antileukemic response and is a potential form of immunotherapy for chronic myelogenous leukemia (CML). CML is a type of cancer in which the bone marrow produces too many white blood cells. It is caused by a chromosomal translocation that creates the abnormal fusion gene BCR-ABL, which turns a normal cell into a CML cell. CML starts off as a myeloproliferative disorder, resembling conditions such as sickle cell anemia or extreme granulocytosis, but if left untreated it can transform into an acute form of leukemia. In treating CML, alpha and beta interferons (IFNs) are used, which regulate the binding of the protein ICSBP to the BCR-ABL gene. Experiments showed that CCL9 is a gene induced by ICSBP and IFN-alpha, and that it is required for ICSBP expression in BCR-ABL-transformed cells to generate anti-leukemic immune protection. CCL6 and CCL9 were overexpressed in BCR-ABL-transformed BaF3 cells, which were injected into syngeneic mice. Although the mice still developed leukemia, the overexpression delayed the advancement of the disease by several weeks, showing that CCL6 and CCL9 contribute to the creation of an anti-leukemic response within infected cells.
References
Cytokines | CCL9 | Chemistry | 541 |
49,044,066 | https://en.wikipedia.org/wiki/Phlebia%20radiata | Phlebia radiata, commonly known as the wrinkled crust, is a common species of crust fungus in the family Meruliaceae. It is widespread in the Northern Hemisphere. It grows as a wrinkled, orange to pinkish waxy crust on the decaying wood of coniferous and deciduous trees, in which it causes a white rot. The fungus was first described scientifically in 1821 by Elias Magnus Fries.
Description
The fruitbody of Phlebia radiata is resupinate—flattened against its substrate like a crust. It is wrinkled, orange to pinkish in color, and has a waxy texture. It is circular to irregular in shape, reaching a diameter up to , although neighbouring fruitbodies may be fused together to form larger complexes up to in diameter. The soft texture of the flesh hardens when the fruitbody becomes old. The fungus is inedible.
In mass, the spores are white. Microscopic examination reveals additional spore details: they are smooth, allantoid (sausage-shaped) to elliptical, and inamyloid, measuring 3.5–7 by 1–3 μm.
Similar species include Botryobasidium vagum, Meruliporia incrassata, Piloderma bicolor, and Serpula lacrymans.
Habitat and distribution
Phlebia radiata is a saprophytic species that causes a white rot in the wood it colonizes: fallen logs and branches of both coniferous and hardwood trees.
References
Fungi described in 1821
Fungi of Asia
Fungi of Europe
Fungi of North America
Inedible fungi
Meruliaceae
Taxa named by Elias Magnus Fries
Fungus species | Phlebia radiata | Biology | 336 |
69,514,974 | https://en.wikipedia.org/wiki/Mesityl%20bromide | Mesityl bromide is an organic compound with the formula (CH3)3C6H2Br. It is a derivative of mesitylene (1,3,5-trimethylbenzene) with one ring H replaced by Br. The compound is a colorless oil. It is a standard electron-rich aryl halide substrate for cross coupling reactions. With magnesium it reacts to give the Grignard reagent, which is used in the preparation of tetramesityldiiron.
It is prepared by the direct reaction of bromine with mesitylene:
(CH3)3C6H3 + Br2 → (CH3)3C6H2Br + HBr
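As an arithmetic illustration of this 1:1 stoichiometry, the following Python sketch (the input mass and variable names are illustrative, not from any source) computes a theoretical yield from the molar masses:

```python
# Standard atomic masses in g/mol.
M_C, M_H, M_Br = 12.011, 1.008, 79.904

mesitylene      = 9 * M_C + 12 * M_H         # C9H12,   ~120.20 g/mol
mesityl_bromide = 9 * M_C + 11 * M_H + M_Br  # C9H11Br, ~199.09 g/mol

grams_mesitylene = 10.0                      # an arbitrary example input
moles = grams_mesitylene / mesitylene        # ~0.0832 mol

# One mole of product per mole of mesitylene, per the equation above,
# so at 100% conversion:
theoretical_yield_g = moles * mesityl_bromide
print(f"{theoretical_yield_g:.1f} g")        # ~16.6 g
```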
References
Bromoarenes
Phenyl compounds
Alkylating agents | Mesityl bromide | Chemistry | 163 |
12,291,673 | https://en.wikipedia.org/wiki/Current%20density%20imaging | Current density imaging (CDI) is an extension of magnetic resonance imaging (MRI), developed at the University of Toronto. It employs two techniques for spatially mapping electric current pathways through tissue:
LF-CDI, low-frequency CDI, the original implementation developed at the University of Toronto. In this technique, low frequency (LF) electric currents are injected into the tissue. These currents generate magnetic fields, which are then measured using MRI techniques. The current pathways are then computed and spatially mapped.
RF-CDI, radio frequency CDI, a rotating frame of reference version of LF-CDI. This allows measurement of a single component of current density, without requiring subject rotation. The high frequency current that is injected into tissue also does not cause the muscle twitching often encountered using LF-CDI, allowing in-vivo measurements on human subjects.
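The reconstruction step shared by both techniques rests on Ampère's law, J = (∇ × B)/μ₀. The following Python sketch (the function name and grid layout are illustrative; it also assumes all three field components are available, which in LF-CDI generally requires imaging the subject in several orientations) computes a current-density map from a measured field on a regular grid:

```python
import numpy as np

MU_0 = 4e-7 * np.pi  # vacuum permeability in T*m/A

def current_density(Bx, By, Bz, dx, dy, dz):
    """J = curl(B) / mu_0 on a regular grid; arrays are indexed [x, y, z]."""
    Jx = (np.gradient(Bz, dy, axis=1) - np.gradient(By, dz, axis=2)) / MU_0
    Jy = (np.gradient(Bx, dz, axis=2) - np.gradient(Bz, dx, axis=0)) / MU_0
    Jz = (np.gradient(By, dx, axis=0) - np.gradient(Bx, dy, axis=1)) / MU_0
    return Jx, Jy, Jz
```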
See also
Magnetic resonance imaging
References
External links
Current Density Imaging page at the University of Toronto
Magnetic resonance imaging
Medical imaging | Current density imaging | Chemistry | 201 |
47,405,502 | https://en.wikipedia.org/wiki/U%20Vulpeculae | U Vulpeculae is a variable and binary star in the constellation Vulpecula.
It is a classical Cepheid variable and its apparent magnitude ranges from 6.73 to 7.54 over a precise cycle of 7.99 days. Its variable nature was discovered in 1898 at Potsdam Observatory by Gustav Müller and Paul Kempf.
In 1991 a study of radial velocities showed that U Vulpeculae is a spectroscopic binary, and a full orbit with a period of 2510 days (6.9 years) was first calculated in 1996. The secondary star is invisible and is only known from its effect on the motion of the primary.
References
Vulpecula
Classical Cepheid variables
Vulpeculae, U
F-type supergiants
G-type supergiants
Durchmusterung objects
7458
185059
096458
Spectroscopic binaries | U Vulpeculae | Astronomy | 187 |
61,091,295 | https://en.wikipedia.org/wiki/Baer%20function | Baer functions, named after Karl Baer, are solutions of the Baer differential equation

$$\frac{d^2 B}{dz^2} + \frac{1}{2}\left[\frac{1}{z-b} + \frac{1}{z-c}\right]\frac{dB}{dz} - \left[\frac{p(p+1)\,z + q}{(z-b)(z-c)}\right]B = 0,$$

which arises when separation of variables is applied to the Laplace equation in paraboloidal coordinates. The Baer functions are defined as series solutions of this equation that satisfy prescribed conditions at a chosen expansion point. By substituting a power series Ansatz into the differential equation, formal series can be constructed for the Baer functions. For special values of the parameters, simpler solutions may exist.
Moreover, Mathieu functions are special-case solutions of the Baer equation, since the latter reduces to the Mathieu differential equation for particular values of $b$ and $c$, after a suitable change of variable.
Like the Mathieu differential equation, the Baer equation has two regular singular points (at $z = b$ and $z = c$), and one irregular singular point at infinity. Thus, in contrast with many other special functions of mathematical physics, Baer functions cannot in general be expressed in terms of hypergeometric functions.
The Baer wave equation is a generalization which results from separating variables in the Helmholtz equation in paraboloidal coordinates:

$$\frac{d^2 B}{dz^2} + \frac{1}{2}\left[\frac{1}{z-b} + \frac{1}{z-c}\right]\frac{dB}{dz} + \left[\frac{k^2 z^2 - p(p+1)\,z - q}{(z-b)(z-c)}\right]B = 0,$$

which reduces to the original Baer equation when $k = 0$.
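Since no closed form is available in general, such solutions are computed numerically. The following Python sketch (parameter values are arbitrary, and the right-hand side assumes the equation in the form reconstructed above) integrates the Baer equation away from its finite singular points with SciPy:

```python
from scipy.integrate import solve_ivp

def baer_rhs(z, y, b, c, p, q):
    """First-order system for the Baer equation in the form given above."""
    B, dB = y
    d2B = (-0.5 * (1.0 / (z - b) + 1.0 / (z - c)) * dB
           + (p * (p + 1) * z + q) / ((z - b) * (z - c)) * B)
    return [dB, d2B]

# Integrate on an interval avoiding the regular singular points z = b, c.
b_, c_, p_, q_ = 0.0, 1.0, 2.0, 0.5   # arbitrary illustrative parameters
sol = solve_ivp(baer_rhs, (2.0, 10.0), y0=[1.0, 0.0],
                args=(b_, c_, p_, q_), dense_output=True)
print(sol.sol(5.0))  # solution value and derivative at z = 5
```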
References
Bibliography
(free online access to the appendix on Baer functions)
External links
Ordinary differential equations
Special functions | Baer function | Mathematics | 247 |
16,977,667 | https://en.wikipedia.org/wiki/Clockkeeper | A clockkeeper, sometimes written clock keeper, refers to a form of employment, prevalent in medieval Europe, involving the tracking of time and the maintenance of clocks and other timekeeping devices. The practice and its prominence fluctuated in the centuries following the Middle Ages, and the need for an attendant to keep clocks remained long after the invention of the mechanical clock.
The clockkeeper was often paid considerable amounts of money to ensure the accuracy of a given clock or clocks, as the practice required at least basic skills in mathematics at a time when formal education was not yet widespread. Clockkeepers were also expected to keep the clocks in good working order and to ensure that they did not malfunction. The prominence of the position varied throughout the period, as did the contexts in which it was applied; often, the lord of a settlement or manor would employ a clockkeeper, and, just as often, an entire town might have a designated person to regulate the town clock. Clockkeepers were also employed in cathedrals and monasteries. Timekeeping in the latter case was important given the various schedules of the Church, the complexity of its day, and the daily, weekly, monthly and annual rituals it took part in. Recorded instances of such holy places requiring a clockkeeper during the late 13th and early 14th centuries included, among others:
St Paul's Cathedral
Palace of Westminster
Cambrai Cathedral
The duties of clockkeeping were also variable. During the period, most clocks needed rewinding at least twice a day, but, depending on the specifics of the employment, this may have differed. Other details included whether the clockkeeper was permanent or periodic: some clockkeepers were merely employed to check and ensure the good order of the clock, rather than to constantly monitor the time.
In modern times, a clockkeeper is the official in charge of clocking playing time in certain sporting events.
References
External links
William R. Kennedy, "A message to those in care of historic tower clocks"
Clocks
Medieval occupations | Clockkeeper | Physics,Technology,Engineering | 415 |
1,599,873 | https://en.wikipedia.org/wiki/Atrophic%20gastritis | Atrophic gastritis is a process of chronic inflammation of the gastric mucosa of the stomach, leading to a loss of gastric glandular cells and their eventual replacement by intestinal and fibrous tissues. As a result, the stomach's secretion of essential substances such as hydrochloric acid, pepsin, and intrinsic factor is impaired, leading to digestive problems. The most common are vitamin B12 deficiency, possibly leading to pernicious anemia, and malabsorption of iron, leading to iron-deficiency anemia. It can be caused by persistent infection with Helicobacter pylori, or can be autoimmune in origin. Those with autoimmune atrophic gastritis (Type A gastritis) are statistically more likely to develop gastric carcinoma (a form of stomach cancer), Hashimoto's thyroiditis, and achlorhydria.
Type A gastritis primarily affects the fundus (body) of the stomach and is more common with pernicious anemia. Type B gastritis primarily affects the antrum, and is more common with H. pylori infection.
Signs and symptoms
Some people with atrophic gastritis may be asymptomatic. Symptomatic patients are mostly female, and the signs of atrophic gastritis are those associated with iron deficiency: fatigue, restless legs syndrome, brittle nails, hair loss, impaired immune function, and impaired wound healing. Less common symptoms include delayed gastric emptying (80% of symptomatic cases), reflux symptoms (25%), peripheral neuropathy (25%), and, in 1%–2% of cases, autonomic abnormalities and memory loss. Psychiatric disorders are also reported, such as mania, depression, obsessive-compulsive disorder, psychosis, and cognitive impairment.
Although autoimmune atrophic gastritis impairs iron and vitamin B12 absorption, iron deficiency is detected at a younger age than pernicious anemia.
Associated conditions
People with atrophic gastritis are also at increased risk for the development of gastric adenocarcinoma.
Causes
Recent research has shown that autoimmune metaplastic atrophic gastritis (AMAG) is a result of the immune system attacking the parietal cells.
Environmental metaplastic atrophic gastritis (EMAG) is due to environmental factors, such as diet and H. pylori infection. EMAG is typically confined to the body of the stomach. Patients with EMAG are also at increased risk of gastric carcinoma.
Pathophysiology
Autoimmune metaplastic atrophic gastritis (AMAG) is an inherited form of atrophic gastritis characterized by an immune response directed toward parietal cells and intrinsic factor.
Achlorhydria induces G cell (gastrin-producing) hyperplasia, which leads to hypergastrinemia. Gastrin exerts a trophic effect on enterochromaffin-like cells (ECL cells are responsible for histamine secretion) and is hypothesized to be one mechanism to explain the malignant transformation of ECL cells into carcinoid tumors in AMAG.
Diagnosis
Detection of APCA (anti-parietal cell antibodies), anti-intrinsic factor antibodies (AIFA), and Helicobacter pylori (HP) antibodies in conjunction with serum gastrin are effective for diagnostic purposes.
Classification
The notion that atrophic gastritis could be classified depending on the level of progress as "closed type" or "open type" was suggested in early studies, but no universally accepted classification exists as of 2017.
Treatment
Supplementation of folic acid in deficient patients can improve the histopathological findings of chronic atrophic gastritis and reduce the incidence of gastric cancer.
See also
Chronic gastritis
References
External links
Aging-associated diseases
Autoimmune diseases
Stomach disorders | Atrophic gastritis | Biology | 852 |
36,193,702 | https://en.wikipedia.org/wiki/De%20Beghinselen%20Der%20Weeghconst | De Beghinselen der Weeghconst ( "The Principles of the Art of Weighing") is a book about statics written in Dutch by the Flemish physicist Simon Stevin. It was published in 1586 in a single volume with De Weeghdaet ( "The Act of Weighing"), De Beghinselen des Waterwichts ("The Principles of Hydrostatics") and an Anhang (an appendix). A second edition appeared in 1605.
Importance
The importance of the book was summarized by the Encyclopædia Britannica.
Contents
The first part consists of two books, which together account for 95 pages, here divided into 10 sections.
Book I
Start: panegyrics, dedication to Rudolf II, the Uytspraeck vande Weerdicheyt der Duytsche Tael ("Discourse on the Worth of the Dutch Language"), and the Cortbegryp (summary)
Bepalinghen and Begheerten (definitions and assumptions)
Propositions 1–4: the law of the lever (hefboomwet)
Propositions 5–12: a balance with weights on a pillar (pilaer)
Propositions 13–18: continuation, with a lifting weight (hefwicht) and two supports
Proposition 19: equilibrium on an inclined plane, with the wreath of spheres (clootcrans)
Propositions 20–28: a pillar with oblique weights (scheefwichten), hanging bodies
Book II
Propositions 1–6: centers of gravity of plane figures: triangle, rectilinear surfaces
Propositions 7–13: continuation: trapezium, subdivision, parabolic section (brandsnede)
Propositions 14–24: centers of gravity of solids: pillar, pyramid, conoid (brander)
The Weeghdaet
The Beghinselen des Waterwichts
Anhang
Byvough
See also
Simon Stevin
References
Further reading
1586 books
Mathematics books
Physics books
Statics | De Beghinselen Der Weeghconst | Physics | 363 |
143,320 | https://en.wikipedia.org/wiki/PCI%20Express | PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-E, is a high-speed serial computer expansion bus standard, meant to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal computers' graphics cards, capture cards, sound cards, hard disk drive host adapters, SSDs, Wi-Fi, and Ethernet hardware connections. PCIe has numerous improvements over the older standards, including higher maximum system bus throughput, lower I/O pin count, smaller physical footprint, better performance scaling for bus devices, a more detailed error detection and reporting mechanism (Advanced Error Reporting, AER), and native hot-swap functionality. More recent revisions of the PCIe standard provide hardware support for I/O virtualization.
The PCI Express electrical interface is measured by the number of simultaneous lanes. (A lane is a single send/receive line of data, analogous to a "one-lane road" having one lane of traffic in both directions.) The interface is also used in a variety of other standards — most notably the laptop expansion card interface called ExpressCard. It is also used in the storage interfaces of SATA Express, U.2 (SFF-8639) and M.2.
Formal specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group) — a group of more than 900 companies that also maintains the conventional PCI specifications.
Architecture
Conceptually, the PCI Express bus is a high-speed serial replacement of the older PCI/PCI-X bus. One of the key differences between the PCI Express bus and the older PCI is the bus topology; PCI uses a shared parallel bus architecture, in which the PCI host and all devices share a common set of address, data, and control lines. In contrast, PCI Express is based on point-to-point topology, with separate serial links connecting every device to the root complex (host). Because of its shared bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters), and limited to one master at a time, in a single direction. Furthermore, the older PCI clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCI Express bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints.
In terms of bus protocol, PCI Express communication is encapsulated in packets. The work of packetizing and de-packetizing data and status-message traffic is handled by the transaction layer of the PCI Express port (described later). Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors (and thus, new motherboards and new adapter boards); PCI slots and PCI Express slots are not interchangeable. At the software level, PCI Express preserves backward compatibility with PCI; legacy PCI system software can detect and configure newer PCI Express devices without explicit support for the PCI Express standard, though new PCI Express features are inaccessible.
The PCI Express link between two devices can vary in size from one to 16 lanes. In a multi-lane link, the packet data is striped across lanes, and peak data throughput scales with the overall link width. The lane count is automatically negotiated during device initialization and can be restricted by either endpoint. For example, a single-lane PCI Express (x1) card can be inserted into a multi-lane slot (x4, x8, etc.), and the initialization cycle auto-negotiates the highest mutually supported lane count. The link can dynamically down-configure itself to use fewer lanes, providing a failure tolerance in case bad or unreliable lanes are present. The PCI Express standard defines link widths of x1, x2, x4, x8, and x16. Up to and including PCIe 5.0, x12 and x32 links were defined as well but were virtually never used. This allows the PCI Express bus to serve both cost-sensitive applications where high throughput is not needed, and performance-critical applications such as 3D graphics, networking (10 Gigabit Ethernet or multiport Gigabit Ethernet), and enterprise storage (SAS or Fibre Channel). Slots and connectors are only defined for a subset of these widths, with link widths in between using the next larger physical slot size.
As a point of reference, a PCI-X (133 MHz 64-bit) device and a PCI Express 1.0 device using four lanes (x4) have roughly the same peak single-direction transfer rate of 1064 MB/s. The PCI Express bus has the potential to perform better than the PCI-X bus in cases where multiple devices are transferring data simultaneously, or if communication with the PCI Express peripheral is bidirectional.
Interconnect
PCI Express devices communicate via a logical connection called an interconnect or link. A link is a point-to-point communication channel between two PCI Express ports allowing both of them to send and receive ordinary PCI requests (configuration, I/O or memory read/write) and interrupts (INTx, MSI or MSI-X). At the physical level, a link is composed of one or more lanes. Low-speed peripherals (such as an 802.11 Wi-Fi card) use a single-lane (x1) link, while a graphics adapter typically uses a much wider and therefore faster 16-lane (x16) link.
Lane
A lane is composed of two differential signaling pairs, with one pair for receiving data and the other for transmitting. Thus, each lane is composed of four wires or signal traces. Conceptually, each lane is used as a full-duplex byte stream, transporting data packets in eight-bit "byte" format simultaneously in both directions between endpoints of a link. Physical PCI Express links may contain 1, 4, 8 or 16 lanes. Lane counts are written with an "x" prefix (for example, "x8" represents an eight-lane card or slot), with x16 being the largest size in common use. Lane sizes are also referred to via the terms "width" or "by" e.g., an eight-lane slot could be referred to as a "by 8" or as "8 lanes wide."
For mechanical card sizes, see below.
Serial bus
The bonded serial bus architecture was chosen over the traditional parallel bus because of the inherent limitations of the latter, including half-duplex operation, excess signal count, and inherently lower bandwidth due to timing skew. Timing skew results from separate electrical signals within a parallel interface traveling through conductors of different lengths, on potentially different printed circuit board (PCB) layers, and at possibly different signal velocities. Despite being transmitted simultaneously as a single word, signals on a parallel interface have different travel duration and arrive at their destinations at different times. When the interface clock period is shorter than the largest time difference between signal arrivals, recovery of the transmitted word is no longer possible. Since timing skew over a parallel bus can amount to a few nanoseconds, the resulting bandwidth limitation is in the range of hundreds of megahertz.
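As a rough, illustrative calculation (a minimal sketch, not taken from any specification): the clock period of a parallel bus must stay longer than its worst-case skew, so a few nanoseconds of skew caps the clock in the hundreds-of-megahertz range.

```python
# Upper bound on a parallel bus clock imposed by timing skew alone:
# the clock period must exceed the worst-case arrival-time spread
# (setup/hold margins, which tighten this further, are ignored here).
def max_parallel_clock_mhz(worst_case_skew_ns: float) -> float:
    return 1e3 / worst_case_skew_ns  # period (ns) -> frequency (MHz)

print(max_parallel_clock_mhz(2.0))  # 2 ns of skew -> at most 500 MHz
```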
A serial interface does not exhibit timing skew because there is only one differential signal in each direction within each lane, and there is no external clock signal since clocking information is embedded within the serial signal itself. As such, typical bandwidth limitations on serial signals are in the multi-gigahertz range. PCI Express is one example of the general trend toward replacing parallel buses with serial interconnects; other examples include Serial ATA (SATA), USB, Serial Attached SCSI (SAS), FireWire (IEEE 1394), and RapidIO. In digital video, examples in common use are DVI, HDMI, and DisplayPort.
Multichannel serial design increases flexibility with its ability to allocate fewer lanes for slower devices.
Form factors
PCI Express (standard)
A PCI Express card fits into a slot of its physical size or larger (with x16 as the largest used), but may not fit into a smaller PCI Express slot; for example, a x16 card may not fit into a x4 or x8 slot. Some slots use open-ended sockets to permit physically longer cards and negotiate the best available electrical and logical connection.
The number of lanes actually connected to a slot may also be fewer than the number supported by the physical slot size. An example is a x16 slot that runs at x4, which accepts any x1, x2, x4, x8 or x16 card, but provides only four lanes. Its specification may read as "x16 (x4 mode)", while "mechanical @ electrical" notation (e.g. "x16 @ x4") is also common. The advantage is that such slots can accommodate a larger range of PCI Express cards without requiring motherboard hardware to support the full transfer rate. Standard mechanical sizes are x1, x4, x8, and x16. Cards using a number of lanes other than the standard mechanical sizes need to physically fit the next larger mechanical size (e.g. an x2 card uses the x4 size, or an x12 card uses the x16 size).
The cards themselves are designed and manufactured in various sizes. For example, solid-state drives (SSDs) that come in the form of PCI Express cards often use HHHL (half height, half length) and FHHL (full height, half length) to describe the physical dimensions of the card.
Non-standard video card form factors
Modern gaming video cards usually exceed the height as well as thickness specified in the PCI Express standard, due to the need for more capable and quieter cooling fans, as gaming video cards often emit hundreds of watts of heat. Modern computer cases are often wider to accommodate these taller cards, but not always. Since full-length cards (312 mm) are uncommon, modern cases sometimes cannot fit those. The thickness of these cards also typically occupies the space of 2 to 5 PCIe slots. In fact, even the methodology of how to measure the cards varies between vendors, with some including the metal bracket size in dimensions and others not.
For instance, comparing three high-end video cards released in 2020: a Sapphire Radeon RX 5700 XT card measures 135 mm in height (excluding the metal bracket), which exceeds the PCIe standard height by 28 mm, another Radeon RX 5700 XT card by XFX measures 55 mm thick (i.e. 2.7 PCI slots at 20.32 mm), taking up 3 PCIe slots, while an Asus GeForce RTX 3080 video card takes up two slots and measures 140.1mm × 318.5mm × 57.8mm, exceeding PCI Express's maximum height, length, and thickness respectively.
Pinout
The following table identifies the conductors on each side of the edge connector on a PCI Express card. The solder side of the printed circuit board (PCB) is the A-side, and the component side is the B-side. PRSNT1# and PRSNT2# pins must be slightly shorter than the rest, to ensure that a hot-plugged card is fully inserted. The WAKE# pin uses full voltage to wake the computer, but must be pulled high from the standby power to indicate that the card is wake capable.
Power
Slot power
All PCI Express cards may consume up to 3 A at +3.3 V (9.9 W). The amount of +12 V and total power they may consume depends on the form factor and the role of the card (a small budget sketch follows the list below):
x1 cards are limited to 0.5 A at +12V (6 W) and 10 W combined.
x4 and wider cards are limited to 2.1 A at +12V (25 W) and 25 W combined.
A full-sized x1 card may draw up to the 25 W limits after initialization and software configuration as a high-power device.
A full-sized x16 graphics card may draw up to 5.5 A at +12V (66 W) and 75 W combined after initialization and software configuration as a high-power device.
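A minimal sketch of these budgets (the slot figures come from the list above; the 75 W and 150 W auxiliary-connector values are the standard 6-pin and 8-pin ratings discussed in the next subsection):

```python
# Illustrative power-budget lookup; values as quoted above, not a
# normative table from the PCIe CEM specification.
SLOT_LIMIT_W = {
    "x1": 10,                 # 0.5 A at +12 V plus the +3.3 V rail
    "x1 high-power": 25,      # full-sized x1 card after configuration
    "x4 and wider": 25,
    "x16 graphics": 75,       # 5.5 A at +12 V plus the +3.3 V rail
}
AUX_CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def board_power_budget(slot: str, connectors: list[str]) -> int:
    """Slot allowance plus any auxiliary +12 V connectors."""
    return SLOT_LIMIT_W[slot] + sum(AUX_CONNECTOR_W[c] for c in connectors)

# A graphics card fed by one 6-pin and one 8-pin connector:
print(board_power_budget("x16 graphics", ["6-pin", "8-pin"]))  # 300 W
```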
6- and 8-pin power connectors
Optional connectors add 75 W (6-pin) or 150 W (8-pin) of +12 V power for up to 300 W total.
The Sense0 pin is connected to ground by the cable or power supply, or floats on board if the cable is not connected.
The Sense1 pin is connected to ground by the cable or power supply, or floats on board if the cable is not connected.
Some cards use two 8-pin connectors, but this has not yet been standardized; therefore such cards must not carry the official PCI Express logo. This configuration allows 375 W total (1 × 75 W from the slot plus 2 × 150 W from the connectors) and will likely be standardized by PCI-SIG with the PCI Express 4.0 standard. The 8-pin PCI Express connector could be confused with the EPS12V connector, which is mainly used for powering SMP and multi-core systems. The power connectors are variants of the Molex Mini-Fit Jr. series connectors.
12VHPWR connector
PCI Express Mini Card
PCI Express Mini Card (also known as Mini PCI Express, Mini PCIe, Mini PCI-E, mPCIe, and PEM), based on PCI Express, is a replacement for the Mini PCI form factor. It is developed by the PCI-SIG. The host device supports both PCI Express and USB 2.0 connectivity, and each card may use either standard. Most laptop computers built after 2005 use PCI Express for expansion cards; however, many vendors are moving toward using the newer M.2 form factor for this purpose.
Due to different dimensions, PCI Express Mini Cards are not physically compatible with standard full-size PCI Express slots; however, passive adapters exist that let them be used in full-size slots.
Physical dimensions
Dimensions of PCI Express Mini Cards are 30 mm × 50.95 mm (width × length) for a Full Mini Card. There is a 52-pin edge connector, consisting of two staggered rows on a 0.8 mm pitch. Each row has eight contacts, a gap equivalent to four contacts, then a further 18 contacts. Boards have a thickness of 1.0 mm, excluding the components. A "Half Mini Card" (sometimes abbreviated as HMC) is also specified, with approximately half the physical length at 26.8 mm. There are also half-size mini PCIe cards measuring 30 mm × 31.90 mm, about half the length of a full-size mini PCIe card.
Electrical interface
PCI Express Mini Card edge connectors provide multiple connections and buses:
PCI Express x1 (with SMBus)
USB 2.0
Wires to diagnostics LEDs for wireless network (i.e., Wi-Fi) status on computer's chassis
SIM card for GSM and WCDMA applications (UIM signals in the specification)
Future extension for another PCIe lane
1.5 V and 3.3 V power
Mini-SATA (mSATA) variant
Despite sharing the Mini PCI Express form factor, an mSATA slot is not necessarily electrically compatible with Mini PCI Express. For this reason, only certain notebooks are compatible with mSATA drives. Most compatible systems are based on Intel's Sandy Bridge processor architecture, using the Huron River platform. Notebooks such as Lenovo's ThinkPad T, W and X series, released in March–April 2011, have support for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s, and the Lenovo IdeaPad Y460/Y560/Y570/Y580 also support mSATA. By contrast, the L-series, among others, can only support M.2 cards using the PCIe standard in the WWAN slot.
Some notebooks (notably the Asus Eee PC, the Apple MacBook Air, and the Dell mini9 and mini10) use a variant of the PCI Express Mini Card as an SSD. This variant uses the reserved and several non-reserved pins to implement SATA and IDE interface passthrough, keeping only USB, ground lines, and sometimes the core PCIe x1 bus intact. This makes the "miniPCIe" flash and solid-state drives sold for netbooks largely incompatible with true PCI Express Mini implementations.
Also, the typical Asus miniPCIe SSD is 71 mm long, causing the Dell 51 mm model to often be (incorrectly) referred to as half length. A true 51 mm Mini PCIe SSD was announced in 2009, with two stacked PCB layers that allow for higher storage capacity. The announced design preserves the PCIe interface, making it compatible with the standard mini PCIe slot. No working product has yet been developed.
Intel has numerous desktop boards with the PCIe x1 Mini-Card slot that typically do not support mSATA SSD. A list of desktop boards that natively support mSATA in the PCIe x1 Mini-Card slot (typically multiplexed with a SATA port) is provided on the Intel Support site.
PCI Express M.2
M.2 replaces the mSATA standard and Mini PCIe. Computer bus interfaces provided through the M.2 connector are PCI Express 3.0 (up to four lanes), Serial ATA 3.0, and USB 3.0 (a single logical port for each of the latter two). It is up to the manufacturer of the M.2 host or device to choose which interfaces to support, depending on the desired level of host support and device type.
PCI Express External Cabling
PCI Express External Cabling (also known as External PCI Express, Cabled PCI Express, or ePCIe) specifications were released by the PCI-SIG in February 2007.
Standard cables and connectors have been defined for x1, x4, x8, and x16 link widths, with a transfer rate of 250 MB/s per lane. The PCI-SIG also expects the norm to evolve to reach 500 MB/s, as in PCI Express 2.0. An example of the uses of Cabled PCI Express is a metal enclosure, containing a number of PCIe slots and PCIe-to-ePCIe adapter circuitry. This device would not be possible had it not been for the ePCIe specification.
PCI Express OCuLink
OCuLink (standing for "optical-copper link", since Cu is the chemical symbol for copper) is an extension for the "cable version of PCI Express". Version 1.0 of OCuLink, released in October 2015, supports up to 4 PCIe 3.0 lanes (3.9 GB/s) over copper cabling; a fiber optic version may appear in the future.
The most recent version of OCuLink, OCuLink-2, supports up to 16 GB/s (PCIe 4.0 x8), while the maximum bandwidth of a USB4 cable is 10 GB/s.
While initially intended for use in laptops for the connection of powerful external GPU boxes, OCuLink's popularity lies primarily in its use for PCIe interconnections in servers, a more prevalent application.
Derivative forms
Numerous other form factors use, or are able to use, PCIe. These include:
Low-height card
ExpressCard: Successor to the PC Card form factor (with x1 PCIe and USB 2.0; hot-pluggable)
PCI Express ExpressModule: A hot-pluggable modular form factor defined for servers and workstations
XQD card: A PCI Express-based flash card standard by the CompactFlash Association with x2 PCIe
CFexpress card: A PCI Express-based flash card by the CompactFlash Association in three form factors supporting 1 to 4 PCIe lanes
SD card: The SD Express bus, introduced in version 7.0 of the SD specification uses a x1 PCIe link
XMC: Similar to the CMC/PMC form factor (VITA 42.3)
AdvancedTCA: A complement to CompactPCI for larger applications; supports serial based backplane topologies
AMC: A complement to the AdvancedTCA specification; supports processor and I/O modules on ATCA boards (x1, x2, x4 or x8 PCIe).
FeaturePak: A tiny expansion card format (43mm × 65 mm) for embedded and small-form-factor applications, which implements two x1 PCIe links on a high-density connector along with USB, I2C, and up to 100 points of I/O
Universal IO: A variant from Super Micro Computer Inc designed for use in low-profile rack-mounted chassis. It has the connector bracket reversed so it cannot fit in a normal PCI Express socket, but it is pin-compatible and may be inserted if the bracket is removed.
M.2 (formerly known as NGFF)
M-PCIe brings PCIe 3.0 to mobile devices (such as tablets and smartphones), over the M-PHY physical layer.
U.2 (formerly known as SFF-8639)
SlimSAS
The PCIe slot connector can also carry protocols other than PCIe. Some 9xx series Intel chipsets support Serial Digital Video Out, a proprietary technology that uses a slot to transmit video signals from the host CPU's integrated graphics instead of PCIe, using a supported add-in card.
The PCIe transaction-layer protocol can also be used over some other interconnects, which are not electrically PCIe:
Thunderbolt: A royalty-free interconnect standard by Intel that combines DisplayPort and PCIe protocols in a form factor compatible with Mini DisplayPort. Thunderbolt 3.0 also combines USB 3.1 and uses the USB-C form factor as opposed to Mini DisplayPort.
USB4
History and revisions
While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect), and underwent a name change to 3GIO (for 3rd Generation I/O) before finally settling on its PCI-SIG name PCI Express. A technical working group named the Arapaho Work Group (AWG) drew up the standard. For initial drafts, the AWG consisted only of Intel engineers; subsequently, the AWG expanded to include industry partners.
Since then, PCIe has undergone several large and smaller revisions, improving on performance and other features.
Comparison table
Notes
PCI Express 1.0a
In 2003, PCI-SIG introduced PCIe 1.0a, with a per-lane data rate of 250 MB/s and a transfer rate of 2.5 gigatransfers per second (GT/s).
Transfer rate is expressed in transfers per second instead of bits per second because the number of transfers includes the overhead bits, which do not provide additional throughput; PCIe 1.x uses an 8b/10b encoding scheme, resulting in a 20% (= 2/10) overhead on the raw channel bandwidth. So in the PCIe terminology, transfer rate refers to the encoded bit rate: 2.5 GT/s is 2.5 Gbit/s on the encoded serial link. This corresponds to 2.0 Gbit/s of pre-coded data or 250 MB/s, which is referred to as throughput in PCIe.
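The same arithmetic extends to the later revisions discussed below. A minimal sketch (per-lane figures before packet-level overhead; encoding pairs as described in this article):

```python
# Per-lane PCIe throughput = transfer rate x line-code efficiency / 8.
GENERATIONS = {  # revision: (GT/s, payload bits, encoded bits)
    "1.x": (2.5, 8, 10),      # 8b/10b
    "2.0": (5.0, 8, 10),      # 8b/10b
    "3.0": (8.0, 128, 130),   # 128b/130b
    "4.0": (16.0, 128, 130),
    "5.0": (32.0, 128, 130),
}

def lane_throughput_mb_s(rate_gt_s: float, payload: int, encoded: int) -> float:
    return rate_gt_s * 1e9 * (payload / encoded) / 8 / 1e6

for rev, (rate, p, e) in GENERATIONS.items():
    print(f"PCIe {rev}: {lane_throughput_mb_s(rate, p, e):6.0f} MB/s per lane")
# 1.x: 250, 2.0: 500, 3.0: ~985, 4.0: ~1969, 5.0: ~3938
```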
PCI Express 1.1
In 2005, PCI-SIG introduced PCIe 1.1. This updated specification includes clarifications and several improvements, but is fully compatible with PCI Express 1.0a. No changes were made to the data rate.
PCI Express 2.0
PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007. The PCIe 2.0 standard doubles the transfer rate compared with PCIe 1.0 to 5 GT/s, and the per-lane throughput rises from 250 MB/s to 500 MB/s. Consequently, a 16-lane PCIe connector (x16) can support an aggregate throughput of up to 8 GB/s.
PCIe 2.0 motherboard slots are fully backward compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. In general, graphics cards and motherboards designed for v2.0 work with counterparts designed for v1.1 or v1.0a.
The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its software architecture.
Intel's first PCIe 2.0 capable chipset was the X38 and boards began to ship from various vendors (Abit, Asus, Gigabyte) as of 21 October 2007. AMD started supporting PCIe 2.0 with its AMD 700 chipset series and nVidia started with the MCP72. All of Intel's prior chipsets, including the Intel P35 chipset, supported PCIe 1.1 or 1.0a.
Like 1.x, PCIe 2.0 uses an 8b/10b encoding scheme, therefore delivering, per-lane, an effective 4 Gbit/s max. transfer rate from its 5 GT/s raw data rate.
PCI Express 2.1
PCI Express 2.1 (with its specification dated 4 March 2009) supports a large proportion of the management, support, and troubleshooting systems planned for full implementation in PCI Express 3.0. However, the speed is the same as PCI Express 2.0. The increase in power from the slot breaks backward compatibility between PCI Express 2.1 cards and some older motherboards with 1.0/1.0a, but most motherboards with PCI Express 1.1 connectors are provided with a BIOS update by their manufacturers through utilities to support backward compatibility of cards with PCIe 2.1.
PCI Express 3.0
PCI Express 3.0 Base specification revision 3.0 was made available in November 2010, after multiple delays. In August 2007, PCI-SIG announced that PCI Express 3.0 would carry a bit rate of 8 gigatransfers per second (GT/s), and that it would be backward compatible with existing PCI Express implementations. At that time, it was also announced that the final specification for PCI Express 3.0 would be delayed until Q2 2010. New features for the PCI Express 3.0 specification included a number of optimizations for enhanced signaling and data integrity, including transmitter and receiver equalization, PLL improvements, clock data recovery, and channel enhancements of currently supported topologies.
Following a six-month technical analysis of the feasibility of scaling the PCI Express interconnect bandwidth, PCI-SIG's analysis found that 8 gigatransfers per second could be manufactured in mainstream silicon process technology, and deployed with existing low-cost materials and infrastructure, while maintaining full compatibility (with negligible impact) with the PCI Express protocol stack.
PCI Express 3.0 upgraded the encoding scheme to 128b/130b from the previous 8b/10b encoding, reducing the bandwidth overhead from 20% of PCI Express 2.0 to approximately 1.54% (= 2/130). PCI Express 3.0's 8 GT/s bit rate effectively delivers 985 MB/s per lane, nearly doubling the lane bandwidth relative to PCI Express 2.0.
On 18 November 2010, the PCI Special Interest Group officially published the finalized PCI Express 3.0 specification to its members to build devices based on this new version of PCI Express.
PCI Express 3.1
In September 2013, PCI Express 3.1 specification was announced for release in late 2013 or early 2014, consolidating various improvements to the published PCI Express 3.0 specification in three areas: power management, performance and functionality. It was released in November 2014.
PCI Express 4.0
On 29 November 2011, PCI-SIG preliminarily announced PCI Express 4.0, providing a 16 GT/s bit rate that doubles the bandwidth provided by PCI Express 3.0 to 31.5 GB/s in each direction for a 16-lane configuration, while maintaining backward and forward compatibility in both software support and used mechanical interface. PCI Express 4.0 specs also bring OCuLink-2, an alternative to Thunderbolt. OCuLink version 2 has up to 16 GT/s (16GB/s total for x8 lanes), while the maximum bandwidth of a Thunderbolt 3 link is 5GB/s.
In June 2016 Cadence, PLDA and Synopsys demonstrated PCIe 4.0 physical-layer, controller, switch and other IP blocks at PCI-SIG's annual developers conference.
Mellanox Technologies announced the first 100 Gbit/s network adapter with PCIe 4.0 on 15 June 2016, and the first 200 Gbit/s network adapter with PCIe 4.0 on 10 November 2016.
In August 2016, Synopsys presented a test setup with FPGA clocking a lane to PCIe 4.0 speeds at the Intel Developer Forum. Their IP has been licensed to several firms planning to present their chips and products at the end of 2016.
On the IEEE Hot Chips Symposium in August 2016 IBM announced the first CPU with PCIe 4.0 support, POWER9.
PCI-SIG officially announced the release of the final PCI Express 4.0 specification on 8 June 2017. The spec includes improvements in flexibility, scalability, and lower-power.
On 5 December 2017 IBM announced the first system with PCIe 4.0 slots, Power AC922.
NETINT Technologies introduced the first NVMe SSD based on PCIe 4.0 on 17 July 2018, ahead of Flash Memory Summit 2018.
AMD announced on 9 January 2019 its upcoming Zen 2-based processors and X570 chipset would support PCIe 4.0. AMD had hoped to enable partial support for older chipsets, but instability caused by motherboard traces not conforming to PCIe 4.0 specifications made that impossible.
Intel released their first mobile CPUs with PCI Express 4.0 support in mid-2020, as a part of the Tiger Lake microarchitecture.
PCI Express 5.0
In June 2017, PCI-SIG announced the PCI Express 5.0 preliminary specification. Bandwidth was expected to increase to 32 GT/s, yielding 63 GB/s in each direction in a 16-lane configuration. The draft spec was expected to be standardized in 2019. Initially, a lower transfer rate had also been considered for technical feasibility.
On 7 June 2017 at PCI-SIG DevCon, Synopsys recorded the first demonstration of PCI Express 5.0 at 32 GT/s.
On 31 May 2018, PLDA announced the availability of their XpressRICH5 PCIe 5.0 Controller IP based on draft 0.7 of the PCIe 5.0 specification on the same day.
On 10 December 2018, the PCI SIG released version 0.9 of the PCIe 5.0 specification to its members,
and on 17 January 2019, PCI SIG announced the version 0.9 had been ratified, with version 1.0 targeted for release in the first quarter of 2019.
On 29 May 2019, PCI-SIG officially announced the release of the final PCI Express 5.0 specification.
On 20 November 2019, Jiangsu Huacun presented the first PCIe 5.0 Controller HC9001 in a 12 nm manufacturing process. Production started in 2020.
On 17 August 2020, IBM announced the Power10 processor with PCIe 5.0 and up to 32 lanes per single-chip module (SCM) and up to 64 lanes per double-chip module (DCM).
On 9 September 2021, IBM announced the Power E1080 Enterprise server with planned availability date 17 September. It can have up to 16 Power10 SCMs with maximum of 32 slots per system which can act as PCIe 5.0 x8 or PCIe 4.0 x16. Alternatively they can be used as PCIe 5.0 x16 slots for optional optical CXP converter adapters connecting to external PCIe expansion drawers.
On 27 October 2021, Intel announced the 12th Gen Intel Core CPU family, the world's first consumer x86-64 processors with PCIe 5.0 (up to 16 lanes) connectivity.
On 22 March 2022, Nvidia announced Nvidia Hopper GH100 GPU, the world's first PCIe 5.0 GPU.
On 23 May 2022, AMD announced its Zen 4 architecture with support for up to 24 lanes of PCIe 5.0 connectivity on consumer platforms and 128 lanes on server platforms.
PCI Express 6.0
On 18 June 2019, PCI-SIG announced the development of PCI Express 6.0 specification. Bandwidth is expected to increase to 64 GT/s, yielding 128 GB/s in each direction in a 16-lane configuration, with a target release date of 2021. The new standard uses 4-level pulse-amplitude modulation (PAM-4) with a low-latency forward error correction (FEC) in place of non-return-to-zero (NRZ) modulation. Unlike previous PCI Express versions, forward error correction is used to increase data integrity and PAM-4 is used as line code so that two bits are transferred per transfer. With a 64 GT/s data transfer rate (raw bit rate), up to 121 GB/s in each direction is possible in a x16 configuration.
On 24 February 2020, the PCI Express 6.0 revision 0.5 specification (a "first draft" with all architectural aspects and requirements defined) was released.
On 5 November 2020, the PCI Express 6.0 revision 0.7 specification (a "complete draft" with electrical specifications validated via test chips) was released.
On 6 October 2021, the PCI Express 6.0 revision 0.9 specification (a "final draft") was released.
On 11 January 2022, PCI-SIG officially announced the release of the final PCI Express 6.0 specification.
On 18 March 2024, Nvidia announced Nvidia Blackwell GB100 GPU, the world's first PCIe 6.0 GPU.
PAM-4 coding results in a vastly higher bit error rate (BER) of 10⁻⁶ (vs. 10⁻¹² previously), so in place of 128b/130b encoding, a 3-way interleaved forward error correction (FEC) is used in addition to a cyclic redundancy check (CRC). A fixed 256-byte Flow Control Unit (FLIT) block carries 242 bytes of data, which includes variable-sized transaction layer packets (TLP) and data link layer payload (DLLP); the remaining 14 bytes are reserved for an 8-byte CRC and a 6-byte FEC. 3-way Gray code is used in PAM-4/FLIT mode to reduce the error rate; the interface does not switch back to NRZ and 128b/130b encoding even when retraining to lower data rates.
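Using the figures quoted above, the FLIT layout fixes the coding efficiency directly; a minimal sketch of the arithmetic:

```python
# PCIe 6.0 FLIT accounting, per the layout described above.
FLIT_BYTES, DATA_BYTES, CRC_BYTES, FEC_BYTES = 256, 242, 8, 6
assert DATA_BYTES + CRC_BYTES + FEC_BYTES == FLIT_BYTES

efficiency = DATA_BYTES / FLIT_BYTES      # ~94.5% of the raw bit rate

raw_gbit_per_lane = 64                    # 64 GT/s raw bit rate (PAM-4)
lanes = 16
usable_gb_s = raw_gbit_per_lane / 8 * lanes * efficiency
print(f"~{usable_gb_s:.0f} GB/s per direction in x16")  # ~121 GB/s
```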
PCI Express 7.0
On 21 June 2022, PCI-SIG announced the development of PCI Express 7.0 specification. It will deliver 128 GT/s raw bit rate and up to 242 GB/s per direction in x16 configuration, using the same PAM4 signaling as version 6.0. Doubling of the data rate will be achieved by fine-tuning channel parameters to decrease signal losses and improve power efficiency, but signal integrity is expected to be a challenge. The specification is expected to be finalized in 2025.
On 2 April 2024, PCI-SIG announced the release of PCIe 7.0 specification version 0.5; PCI Express 7.0 remains on track for release in 2025.
Extensions and future directions
Some vendors offer PCIe over fiber products, with active optical cables (AOC) for PCIe switching at increased distance in PCIe expansion drawers, or in specific cases where transparent PCIe bridging is preferable to using a more mainstream standard (such as InfiniBand or Ethernet) that may require additional software to support it.
Thunderbolt was co-developed by Intel and Apple as a general-purpose high speed interface combining a logical PCIe link with DisplayPort and was originally intended as an all-fiber interface, but due to early difficulties in creating a consumer-friendly fiber interconnect, nearly all implementations are copper systems. A notable exception, the Sony VAIO Z VPC-Z2, uses a nonstandard USB port with an optical component to connect to an outboard PCIe display adapter. Apple has been the primary driver of Thunderbolt adoption through 2011, though several other vendors have announced new products and systems featuring Thunderbolt. Thunderbolt 3 forms the basis of the USB4 standard.
Mobile PCIe specification (abbreviated to M-PCIe) allows PCI Express architecture to operate over the MIPI Alliance's M-PHY physical layer technology. Building on top of already existing widespread adoption of M-PHY and its low-power design, Mobile PCIe lets mobile devices use PCI Express.
Draft process
There are 5 primary releases/checkpoints in a PCI-SIG specification:
Draft 0.3 (Concept): this release may have few details, but outlines the general approach and goals.
Draft 0.5 (First draft): this release has a complete set of architectural requirements and must fully address the goals set out in the 0.3 draft.
Draft 0.7 (Complete draft): this release must have a complete set of functional requirements and methods defined, and no new functionality may be added to the specification after this release. Before the release of this draft, electrical specifications must have been validated via test silicon.
Draft 0.9 (Final draft): this release allows PCI-SIG member companies to perform an internal review for intellectual property, and no functional changes are permitted after this draft.
1.0 (Final release): this is the final and definitive specification, and any changes or enhancements are through Errata documentation and Engineering Change Notices (ECNs) respectively.
Historically, the earliest adopters of a new PCIe specification generally begin designing with the Draft 0.5 as they can confidently build up their application logic around the new bandwidth definition and often even start developing for any new protocol features. At the Draft 0.5 stage, however, there is still a strong likelihood of changes in the actual PCIe protocol layer implementation, so designers responsible for developing these blocks internally may be more hesitant to begin work than those using interface IP from external sources.
Hardware protocol summary
The PCIe link is built around dedicated unidirectional couples of serial (1-bit), point-to-point connections known as lanes. This is in sharp contrast to the earlier PCI connection, which is a bus-based system where all the devices share the same bidirectional, 32-bit or 64-bit parallel bus.
PCI Express is a layered protocol, consisting of a transaction layer, a data link layer, and a physical layer. The Data Link Layer is subdivided to include a media access control (MAC) sublayer. The Physical Layer is subdivided into logical and electrical sublayers. The Physical logical-sublayer contains a physical coding sublayer (PCS). The terms are borrowed from the IEEE 802 networking protocol model.
Physical layer
The PCIe Physical Layer (PHY, PCIEPHY, PCI Express PHY, or PCIe PHY) specification is divided into two sub-layers, corresponding to electrical and logical specifications. The logical sublayer is sometimes further divided into a MAC sublayer and a PCS, although this division is not formally part of the PCIe specification. A specification published by Intel, the PHY Interface for PCI Express (PIPE), defines the MAC/PCS functional partitioning and the interface between these two sub-layers. The PIPE specification also identifies the physical media attachment (PMA) layer, which includes the serializer/deserializer (SerDes) and other analog circuitry; however, since SerDes implementations vary greatly among ASIC vendors, PIPE does not specify an interface between the PCS and PMA.
At the electrical level, each lane consists of two unidirectional differential pairs operating at 2.5, 5, 8, 16 or 32 Gbit/s, depending on the negotiated capabilities. Transmit and receive are separate differential pairs, for a total of four data wires per lane.
A connection between any two PCIe devices is known as a link, and is built up from a collection of one or more lanes. All devices must minimally support a single-lane (x1) link. Devices may optionally support wider links composed of up to 32 lanes. This allows for very good compatibility in two ways:
A PCIe card physically fits (and works correctly) in any slot that is at least as large as it is (e.g., a x1 sized card works in any sized slot);
A slot of a large physical size (e.g., x16) can be wired electrically with fewer lanes (e.g., x1, x4, x8, or x12) as long as it provides the ground connections required by the larger physical slot size.
In both cases, PCIe negotiates the highest mutually supported number of lanes. Many graphics cards, motherboards and BIOS versions are verified to support x1, x4, x8 and x16 connectivity on the same connection.
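A toy model of that width negotiation (the real Link Training and Status State Machine involves much more; this only captures the "highest mutually supported width" rule):

```python
def negotiate_width(card_widths: set[int], slot_widths: set[int]) -> int:
    """Train to the largest link width both ends support."""
    common = card_widths & slot_widths
    if not common:
        raise RuntimeError("no mutually supported width; link cannot train")
    return max(common)

# An x8 card in a x16 slot wired for x1/x4/x8/x16 trains at x8:
print(negotiate_width({1, 4, 8}, {1, 4, 8, 16}))  # 8
# The same card in a "x16 (x4 mode)" slot trains at x4:
print(negotiate_width({1, 4, 8}, {1, 4}))         # 4
```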
The width of a PCIe connector is 8.8 mm, while the height is 11.25 mm, and the length is variable. The fixed section of the connector is 11.65 mm in length and contains two rows of 11 pins each (22 pins total), while the length of the other section is variable depending on the number of lanes. The pins are spaced at 1 mm intervals, and the thickness of the card going into the connector is 1.6 mm.
Data transmission
PCIe sends all control messages, including interrupts, over the same links used for data. The serial protocol can never be blocked, so latency is still comparable to conventional PCI, which has dedicated interrupt lines. When the problem of IRQ sharing of pin-based interrupts is taken into account, together with the fact that message signaled interrupts (MSI) can bypass an I/O APIC and be delivered to the CPU directly, MSI performance ends up being substantially better.
Data transmitted on multiple-lane links is interleaved, meaning that each successive byte is sent down successive lanes. The PCIe specification refers to this interleaving as data striping. While requiring significant hardware complexity to synchronize (or deskew) the incoming striped data, striping can significantly reduce the latency of the nth byte on a link. While the lanes are not tightly synchronized, there is a limit to the lane to lane skew of 20/8/6 ns for 2.5/5/8 GT/s so the hardware buffers can re-align the striped data. Due to padding requirements, striping may not necessarily reduce the latency of small data packets on a link.
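A minimal sketch of byte-wise striping (deskew buffers and the padding rules mentioned above are omitted):

```python
# Stripe a packet across the lanes of a multi-lane link, byte by byte:
# lane i carries bytes i, i + lanes, i + 2*lanes, ...
def stripe(data: bytes, lanes: int) -> list[bytes]:
    return [data[i::lanes] for i in range(lanes)]

def unstripe(per_lane: list[bytes]) -> bytes:
    out = bytearray()
    for i in range(max(len(lane) for lane in per_lane)):
        for lane in per_lane:
            if i < len(lane):
                out.append(lane[i])
    return bytes(out)

packet = bytes(range(10))
assert unstripe(stripe(packet, 4)) == packet  # round-trips losslessly
```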
As with other high data rate serial transmission protocols, the clock is embedded in the signal. At the physical level, PCI Express 2.0 utilizes the 8b/10b encoding scheme (line code) to ensure that strings of consecutive identical digits (zeros or ones) are limited in length. This coding was used to prevent the receiver from losing track of where the bit edges are. In this coding scheme every eight (uncoded) payload bits of data are replaced with 10 (encoded) bits of transmit data, causing a 20% overhead in the electrical bandwidth. To improve the available bandwidth, PCI Express version 3.0 instead uses 128b/130b encoding (1.54% overhead). Line encoding limits the run length of identical-digit strings in data streams and ensures the receiver stays synchronised to the transmitter via clock recovery.
A desirable balance (and therefore spectral density) of 0 and 1 bits in the data stream is achieved by XORing a known binary polynomial as a "scrambler" to the data stream in a feedback topology. Because the scrambling polynomial is known, the data can be recovered by applying the XOR a second time. Both the scrambling and descrambling steps are carried out in hardware.
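A minimal sketch of such an additive scrambler. The 16-bit polynomial x^16 + x^5 + x^4 + x^3 + 1 is, to the best of recollection, the one used by the 8b/10b PCIe generations; the bit and byte ordering below is illustrative rather than the exact on-the-wire formulation from the specification.

```python
# Additive (synchronous) scrambler: XOR the data with a pseudo-random
# LFSR stream; XORing the identical stream again restores the data.
def lfsr_stream(nbytes: int, state: int = 0xFFFF) -> bytes:
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            msb = (state >> 15) & 1
            byte = (byte << 1) | msb
            state = (state << 1) & 0xFFFF
            if msb:
                state ^= 0x0039  # taps x^5 + x^4 + x^3 + 1 (x^16 fed back)
        out.append(byte)
    return bytes(out)

def scramble(data: bytes) -> bytes:
    return bytes(d ^ s for d, s in zip(data, lfsr_stream(len(data))))

payload = b"example TLP bytes"
assert scramble(scramble(payload)) == payload  # XOR twice is the identity
```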
Dual simplex in PCIe means there are two simplex channels on every PCIe lane. Simplex means communication is only possible in one direction. By having two simplex channels, two-way communication is made possible. One differential pair is used for each channel.
Data link layer
The data link layer performs three vital services for the PCIe link:
sequence the transaction layer packets (TLPs) that are generated by the transaction layer,
ensure reliable delivery of TLPs between two endpoints via an acknowledgement protocol (ACK and NAK signaling) that explicitly requires replay of unacknowledged/bad TLPs,
initialize and manage flow control credits
On the transmit side, the data link layer generates an incrementing sequence number for each outgoing TLP. It serves as a unique identification tag for each transmitted TLP, and is inserted into the header of the outgoing TLP. A 32-bit cyclic redundancy check code (known in this context as Link CRC or LCRC) is also appended to the end of each outgoing TLP.
On the receive side, the received TLP's LCRC and sequence number are both validated in the link layer. If either the LCRC check fails (indicating a data error), or the sequence-number is out of range (non-consecutive from the last valid received TLP), then the bad TLP, as well as any TLPs received after the bad TLP, are considered invalid and discarded. The receiver sends a negative acknowledgement message (NAK) with the sequence-number of the invalid TLP, requesting re-transmission of all TLPs forward of that sequence-number. If the received TLP passes the LCRC check and has the correct sequence number, it is treated as valid. The link receiver increments the sequence-number (which tracks the last received good TLP), and forwards the valid TLP to the receiver's transaction layer. An ACK message is sent to remote transmitter, indicating the TLP was successfully received (and by extension, all TLPs with past sequence-numbers.)
If the transmitter receives a NAK message, or no acknowledgement (NAK or ACK) is received until a timeout period expires, the transmitter must retransmit all TLPs that lack a positive acknowledgement (ACK). Barring a persistent malfunction of the device or transmission medium, the link-layer presents a reliable connection to the transaction layer, since the transmission protocol ensures delivery of TLPs over an unreliable medium.
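A toy model of the receive-side checks just described (Python's zlib.crc32 stands in for the actual LCRC polynomial, and the NAK semantics are simplified):

```python
import zlib  # crc32 used here as a stand-in for the 32-bit LCRC

class DataLinkReceiver:
    """Validate LCRC and sequence number; ACK good TLPs, NAK otherwise."""
    def __init__(self) -> None:
        self.expected_seq = 0

    def receive(self, seq: int, payload: bytes, lcrc: int) -> str:
        if zlib.crc32(payload) != lcrc or seq != self.expected_seq:
            # Discard this and later TLPs; request replay from expected_seq.
            return f"NAK (replay from {self.expected_seq})"
        self.expected_seq += 1  # hand the payload to the transaction layer
        return f"ACK {seq}"

rx = DataLinkReceiver()
tlp = b"MWr example"
print(rx.receive(0, tlp, zlib.crc32(tlp)))       # ACK 0
print(rx.receive(1, tlp, zlib.crc32(tlp) ^ 1))   # NAK (corrupted LCRC)
```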
In addition to sending and receiving TLPs generated by the transaction layer, the data-link layer also generates and consumes data link layer packets (DLLPs). ACK and NAK signals are communicated via DLLPs, as are some power management messages and flow control credit information (on behalf of the transaction layer).
In practice, the number of in-flight, unacknowledged TLPs on the link is limited by two factors: the size of the transmitter's replay buffer (which must store a copy of all transmitted TLPs until the remote receiver ACKs them), and the flow control credits issued by the receiver to a transmitter. PCI Express requires all receivers to issue a minimum number of credits, to guarantee a link allows sending PCIConfig TLPs and message TLPs.
Transaction layer
PCI Express implements split transactions (transactions with request and response separated by time), allowing the link to carry other traffic while the target device gathers data for the response.
PCI Express uses credit-based flow control. In this scheme, a device advertises an initial amount of credit for each received buffer in its transaction layer. The device at the opposite end of the link, when sending transactions to this device, counts the number of credits each TLP consumes from its account. The sending device may only transmit a TLP when doing so does not make its consumed credit count exceed its credit limit. When the receiving device finishes processing the TLP from its buffer, it signals a return of credits to the sending device, which increases the credit limit by the restored amount. The credit counters are modular counters, and the comparison of consumed credits to credit limit requires modular arithmetic. The advantage of this scheme (compared to other methods such as wait states or handshake-based transfer protocols) is that the latency of credit return does not affect performance, provided that the credit limit is not encountered. This assumption is generally met if each device is designed with adequate buffer sizes.
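A minimal sketch of that credit accounting (the counter width and credit unit here are illustrative, not the specification's values):

```python
class CreditedTransmitter:
    """Credit-gated sending with modular counters, as described above."""
    MOD = 1 << 12  # counter width chosen for illustration only

    def __init__(self, advertised_credits: int) -> None:
        self.consumed = 0                   # cumulative credits spent
        self.limit = advertised_credits     # cumulative credits granted

    def try_send(self, cost: int) -> bool:
        available = (self.limit - self.consumed) % self.MOD
        if cost > available:
            return False                    # would overrun the receiver
        self.consumed = (self.consumed + cost) % self.MOD
        return True

    def credits_returned(self, n: int) -> None:
        self.limit = (self.limit + n) % self.MOD  # receiver freed buffers

tx = CreditedTransmitter(advertised_credits=8)
print(tx.try_send(6))    # True  (2 credits left)
print(tx.try_send(4))    # False (blocked only when the limit is hit)
tx.credits_returned(6)
print(tx.try_send(4))    # True
```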
PCIe 1.x is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical signaling rate (2.5 gigabaud) divided by the encoding overhead (10 bits per byte). This means a sixteen-lane (x16) PCIe card would then be theoretically capable of 16 × 250 MB/s = 4 GB/s in each direction. While this is correct in terms of data bytes, more meaningful calculations are based on the usable data payload rate, which depends on the profile of the traffic, which is a function of the high-level (software) application and intermediate protocol levels.
Like other high data rate serial interconnect systems, PCIe has a protocol and processing overhead due to the additional transfer robustness (CRC and acknowledgements). Long continuous unidirectional transfers (such as those typical in high-performance storage controllers) can approach >95% of PCIe's raw (lane) data rate. These transfers also benefit the most from increased number of lanes (x2, x4, etc.) But in more typical applications (such as a USB or Ethernet controller), the traffic profile is characterized as short data packets with frequent enforced acknowledgements. This type of traffic reduces the efficiency of the link, due to overhead from packet parsing and forced interrupts (either in the device's host interface or the PC's CPU). Being a protocol for devices connected to the same printed circuit board, it does not require the same tolerance for transmission errors as a protocol for communication over longer distances, and thus, this loss of efficiency is not particular to PCIe.
Efficiency of the link
As for any network-like communication links, some of the raw bandwidth is consumed by protocol overhead:
A PCIe 1.x lane for example offers a data rate on top of the physical layer of 250 MB/s (simplex). This is not the payload bandwidth but the physical layer bandwidth – a PCIe lane has to carry additional information for full functionality.
For Gen2, the total per-transaction overhead is 20, 24, or 28 bytes; for Gen3 it is 22, 26, or 30 bytes.
The resulting link efficiency for a 128-byte payload is 86%, and 98% for a 1024-byte payload. For small accesses like register settings (4 bytes), the efficiency drops as low as 16%.
The maximum payload size (MPS) is set on all devices based on smallest maximum on any device in the chain. If one device has an MPS of 128 bytes, all devices of the tree must set their MPS to 128 bytes. In this case the bus will have a peak efficiency of 86% for writes.
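A minimal sketch reproducing those efficiency figures and the MPS rule (the 20-byte overhead is the smallest Gen2 case quoted above):

```python
# Link efficiency for writes: payload / (payload + per-TLP overhead).
def write_efficiency(payload_bytes: int, overhead_bytes: int = 20) -> float:
    return payload_bytes / (payload_bytes + overhead_bytes)

for size in (4, 128, 1024):
    print(f"{size:5d}-byte payload: {write_efficiency(size):.1%}")
# 4 B: ~16.7%, 128 B: ~86.5%, 1024 B: ~98.1%

# The operative maximum payload size is the smallest maximum in the chain:
def effective_mps(device_mps: list[int]) -> int:
    return min(device_mps)

print(effective_mps([512, 128, 256]))  # 128 -> peak write efficiency ~86%
```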
Applications
PCI Express operates in consumer, server, and industrial applications, as a motherboard-level interconnect (to link motherboard-mounted peripherals), a passive backplane interconnect and as an expansion card interface for add-in boards.
In virtually all modern PCs, from consumer laptops and desktops to enterprise data servers, the PCIe bus serves as the primary motherboard-level interconnect, connecting the host system-processor with both integrated peripherals (surface-mounted ICs) and add-on peripherals (expansion cards). In most of these systems, the PCIe bus co-exists with one or more legacy PCI buses, for backward compatibility with the large body of legacy PCI peripherals.
PCI Express has replaced AGP as the default interface for graphics cards on new systems. Almost all models of graphics cards released since 2010 by AMD (ATI) and Nvidia use PCI Express. Nvidia used the high-bandwidth data transfer of PCIe for its Scalable Link Interface (SLI) technology, which allowed multiple graphics cards of the same chipset and model number to run in tandem for increased performance. This interface has since been discontinued. AMD has also developed a multi-GPU system based on PCIe called CrossFire. AMD, Nvidia, and Intel have released motherboard chipsets that support as many as four PCIe x16 slots, allowing tri-GPU and quad-GPU card configurations.
External GPUs
Theoretically, external PCIe could give a notebook the graphics power of a desktop, by connecting a notebook with any PCIe desktop video card (enclosed in its own external housing, with a power supply and cooling); this is possible with an ExpressCard or Thunderbolt interface. An ExpressCard interface provides bit rates of 5 Gbit/s (0.5 GB/s throughput), whereas a Thunderbolt interface provides bit rates of up to 40 Gbit/s (5 GB/s throughput).
In 2006, Nvidia developed the Quadro Plex external PCIe family of GPUs that can be used for advanced graphic applications for the professional market. These video cards require a PCI Express x8 or x16 slot for the host-side card, which connects to the Plex via a VHDCI carrying eight PCIe lanes.
In 2008, AMD announced the ATI XGP technology, based on a proprietary cabling system that is compatible with PCIe x8 signal transmissions. This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks. Fujitsu launched their AMILO GraphicBooster enclosure for XGP soon thereafter. Around 2010 Acer launched the Dynavivid graphics dock for XGP.
In 2010, external card hubs were introduced that can connect to a laptop or desktop through a PCI ExpressCard slot. These hubs can accept full-sized graphics cards. Examples include MSI GUS, Village Instrument's ViDock, the Asus XG Station, Bplus PE4H V3.2 adapter, as well as more improvised DIY devices. However such solutions are limited by the size (often only x1) and version of the available PCIe slot on a laptop.
The Intel Thunderbolt interface has provided a new option to connect with a PCIe card externally. Magma has released the ExpressBox 3T, which can hold up to three PCIe cards (two at x8 and one at x4). MSI also released the Thunderbolt GUS II, a PCIe chassis dedicated for video cards. Other products such as the Sonnet's Echo Express and mLogic's mLink are Thunderbolt PCIe chassis in a smaller form factor.
In 2017, more fully featured external card hubs were introduced, such as the Razer Core, which has a full-length PCIe x16 interface.
Storage devices
The PCI Express protocol can be used as data interface to flash memory devices, such as memory cards and solid-state drives (SSDs).
The XQD card is a memory card format utilizing PCI Express, developed by the CompactFlash Association, with transfer rates of up to 1 GB/s.
Many high-performance, enterprise-class SSDs are designed as PCI Express RAID controller cards. Before NVMe was standardized, many of these cards utilized proprietary interfaces and custom drivers to communicate with the operating system; they had much higher transfer rates (over 1 GB/s) and IOPS (over one million I/O operations per second) when compared to Serial ATA or SAS drives. For example, in 2011 OCZ and Marvell co-developed a native PCI Express solid-state drive controller for a PCI Express 3.0 x16 slot with a maximum capacity of 12 TB and a performance of up to 7.2 GB/s in sequential transfers and up to 2.52 million IOPS in random transfers.
SATA Express was an interface for connecting SSDs through SATA-compatible ports, optionally providing multiple PCI Express lanes as a pure PCI Express connection to the attached storage device. M.2 is a specification for internally mounted computer expansion cards and associated connectors, which also uses multiple PCI Express lanes.
PCI Express storage devices can implement both AHCI logical interface for backward compatibility, and NVM Express logical interface for much faster I/O operations provided by utilizing internal parallelism offered by such devices. Enterprise-class SSDs can also implement SCSI over PCI Express.
Cluster interconnect
Certain data-center applications (such as large computer clusters) require the use of fiber-optic interconnects due to the distance limitations inherent in copper cabling. Typically, a network-oriented standard such as Ethernet or Fibre Channel suffices for these applications, but in some cases the overhead introduced by routable protocols is undesirable and a lower-level interconnect, such as InfiniBand, RapidIO, or NUMAlink is needed. Local-bus standards such as PCIe and HyperTransport can in principle be used for this purpose, but solutions are only available from niche vendors such as Dolphin ICS and TTTech Auto.
Competing protocols
Other communications standards based on high bandwidth serial architectures include InfiniBand, RapidIO, HyperTransport, Intel QuickPath Interconnect, the Mobile Industry Processor Interface (MIPI), and NVLink. Differences are based on the trade-offs between flexibility and extensibility vs latency and overhead. For example, making the system hot-pluggable, as with Infiniband but not PCI Express, requires that software track network topology changes.
Another example is making the packets shorter to decrease latency (as is required if a bus must operate as a memory interface). Smaller packets mean packet headers consume a higher percentage of the packet, thus decreasing the effective bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport.
PCI Express is targeted by design as a system interconnect (local bus) rather than a device interconnect or routed network protocol. Additionally, its design goal of software transparency constrains the protocol and raises its latency somewhat.
Delays in PCIe 4.0 implementations led to the Gen-Z consortium, the CCIX effort and an open Coherent Accelerator Processor Interface (CAPI) all being announced by the end of 2016.
On 11 March 2019, Intel presented Compute Express Link (CXL), a new interconnect bus, based on the PCI Express 5.0 physical layer infrastructure. The initial promoters of the CXL specification included: Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel and Microsoft.
Integrators list
The PCI-SIG Integrators List lists products made by PCI-SIG member companies that have passed compliance testing. The list includes switches, bridges, NICs, SSDs, and other devices.
See also
Active State Power Management (ASPM)
Peripheral Component Interconnect (PCI)
PCI configuration space
PCI-X (PCI Extended)
PCI/104-Express
PCIe/104
Root complex
Serial Digital Video Out (SDVO)
UCIe
Compute Express Link (CXL)
Notes
References
External links
PCI-SIG Specifications
Computer-related introductions in 2004
Peripheral Component Interconnect
Serial buses
Computer standards
Motherboard expansion slot | PCI Express | Technology | 12,596 |
28,723,663 | https://en.wikipedia.org/wiki/Corporate%20social%20media | Corporate social media is the use of social media platforms, social media communications and social media marketing techniques by and within corporations, ranging from small businesses and tiny entrepreneurial startups to mid-size businesses and huge multinational firms. Within the definition of social media, there are different ways corporations utilize it. Although there is no systematic way in which social media applications can be categorized, there are various methods and approaches to having a strong social media presence.
Social media can be crucial to the success of a growing number of activities in a company's value chain. For marketers, social media is a mandatory element of the promotional mix. Marketers also need to understand that marketing on social media comes with difficulties and challenges, and carries both reputational and economic risks. The push toward social media is thought to create a better experience for consumers, as corporations are able to target specific content to their intended audience. A further benefit is that a social media presence can attract more people and, in turn, build a better-known brand.
History and development
In the 2010s, an increasing number of corporations across most industries adopted the use of social media, either within the workplace for employees, as part of an intranet, or on the publicly available Internet. As a result, corporate use of social networking and microblogging sites such as Facebook, Twitter, Pinterest, and LinkedIn has substantially increased.
A 2010 report indicated that two-thirds of companies had or would have social media initiatives in place.
According to a 2014 article by the Harvard Business Review, "Fifty-eight percent of companies are currently engaged in social networks like Facebook, micro blogs like Twitter, and sharing multimedia on platforms such as YouTube." The Harvard Business Review cites an additional 21% of companies as being in the process of implementing a formal social media initiative. The 2014 HBR report indicates 79% of companies have or will have social media initiatives in place.
According to research conducted in 2021, 91.9 percent of marketing employees working for large corporations (100 or more people) use social media on a daily basis in their jobs. This share has grown substantially over the years and continues to rise.
Budgeting and corporate roles
Budgets for corporate social media are growing by millions of dollars every year. Roles such as social media manager and coordinator have turned social media into an entire department within many companies. It works hand in hand with the marketing, communications, and PR teams to optimize strategies that keep the corporation connected to its audience.
Types
Aichner and Jacob (2015) give the following typology:
Policies
Social media has grown rapidly over the last decade and has become an integral component of business models. Because of the global use of social media, corporations are developing and implementing formal written policies for how they will present themselves on social media. In addition, corporations are often conscious of how their employees present themselves and their company on social media. Before social media, a company had complete control over what it communicated to the public. Now, virtually any employee can speak on behalf of the company, even without proper permission or without following protocol. This can create conflict between corporate policy and those in decision-making roles on the one hand and employees on the other. For example, the Federal Financial Institutions Examination Council, a consortium of bank and credit union regulators, implemented formal social media guidance for its banks and credit unions in December 2013. In the eyes of regulators, risks associated with social media use are of a level that requires formal attention. At a minimum, regulators require that organizations "listen" to what is being said about them on social media platforms in an effort to identify legal, compliance, and reputational concerns.
Corporations have legitimate concerns when it comes to their employees’ use of social media. Social media environments have created the need for distinct and often strict reputation management practices. Some corporations have resorted to monitoring the social media accounts of its employees in order to spot posts and comments that are related to workplace issues or the employer, potentially harmful to business or even leak private corporate information.
Many corporations have also used social media during the hiring process. Survey data shows that within a one-year period, 15 percent of finance and accounting professionals found new jobs through social media. Social media can be both helpful and detrimental to those searching for employment. Hiring managers sometimes search social media to look for reasons not to hire a job applicant. According to a 2013 survey from CareerBuilder.com, 43 percent of employers use social networking sites to research potential hires. Another 45 percent research the "fit" of a job candidate with their company by conducting a search via Google or another search engine. 51 percent of employers who research candidates on social media say they have found content that caused them not to hire a candidate. Job applicants who post racist or homophobic jokes, inappropriate photos, offensive content, or photos depicting drunkenness or other potentially undesirable behaviors may be screened out of hiring processes. Some observers have stated that employer viewing of job candidates' social media profiles may raise privacy concerns.
Benefits and risks
Despite there being risks to consider when utilizing social media, corporations are identifying the benefits associated with adopting a comprehensive corporate social media strategy. Benefits include lower cost and more effective, personal, and engaging marketing and advertising initiatives (as compared with traditional marketing methods such as billboard ads and TV commercials), improved internal and external corporate communications, enhanced overall brand awareness, and better operational efficiency and innovativeness. As a result, corporations are investing at an increasing rate in social media software and external services to strengthen their online presence. The belief is that the benefits outweigh the potential risks of bad press, customer complaints, and brand bashing. Benefits also include being able to interact one on one with the consumers and talking directly to them through social media platforms. This creates trust in businesses and gives customers more chances to build loyalty and commitment to a brand.
Conversely, businesses can find themselves in a bad situation when they use social media poorly. An example of poor social media execution came in November 2013, when JP Morgan decided to hold a question and answer session via Twitter. During that time, two out of three tweets received were negative, owing to the scrutiny the bank had previously faced. In this case, using social media and interacting with the public did not help to promote the company in a positive way. Another example came on September 11, 2013, when AT&T posted a picture on Twitter of a cell phone capturing the Twin Towers memorial lights with the caption "Never forget." The tweet was met with great backlash from consumers for using a tragedy as a marketing opportunity, with many customers threatening to leave AT&T. After seeing the backlash, AT&T removed the post and apologized within about an hour of its posting. Risks also include losing the interest of a social media audience through lack of activity, uninteresting content, or content that is unprofessional or dishonest.
See also
Enterprise social networking
Enterprise social software
Social media use by businesses
References
Further reading
Navigating Social Media Legal Risks: Safeguarding Your Business
2011 Fortune 500 – UMass Dartmouth
Big Bird Tweets: How corporations use social media to gauge public persona - Computerworld
Social media is reinventing how business is done – USATODAY.com
How To Use Social Media To Promote Your Small Business – Forbes
Brand Ambassadors in the Age of Social Media
Social media
Public relations
Social information processing
Promotion and marketing communications | Corporate social media | Technology | 1,512 |
7,986,007 | https://en.wikipedia.org/wiki/Q-Vandermonde%20identity | In mathematics, in the field of combinatorics, the q-Vandermonde identity is a q-analogue of the Chu–Vandermonde identity. Using standard notation for q-binomial coefficients, the identity states that
$\binom{m+n}{k}_{q} = \sum_{j} \binom{m}{k-j}_{q} \binom{n}{j}_{q} q^{j(m-k+j)}.$
The nonzero contributions to this sum come from values of j such that the q-binomial coefficients on the right side are nonzero, that is, $\max(0, k - m) \le j \le \min(n, k)$.
Other conventions
As is typical for q-analogues, the q-Vandermonde identity can be rewritten in a number of ways. In the conventions common in applications to quantum groups, a different q-binomial coefficient is used. This q-binomial coefficient, which we denote here by , is defined by
In particular, it is the unique shift of the "usual" q-binomial coefficient by a power of q such that the result is symmetric in $q$ and $q^{-1}$. Using this q-binomial coefficient, the q-Vandermonde identity can be written in the form
Proof
As with the (non-q) Chu–Vandermonde identity, there are several possible proofs of the q-Vandermonde identity. The following proof uses the q-binomial theorem.
One standard proof of the Chu–Vandermonde identity is to expand the product $(1+x)^{m+n}$ in two different ways. Following Stanley, we can tweak this proof to prove the q-Vandermonde identity, as well. First, observe that the product
$(1 + x)(1 + qx) \cdots (1 + q^{m+n-1}x)$
can be expanded by the q-binomial theorem as
$(1 + x)(1 + qx) \cdots (1 + q^{m+n-1}x) = \sum_{k} q^{\binom{k}{2}} \binom{m+n}{k}_{q} x^{k}.$
Less obviously, we can write
$(1 + x)(1 + qx) \cdots (1 + q^{m+n-1}x) = \Big((1 + x) \cdots (1 + q^{m-1}x)\Big)\Big((1 + q^{m}x)(1 + q^{m+1}x) \cdots (1 + q^{m+n-1}x)\Big)$
and we may expand both subproducts separately using the q-binomial theorem. This yields
$(1 + x)(1 + qx) \cdots (1 + q^{m+n-1}x) = \left(\sum_{i} q^{\binom{i}{2}} \binom{m}{i}_{q} x^{i}\right) \cdot \left(\sum_{j} q^{\binom{j}{2} + mj} \binom{n}{j}_{q} x^{j}\right).$
Multiplying this latter product out and combining like terms gives
$\sum_{k} \left(\sum_{j} q^{\binom{k-j}{2} + \binom{j}{2} + mj} \binom{m}{k-j}_{q} \binom{n}{j}_{q}\right) x^{k}.$
Finally, equating powers of $x$ between the two expressions, dividing through by $q^{\binom{k}{2}}$, and using the elementary identity $\binom{k}{2} - \binom{j}{2} - \binom{k-j}{2} = j(k-j)$ yields the desired result.
This argument may also be phrased in terms of expanding the product $(A + B)^{m+n}$ in two different ways, where A and B are operators (for example, a pair of matrices) that "q-commute," that is, that satisfy BA = qAB.
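For readers who want to check the identity concretely, the following Python sketch (using the sympy library) verifies it symbolically for small parameters. The helper qbinom is our own illustrative implementation of the Gaussian binomial coefficient, not a sympy builtin.

# Symbolic check of the q-Vandermonde identity for small m, n, k.
from sympy import symbols, simplify

q = symbols('q')

def qbinom(n, k):
    # Gaussian binomial coefficient [n choose k]_q; zero outside 0 <= k <= n.
    if k < 0 or k > n:
        return 0
    num, den = 1, 1
    for i in range(1, k + 1):
        num *= 1 - q**(n - k + i)
        den *= 1 - q**i
    return simplify(num / den)

def rhs(m, n, k):
    # Right-hand side: sum over j of [m, k-j]_q [n, j]_q q^{j(m-k+j)}.
    # Terms with j outside the valid range vanish because qbinom returns 0.
    return sum(qbinom(m, k - j) * qbinom(n, j) * q**(j * (m - k + j))
               for j in range(0, k + 1))

for m in range(4):
    for n in range(4):
        for k in range(m + n + 1):
            assert simplify(qbinom(m + n, k) - rhs(m, n, k)) == 0
print('q-Vandermonde identity verified for all m, n < 4')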
Notes
References
Exton, H. (1983), q-Hypergeometric Functions and Applications, New York: Halstead Press; Chichester: Ellis Horwood.
Combinatorics
Q-analogs
Mathematical identities | Q-Vandermonde identity | Mathematics | 470 |
10,217,400 | https://en.wikipedia.org/wiki/Management%20features%20new%20to%20Windows%20Vista | Windows Vista contains a range of new technologies and features that are intended to help network administrators and power users better manage their systems. Notable changes include a complete replacement of both the Windows Setup and the Windows startup processes, completely rewritten deployment mechanisms, new diagnostic and health monitoring tools such as random access memory diagnostic program, support for per-application Remote Desktop sessions, a completely new Task Scheduler, and a range of new Group Policy settings covering many of the features new to Windows Vista. Subsystem for UNIX Applications, which provides a POSIX-compatible environment is also introduced.
Setup
The setup process for Windows Vista has been completely rewritten and is now image-based instead of being sector-based as previous versions of Windows were. The Windows Preinstallation Environment (WinPE) has been updated to host the entire setup process in a graphical environment (as opposed to text-based environments of previous versions of Windows), which allows the use of input devices other than the keyboard throughout the entire setup process. The new interface resembles Windows Vista itself, with features such as ClearType fonts and Windows Aero visual effects. Prior to copying the setup image to disk, users can create, format, and graphically resize disk partitions. The new image-based setup also reduces the duration of the installation procedure when contrasted with Windows XP; Microsoft estimates that Windows Vista can install in as few as 20 minutes despite being more than three times the size of its predecessor.
Windows XP only supported loading storage drivers from floppy diskettes during initialization of the setup process; Windows Vista supports loading drivers for SATA, SCSI, and RAID controllers from any external source in addition to floppy diskettes prior to its installation.
At the end of the setup process, Windows Vista can also automatically download and apply security and device-driver updates from Windows Update. Previous versions of Windows could only configure updates to be installed after the operating system installation.
System recovery
The new Windows Recovery Environment (WinRE) detects and repairs various operating system problems; it presents a set of options dedicated to diagnostics including Startup Repair, System Restore, Backup and Restore, Windows Memory Diagnostics Tool, Command Prompt, and options specific to original equipment manufacturers. WinRE is accessible by pressing F8 during operating system boot or by booting from a Windows installation source such as optical media.
Startup Repair
Startup Repair (formerly System Recovery Troubleshooter Wizard) is a diagnostic feature designed to repair systems that cannot boot due to operating system corruption, incompatible drivers, or damaged hardware; it scans for corruption of operating system components such as Boot Configuration Data and the Windows Registry and also checks boot sectors, file system metadata, Master Boot Records, and partition tables for errors and whether the root cause for failure originated during an installation of Windows. Microsoft designed Startup Repair to repair over eighty percent of issues that users may experience. Windows Vista Service Pack 1 enhances Startup Repair to replace additional system files during the repair process that may be damaged or missing due to corruption.
Component Based Servicing
Package Manager, part of the Windows Vista servicing stack, replaces the previous Package Installer (Update.exe) and Update Installer (Hotfix.exe). Microsoft delivers updates for Windows Vista as files and resources only. Package Manager, Windows Update, and the Control Panel item to turn Windows features on and off, all use the Windows Vista servicing stack. Package Manager can also install updates to an offline Windows image, including updates, boot-critical device drivers, and language packs.
Windows Vista introduced Component-Based Servicing (CBS) as an architecture for installation and servicing.
Deployment
The deployment of Windows Vista uses a hardware-independent image, the Windows Imaging Format (WIM). The image file contains the necessary bits of the operating system, and its contents are copied as is to the target system. Other system specific software, such as device drivers and other applications, are installed and configured afterwards. This reduces the time taken for installation of Windows Vista.
Corporations can author their own image files (using the WIM format) which might include all the applications that the organization wants to deploy. Also multiple images can be kept in a single image file, to target multiple scenarios. This ability is used by Microsoft to include all editions of Windows Vista on the same disc, and install the proper version based on the provided product key. In addition, initial configuration, such as locale settings, account names, etc. can be supplied in XML Answer Files to automate installation.
Microsoft provides a tool called ImageX to support creation of custom images, and edit images after they have been created. It can also be used to generate an image from a running installation, including all data and applications, for backup purposes. WIM images can also be controlled using the Windows System Image Manager, which can be used to edit images and to create XML Answer Files for unattended installations. Sysprep is also included as part of Windows Vista, and is HAL-independent.
Also included in Windows Vista is an improved version of the Files and Settings Transfer Wizard now known as Windows Easy Transfer which allows settings to be inherited from previous installations. User State Migration Tool allows migrating user accounts during large automated deployments.
ClickOnce is a deployment technology for "smart client" applications that enables self-updating Windows-based applications that can be installed and run with minimal user interaction, and in a fashion that does not require administrator access.
The ActiveX Installer Service is an optional component included with the Business, Enterprise and Ultimate editions that provides a method for network administrators in a domain to authorize the installation and upgrade of specific ActiveX controls while operating as a standard user. ActiveX components that have been listed in Group Policy can be installed without a User Account Control consent dialog being displayed.
Event logging and reporting
Windows Vista includes a number of self-diagnostic features which help identify various problems and, if possible, suggest corrective actions. The event logging subsystem in Windows Vista also has been completely overhauled and rewritten around XML to allow applications to more precisely log events. Event Viewer has also been rewritten to take advantage of these new features. There are a large number of different types of event logs that can be monitored including Administrative, Operational, Analytic, and Debug log types. For instance, selecting the Application Logs node in the Scope pane reveals numerous new subcategorized event logs, including many labeled as diagnostic logs. Event logs can now be configured to be automatically forwarded to other systems running Windows Vista or Windows Server 2008. Event logs can also be remotely viewed from other computers or multiple event logs can be centrally logged and managed from a single computer. Event logs can be filtered by one or more criteria, and custom views can be created for one or more events. Such categorizing and advanced filtering allows viewing logs related only to a certain subsystem or an issue with only a certain component. Events can also be directly associated with tasks, via the redesigned Event Viewer.
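As an illustrative sketch of querying these logs from a script (not something described in the source), the snippet below shells out to the built-in wevtutil tool with an XPath filter; event ID 6008 (unexpected shutdown) is chosen arbitrarily for the example.

# Query the Windows event log with an XPath filter via wevtutil.
# 'qe' = query-events; /q: supplies the XPath filter, /c: caps the
# number of events returned, /f:text requests human-readable output.
import subprocess

xpath = '*[System[(EventID=6008)]]'
result = subprocess.run(
    ['wevtutil', 'qe', 'System', '/q:' + xpath, '/c:5', '/f:text'],
    capture_output=True, text=True, check=True)
print(result.stdout)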
Windows Error Reporting
Windows Error Reporting has been improved significantly in Windows Vista. Most importantly a new set of public APIs have been created for reporting failures other than application crashes and hangs. Developers can create custom reports and customize the reporting user interface. The new APIs are documented in MSDN. The architecture of Windows Error Reporting has been revamped with a focus on reliability and user experience. WER can now report errors even when the process is in a very bad state for example if the process has encountered stack exhaustions, PEB/TEB corruptions, heap corruptions etc. In Windows XP, the process terminated silently without generating an error report in these conditions.
A new feature called Problem Reports and Solutions has also been added. It is a Control Panel applet that keeps a record of all system and application errors and issues, as well as presents probable solutions to problems.
Diagnostics and performance
Windows Vista introduces major diagnostic capabilities, which include new feature additions for monitoring performance and for reporting issues:
A new Performance Information and Tools Control Panel applet includes details and features related to performance.
A new Resource Monitor includes System Stability Reports that graph daily events such as application crashes and hangs, device driver and hardware issues, software installations, and system crashes on a System Stability Chart so that users can view system performance over time. Generic details on the chart are signified by an information sign; errors are denoted by a red hazard symbol and potential issues are denoted by a yellow caution symbol. System Stability Reports assign a daily System Reliability Index, with a value of 10.0 indicating no problems; if an issue occurs, the daily value of a system will decrease, but it will gradually increase with each subsequent day where no issue has occurred. Users can view the history of System Stability Reports. The Resource Monitor uses system statistics from the Reliability Analysis Component (RAC).
A Program Compatibility Assistant automatically detects known application compatibility issues (such as conflicts with User Account Control) and presents options for problem resolution; the Program Compatibility Wizard that allows for manually changing compatibility settings is still available.
Client performance degradation such as application or driver interference with power transitions, increased boot, hibernation, or resume times, and reduced system performance due to system visual settings are monitored and reported to the user with options for problem resolution.
Disk Diagnostics detect impending hard disk drive failures and prompt the user to perform backups, and to repair or replace the hard disk drive after Windows Vista detects a hard disk problem.
Memory Diagnostics (comprising the Windows Memory Diagnostics Tool) check for issues caused by random access memory modules.
Network Diagnostics, part of an extensible Network Diagnostics Framework, check for network connection problems and repair most of them automatically; options for resolution are presented when a problem is not repaired automatically. With the release of Service Pack 1, Network Diagnostics can also solve the most common file sharing problems.
Resource Exhaustion Prevention can detect when memory is low and determine which applications are causing this. A memory leak diagnostic can provide information about applications that may have memory leaks.
Performance Monitor includes several new performance counters and various tools for tuning and monitoring system performance and resources. It shows the activities of the CPU, disk I/O, network, memory and other resources in the "Resource View". It supports new graph types, the selection of multiple counters, the retrieval of counter values from a point on the graph, the saving of graphed counter values to a log file, and the option to have a line graph continuously scroll in the graph window instead of wrapping-around on itself.
When run from an elevated command prompt, the perfmon /report command produces a comprehensive System Diagnostics Report complete with details such as hard disk throughput and Wi-Fi performance.
When users attach an external storage device with potential file system errors, the user will be prompted to scan for and fix the file system corruption (Do you want to scan and fix Removable Disk?).
When Windows is rebooted after an unexpected shutdown (such as those caused by a blue screen of death), the user is informed that the shutdown was unexpected (Windows has recovered from an unexpected shutdown) and is provided an option to report the incident to Microsoft for problem analysis and resolution.
Task Manager presents more detailed system information and monitoring. Memory consumption is now displayed as a percentage value instead of as separate commit charge values. A Services page displays all services, with descriptions, names, process IDs, groups, and statuses, and there are Go To Process, Start Service, and Stop Service context menu options. The following changes were also made to Task Manager pages:
The Applications page includes a new Create Dump File context menu option
The Performance page includes an option to open the new Resource Monitor, and now shows memory usage (in addition to page file usage) and system uptime
The Processes page includes new Command Line, Description, Data Execution Prevention, Image Path Name, and Virtualization column options
The Processes page also includes new Open File Location and Properties context menu options
Unresponsive application windows receive visual treatment — they are superimposed with window frosting — to indicate the application has ceased to respond.
Windows Vista contains diagnostic tracing hooks around plug and play operations; because of this users can, for example, view devices that have failed to start, or view unsuccessful plug and play operations such as failed ejections of removable storage devices, with information about the application path, process id, and veto time of the application that caused the ejection to fail.
Windows Vista introduces a new help and support architecture and interface based on the Assistance Platform client and MAML; the new architecture is not backward-compatible with previous versions of Windows.
Remote management
Remote Desktop Protocol 6.0 incorporates support for application-level remoting, improved security (TLS 1.0), support for connections via an SSL gateway, improved remoting of devices, support for .NET remoting including support for remoting of Windows Presentation Foundation applications, WMI scripting, 32-bit color support, dual-monitor support, Network Level Authentication and more.
Remote Assistance, which helps in troubleshooting remotely, is now a full-fledged standalone application and does not use the Help and Support Center or Windows Messenger. It is now based on the Windows Desktop Sharing API. Two administrators can connect to a remote computer simultaneously. Also, a session automatically reconnects after restarting the computer. It also supports session pausing, built-in diagnostics, and XML-based logging. It has been reworked to use less bandwidth for low-speed connections. NAT traversals are also supported, so a session can be established even if the user is behind a NAT device. Remote Assistance is configurable using Group Policy and supports command-line switches so that custom shortcuts can be deployed.
Windows Vista also includes Windows Remote Management (WinRM), which is Microsoft's implementation of WS-Management standard which allows remote computers to be easily managed through a SOAP-based web service. WinRM allows obtaining data (including WMI and other management information) from local and remote computers running Windows XP and Windows Server 2003 (if WinRM is installed on those computers), Windows Server 2008 and all WS-Management protocol implementations on other operating systems. Using WinRM scripting objects along with compatible command-line tools (WinRM or WinRS), allows administrators to remotely run management scripts. A WinRM session is authenticated to minimize security risks.
System tools
New /B switch in CHKDSK for NTFS volumes which clears marked bad sectors on a volume and reevaluates them.
Windows System Assessment Tool, a built-in benchmarking tool, analyzes the different subsystems (graphics, memory, etc.), produces a Windows Experience Index (formerly Windows Performance Rating) and uses the results to allow for comparison to other Windows Vista systems, and for software optimizations. The optimizations can be made by both Windows and third-party software.
Windows Backup (code-named SafeDocs) allows automatic backup of files, recovery of specific files and folders, recovery of specific file types, or recovery of all files. With Windows Vista Business, Enterprise or Ultimate, the entire disk can be backed up to a Complete PC Backup and Restore image and restored when required. Complete PC Restore can be initiated from within Windows Vista, or from the Windows Vista installation disc in the event that Windows cannot start up normally from the hard disk. Backups are created in Virtual PC format and therefore can be mounted using Microsoft Virtual PC. The Backup and Restore Center gives users the ability to schedule periodic backups of files on their computer, as well as recovery from previous backups.
Windows Update has been revised, and now runs completely as a control panel application, not as a web application as in prior versions of Windows.
System Restore is now based on Shadow Copy technology instead of a file-based filter and is therefore more proactive at creating useful restore points. Restore points are now "volume-level", meaning that performing a restore will capture the state of an entire system at a point in time. These can also be restored using the Windows Recovery Environment when booting from the Windows Vista DVD, and an "undo" restore point can be created prior to a restore, in case a user wishes to return to the pre-restored state.
System File Checker is integrated with Windows Resource Protection which protects registry keys and folders too besides critical system files. Using sfc.exe, specific folder paths can be checked, including the Windows folder and the boot folder. Also, scans can be performed against an offline Windows installation folder to replace corrupt files, in case the Windows installation is not bootable. For performing offline scans, System File Checker must be run from another working installation of Windows Vista or a later operating system or from the Windows setup DVD which gives access to the Windows Recovery Environment.
System Configuration (MSConfig) allows configuring various switches for Windows Boot Manager and Boot Configuration Data. It can also launch a variety of tools, such as system information, network diagnostics etc. and enable or disable User Account Control.
Windows Installer 4.0 (MSI 4.0) includes support for features such as User Account Control, Restart Manager, and Multilingual User Interface.
Problem Reports and Solutions is a new control panel user interface for Windows Error Reporting which allows users to see previously sent problems and any solutions or additional information that is available.
Windows Task Manager has a new "Services" tab which gives access to the list of all Windows services, and offers the ability to start and stop any service as well as enable/disable the UAC file and registry virtualization of a process. Additionally, file properties, the full path and command line of started processes, and DEP status of processes can be viewed. It also allows creating a dump file which can be useful for debugging.
Disk Defragmenter can be configured to automatically defragment the hard drive on a regular basis. It features cancellable, low I/O priority, shadow copy-aware defragmentation. It can also defragment the NTFS Master File Table (MFT). The user interface has been simplified, with the color graph, progress indicator and other information such as file system, free space etc., being removed entirely. Chunks of data over 64MB in size will not be defragmented; Microsoft has stated that this is because there is no discernible performance benefit in doing so. The defragmenter is not based on an MMC snap-in. The command line utility defrag.exe offers more control over the defragmentation process. This utility can be used to defragment specific volumes and to just analyze volumes as the defragmenter would in Windows XP. Windows Vista Service Pack 1 adds back the ability to specify which volumes are to be defragmented to the GUI.
The Disk Management console has been improved to allow the creation and the resizing of disk volumes without any data loss. Partitions (volumes) can be resized before starting Windows Vista setup or after installation.
Group Policy settings let administrators set ACLs for the volume interface for disks, CD or DVD drives, tape and floppy disk drives, USB flash drives and other portable devices.
Management Console
Windows Vista includes Microsoft Management Console 3.0 (MMC), which introduced several enhancements, including support for writing .NET snap-ins using Windows Forms and running multiple tasks in parallel. In addition, snap-ins present their UI in a different thread than that in which the operation runs, thus keeping the snap-in responsive, even while doing a computationally intensive task.
The new MMC interface includes improved graphics support, as well as a task pane that shows the actions available for a snap-in when it is selected. Task Scheduler and Windows Firewall are also thoroughly configurable through the management console.
Print Management enables centralized installation and management of all printers in an organization. It allows installation of network-attached printers to a group of clients simultaneously, and provides continually updated status information for the printers and print servers. It also supports finding printers needing operator attention by filtering the display of printers based on error conditions, such as out-of-paper, and can also send e-mail notifications or run scripts when a printer encounters the error condition.
Group Policy
Windows Vista introduces a new XML based file format, ADMX as a replacement for now legacy ADM files to manage Group Policy settings, as well as a new ADML file format for Administrative Templates. Windows Vista additionally introduces a Central Store for ADMX files; Group Policy tools use ADMX files in the Central Store, and these files are replicated to all domain controllers in a domain.
Windows Vista includes over 2400 options for Group Policy, many of which relate to its new features, and which allow administrators to specify configuration for connected groups of computers, especially in a domain environment. Windows Vista supports Multiple Local Group Policy Objects which allows setting different levels of Local Group Policy for individual users. A new XML-based policy definition file format, known as ADMX, has been introduced. ADMX files contain the configuration settings for individual Group Policy Objects (GPO). For domain-based GPOs, the ADMX files can be centrally stored, and all computers on the domain will retrieve them to configure themselves, using the File Replication Service, which is used to replicate files on a configured system from a remote location. The Group Policy service is no longer attached to the Winlogon service; rather, it runs as a service on its own. Group Policy event messages are now logged in the system event log. Group Policy uses Network Location Awareness to refresh the policy configuration as soon as a network configuration change is detected.
New categories for policy settings include power management, device installations, security settings, Internet Explorer settings, and printer settings, among others. Group Policy settings also need to be used, to enable two way communication filtering in the Windows Firewall, which by default enables only incoming data filtering. Printer settings can be used to install printers based on the network location. Whenever the user connects to a different network, the available printers are updated for the new network. Group Policy settings specify which printer is available on which network. Also, printer settings can be used to allow standard users to install printers. Group Policy can also be used for specifying quality of service (QoS) settings. Device installation settings can be used to prevent users from connecting external storage devices, as a means to prevent data theft.
Windows Vista improves Folder Redirection by introducing the ability to independently redirect up to 10 user profile sub-folders to a network location. Up to Windows XP, only the Application Data, Desktop, My Documents, My Pictures, and Start Menu folders can be redirected to a file server. There is also a Management Console snap-in in Windows Vista to allow users to configure Folder Redirection for clients running Windows Vista, Windows XP, and Windows 2000.
Task Scheduler
The redesigned Task Scheduler is now based on Management Console and can be used to automate management and configuration tasks. It already has a number of preconfigured system-level tasks scheduled to run at various times. In addition to time-based triggers, Task Scheduler also supports calendar and event-based triggers, such as starting a task when a particular event is logged to the event log, or even only when multiple events have occurred. Also, several tasks that are triggered by the same event can be configured to run either simultaneously or in a pre-determined chained sequence of a series of actions, instead of having to create multiple scheduled tasks. Tasks can also be configured to run based on system status such as being idle for a pre-configured amount of time, on startup, logoff, or only during or for a specified time. Tasks can be triggered by an XPath expression for filtering events from the Windows Event Log. Tasks can also be delayed for a specified time after the triggering event has occurred, or repeat until some other event occurs. Actions that need to be done if a task fails can also be configured. There are several actions defined across various categories of applications and components. Task Scheduler keeps a history log of all execution details of all the tasks. Other features of Task Scheduler include:
Several new actions: A task can be scheduled to send an e-mail, show a message box, start an executable, or fire a COM handler when it is triggered.
Task Scheduler schema: Task Scheduler allows creating and managing tasks through XML-formatted documents.
New security features, including using Credential Manager to store passwords for tasks on workgroup computers and using Active Directory for task credentials on domain-joined computers so that they cannot be retrieved easily. Also, scheduled tasks are executed in their own session, instead of the same session as system services or the current user.
Ability to wake up a machine remotely or using BIOS timer from sleep or hibernation to execute a scheduled task or run a previously scheduled task after a machine gets turned on.
Ability to attach tasks to events directly from the Event Viewer.
The Task Scheduler 2.0 API is now fully available to VBScript, JScript, PowerShell and other scripting languages.
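Because the Task Scheduler 2.0 API is exposed through COM, any COM-capable language can drive it. The following Python sketch, which assumes the third-party pywin32 package is installed, simply lists the tasks in the root task folder via the well-known Schedule.Service COM class.

# Enumerate scheduled tasks via the Task Scheduler 2.0 COM API.
import win32com.client  # from the third-party pywin32 package

scheduler = win32com.client.Dispatch('Schedule.Service')
scheduler.Connect()  # connect to the Task Scheduler on the local machine

root = scheduler.GetFolder('\\')  # root task folder
for task in root.GetTasks(0):     # flag 0 excludes hidden tasks
    state = 'enabled' if task.Enabled else 'disabled'
    print(task.Name, '-', state)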
Command-line tools
Several new command-line tools are included in Windows Vista. Several existing tools have also been updated and some of the tools from the Windows Resource Kit are now built-in into the operating system.
auditpol — Configure, create, back up and restore audit policies on any computer in the organization from the command line with verbose logging. Replaces auditusr.exe.
bcdedit — Create, delete, and reorder boot loader entries (boot.ini is no longer used).
bitsadmin — BITS administration utility.
chglogon — Enable or disable session logins.
chgport — List or change COM port mappings for DOS application compatibility.
chgusr — Change install mode.
choice — Allows users to select one item from a list of choices and returns the index of the selected choice.
clip — Redirects output of command line tools to the Windows clipboard. This text output can then be pasted into other programs.
cmdkey — Creates, displays, and deletes stored user names and passwords from Credentials Manager.
diskpart — Expanded to support hard disks with the GUID Partition Table, USB media, and a new "shrink" command has been added which facilitates shrinking a pre-existing NTFS partition.
diskraid — Launches the Diskraid application.
dispdiag — Display diagnostics.
expand — Updated version of expand.exe that allows extracting .MSU files. MSU is a self-contained update format known as a 'Microsoft Update Standalone Installer'. MSU files use Intra-Package Delta (IPD) compression technology. IPD technology reduces the download size of an MSU file but still delivers a self-contained package that contains the updated files.
forfiles — Selects a file (or set of files) and executes a command on that file. This is helpful for batch jobs.
icacls — Updated version of cacls. Displays or modifies access control lists (ACLs) and DACLs of files and directories. It can also backup and restore them and set mandatory labels of an object for interaction with Mandatory Integrity Control.
iscsicli — Microsoft iSCSI Initiator.
mklink — create, modify and delete junctions, hard links, and symbolic links.
muiunattend — Multilingual User Interface unattend actions.
netcfg — WinPE network installer.
ocsetup — Windows optional component setup.
pkgmgr — Windows package manager.
pnpunattend — Audit system, unattended online driver install.
pnputil — Microsoft PnP Utility.
query — Query {Process|Session|TermServer|User}
quser — Display information about users logged on to the system.
robocopy — the next version of xcopy with additional features. Compared to the freely available TechNet Magazine version, (XP026), the Windows Vista version additionally supports /EFSRAW switch to copy encrypted files without decrypting them and /SL switch to copy symbolic links instead of their target.
rpcping — Pings a server using RPC.
setx — Creates or modifies environment variables in the user or system environment. Can set variables based on arguments, registry keys or file input.
sxstrace — WinSxS tracing utility.
takeown — Allows administrators to take ownership of a file for which access is denied.
timeout — Accepts a timeout parameter to wait for the specified time period (in seconds) or until any key is pressed. It also accepts a parameter to ignore the key press.
tracerpt — Microsoft TraceRpt.
waitfor — Sends, or waits for, a signal on a system. When /S is not specified, the signal will be broadcast to all the systems in a domain. If /S is specified, then the signal will be sent only to the specified system.
wbadmin — Backup command-line tool.
wecutil — Windows Event collector utility.
wevtutil — Windows Event command line utility.
where — Displays the location of files that match the search pattern. By default, the search is done along the current directory and in the paths specified by the PATH environment variable.
whoami — Can be used to get user name and group information along with the respective Security Identifiers (SID), privileges, logon identifier (logon ID) for the current user (access token) on the local system. i.e. the current logged on user. If no switch is specified, the tool displays the user name in NTLM format (domain\username).
winrm.cmd — Windows Remote Management command line utility.
winrs — Windows Remote Shell (WinRS) allows establishing secure Windows Remote Management sessions to multiple remote computers from a single console.
winsat — Windows System Assessment Tool command line.
Services for UNIX has been renamed Subsystem for UNIX-based Applications, and is included with the Enterprise and Ultimate editions of Windows Vista. Network File System (NFSv3) client support is also included. However, the utilities and SDK are required to be downloaded separately. Also, the server components from the SFU product line (namely Server for NFS, User Name Mapping, Server for NIS, Password Synchronization etc.) are not included.
Scripting
Windows Vista supports scripting and automation capabilities using Windows PowerShell, an object-oriented command-line shell, released by Microsoft, but not included with the operating system. Also, WMI classes expose all controllable features of the operating system, and can be accessed from scripting languages. 13 new WMI providers are included. In addition, DHTML coupled with scripting languages or even PowerShell can be used to create desktop gadgets; gadgets can also be created for configuration of various aspects of the system.
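As a minimal sketch of the scripted WMI access described above (assuming the third-party pywin32 package; the specific class queried is just an example), the snippet below connects to the local WMI service and reads two properties of the operating system.

# Minimal WMI query from Python via the pywin32 package.
import win32com.client

# Connect to the local WMI service (root\cimv2 namespace).
wmi = win32com.client.GetObject(r'winmgmts:\\.\root\cimv2')

# WQL query against the Win32_OperatingSystem class.
for os_info in wmi.ExecQuery('SELECT Caption, Version FROM Win32_OperatingSystem'):
    print(os_info.Caption, os_info.Version)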
References
Windows Vista
Windows Vista | Management features new to Windows Vista | Technology | 6,376 |
7,652,984 | https://en.wikipedia.org/wiki/Suslin%20representation | In mathematics, a Suslin representation of a set of reals (more precisely, elements of Baire space) is a tree whose projection is that set of reals. More generally, a subset A of κ^ω is λ-Suslin if there is a tree T on κ × λ such that A = p[T].
By a tree on κ × λ we mean a subset T ⊆ ⋃_{n<ω} (κ^n × λ^n) closed under initial segments, and p[T] = { f ∈ κ^ω | ∃g ∈ λ^ω : (f, g) ∈ [T] } is the projection of T,
where [T] = { (f, g) ∈ κ^ω × λ^ω | ∀n < ω : (f|n, g|n) ∈ T } is the set of branches through T.
Since [T] is a closed set for the product topology on κ^ω × λ^ω, where κ and λ are each equipped with the discrete topology (and every closed subset of κ^ω × λ^ω arises in this way from some tree on κ × λ), the λ-Suslin subsets of κ^ω are exactly the projections of closed subsets of κ^ω × λ^ω.
When one talks of Suslin sets without specifying the space, one usually means Suslin subsets of R, which descriptive set theorists usually take to be the set ω^ω.
See also
Suslin cardinal
Suslin operation
External links
R. Ketchersid, The strength of an ω1-dense ideal on ω1 under CH, 2004.
Set theory | Suslin representation | Mathematics | 317 |
3,309,479 | https://en.wikipedia.org/wiki/Helioscope | A helioscope is an instrument used in observing the Sun and sunspots.
The helioscope was first used by Benedetto Castelli (1578–1643) and refined by Galileo Galilei (1564–1642). The method involves projecting an image of the Sun onto a white sheet of paper suspended in a darkened room with the use of a telescope.
The first machina helioscopica, or helioscope, was designed by Christoph Scheiner (1575–1650) to assist his sunspot observations.
In the context of modern astroparticle physics, the term helioscope can also refer to an experiment that seeks to observe hypothetical particles (such as the axion) produced inside the sun. Examples of such helioscope experiments searching for axions include the CERN Axion Solar Telescope and International Axion Observatory.
See also
Solar telescope
Heliometer
Spectroheliograph
Spectrohelioscope
References
Astronomical instruments
Telescope types | Helioscope | Astronomy | 198 |
24,300,221 | https://en.wikipedia.org/wiki/Fluminicola%20bipolaris | Fluminicola bipolaris is a fungal species in the family Papulosaceae of the Ascomycota. It was the only known species in the genus Fluminicola until four more species were described in 2017 and 2021.
References
Sordariomycetes
Fungi described in 1998
Fungus species | Fluminicola bipolaris | Biology | 63 |
26,527,624 | https://en.wikipedia.org/wiki/Hilbert%20spectroscopy | Hilbert spectroscopy uses Hilbert transforms to analyze broad-spectrum signals spanning gigahertz to terahertz radio frequencies. One suggested use is the rapid analysis of liquids inside airport passenger luggage.
References
Spectroscopy
Signal processing
Security technology | Hilbert spectroscopy | Physics,Chemistry,Astronomy,Technology,Engineering | 44 |
3,250,345 | https://en.wikipedia.org/wiki/Neurolysis | Neurolysis is the application of physical or chemical agents to a nerve in order to cause a temporary degeneration of targeted nerve fibers. When the nerve fibers degenerate, it causes an interruption in the transmission of nerve signals. In the medical field, this is most commonly and advantageously used to alleviate pain in cancer patients.
The different types of neurolysis include celiac plexus neurolysis, endoscopic ultrasound guided neurolysis, and lumbar sympathetic neurolysis. Chemodenervation and nerve blocks are also associated with neurolysis.
Additionally, there is external neurolysis. Peripheral nerves move (glide) across bones and muscles. A peripheral nerve can be trapped by scarring of surrounding tissue which may lead to potential nerve damage or pain. An external neurolysis is when scar tissue is removed from around the nerve without entering the nerve itself.
Background
Neurolysis is a chemical ablation technique that is used to alleviate pain. Neurolysis is used only when the disease has progressed to a point where no other pain treatments are effective. A neurolytic agent such as alcohol, phenol, or glycerol is typically injected into the nervous system. Chemical neurolysis causes destructive fibrosis, which then disrupts the sympathetic ganglia. This results in a reduction of pain signals transmitted throughout the nerves. The effects generally last for three to six months.
Neurolysis techniques for the treatment of pain were reportedly in use as early as the 1900s by the neurologist Mathieu Jaboulay. Early reported neurolysis helped treat vasospastic disorders such as arterial occlusive disease before the introduction of endovascular procedures.
Types
Celiac plexus neurolysis
Celiac plexus neurolysis (CPN) is the chemical ablation of the celiac plexus. This type of neurolysis is mainly used to treat pain associated with advanced pancreatic cancer. Traditional opioid medications used to treat pancreatic cancer patients may yield inadequate pain relief in the most advanced stages of pancreatic cancer, so the goal of CPN is to increase the efficiency of the medication. This in turn may lead to a decreased dosage, thereby decreasing the severity of the side effects. CPN is also used to decrease the chances of a patient developing an addiction for opioid medications due to the large doses commonly used in treatment.
Traditional CPN approaches and nerve blocks
CPN can be performed by percutaneous injection either anterior or posterior to the celiac plexus. CPN is generally performed complementary to nerve blocks, due to the severe pain associated with the injection itself. Neurolysis is commonly performed only after a successful celiac plexus block. CPN and celiac plexus block (CPB) differ in that CPN is a permanent ablation, whereas CPB is a temporary inhibition of pain.
There are multiple posterior percutaneous approaches, but no clinical evidence suggests that any one technique is more efficient than the rest. The posterior approaches generally utilize two needles, one at each side of the L1 vertebral body pointing towards the T12 vertebral body.
Increasing the spread of the injection may increase the efficacy of the neurolysis.
Endoscopic ultrasound-guided neurolysis
Endoscopic ultrasound (EUS)-guided neurolysis is a technique that performs neurolysis using a linear-array echoendoscope. The EUS technique is minimally invasive and is believed to be safer than the traditional percutaneous approaches. EUS-guided neurolysis technique can be used to target the celiac plexus, the celiac ganglion, or the broad plexus in the treatment of pancreatic cancer-associated pain.
EUS-guided celiac plexus neurolysis (EUS-CPN) is performed with either an oblique-viewing or forward-viewing echoendoscope and is passed through the mouth into the esophagus. From the gastroesophageal junction, EUS imaging allows the doctor to visualize the aorta, which can then be traced to the origin of the celiac artery. The celiac plexus itself cannot be identified, but is located relative to the celiac artery. The neurolysis is then performed with a spray needle that disperses a neurolytic agent, such as alcohol or phenol, into the celiac plexus.
EUS-CPN can be performed unilaterally (centrally) or bilaterally, however, there is no clinical evidence supporting the superiority of one over the other.
EUS-guided neurolysis can also be performed on the celiac ganglion and the broad plexus in a similar fashion to the EUS-CPN. The celiac ganglion neurolysis (EUS-CGN) is more effective than EUS-CPN and broad plexus neurolysis (EUS-BPN) is more effective than EUS-CGN.
Lumbar sympathetic neurolysis
Lumbar sympathetic neurolysis is typically used on patients with ischemic rest pain, generally associated with nonreconstructable arterial occlusive disease. Although the disease is the basis for this type of neurolysis, other diseases such as peripheral neuralgia or vasospastic disorders can receive lumbar sympathetic neurolysis for pain treatment.
Lumbar sympathetic neurolysis is performed between the L1-L4 vertebrae with separate injections at each vertebra junction. The chemicals used for neurolysis of the nerves cause destructive fibrosis and cause a disruption of the sympathetic ganglia. The vasomotor tone is decreased in the area affected by the neurolysis, which in addition to arteriovenous shunting, create a light pink appearance within the affected area. Lumbar sympathetic neurolysis alters the ischemic rest pain transmission by changing norepinephrine and catecholamine levels or by disturbing afferent fibers. This procedure is mainly used only when other feasible approaches to pain management are unable to be used.
Lumbar sympathetic neurolysis is typically performed using absolute alcohol, though other chemicals such as phenol, and other techniques such as radiofrequency or laser ablation, have been studied. To aid the procedure, fluoroscopic or CT guidance is used; fluoroscopic guidance is the most frequently used, giving better real-time monitoring of the needle. The general technique involves using three separate needles rather than one, because this allows better longitudinal spread of the chemicals.
Complications can arise from this procedure such as nerve root injury, bleeding, paralysis, and more. Complications have been seen to be diminished when using the aforementioned radiofrequency or laser ablation techniques in comparison to the injection of alcohol or phenol. Generally, approximately two-thirds of patients can expect a favorable outcome (pain relief with minimal complications). Overall, the minimally invasive technique of lumbar sympathetic neurolysis is important in the relief of ischemic rest pain.
Chemodenervation
Chemodenervation is a process used to manage focal muscle overactivity through the use of either phenol, alcohol, or one of the more recently discovered botulinum toxins (BoNTs). Chemodenervation is used as a complement to neurolysis. The agent of choice is injected into the muscle fibers as opposed to nerve tissue and the two work together to dull the neuronal signaling within the muscles.
The effects of alcohol and phenol injections are different from the effects of BoNTs. Neurolysis mediates the effects of alcohol and phenol injections but does not mediate the effects of BoNT injections. Phenol and alcohol are less expensive, faster acting, can treat larger areas, and can be re-administered or boosted in less than three months; however, those injections also require the patient to be sedated, cause muscle scarring, and can lead to muscle fibrosis. BoNT injections are easier to administer, better accepted by patients, and have reversible effects on muscles; however, they are more expensive, act very slowly, and the body can develop a resistance to them.
References
Materials degradation
Nerves
Neurosurgery | Neurolysis | Materials_science,Engineering | 1,737 |
67,373,736 | https://en.wikipedia.org/wiki/Green%20transport%20hierarchy | The green transport hierarchy (Canada), street user hierarchy (US), sustainable transport hierarchy (Wales), urban transport hierarchy or road user hierarchy (Australia, UK) is a hierarchy of modes of passenger transport prioritising green transport. It is a concept used in transport reform groups worldwide and in policy design. In 2020, the UK government consulted about adding to the Highway Code a road user hierarchy prioritising pedestrians. It is a key characteristic of Australian transport planning.
History
The Green Transportation Hierarchy: A Guide for Personal & Public Decision-Making by Chris Bradshaw was first published September 1994 and revised June 2004. As part of a pedestrian advocacy group in Ottawa, he proposed the hierarchy ranking passenger transport based on environmental emissions. The revised ranking listed, in order: walking, cycling, public transport, car sharing, and finally the private car.
It was first prepared for Ottawalk and the Transportation Working Committee of the Ottawa-Carleton Round-table on the Environment in January 1992, only stating 'Walk, Cycle, Bus, Truck, Car'.
Factors
Mode
Energy source
Trip length
Trip speed
Vehicle size
Passenger load factor
Trip segment
Trip purpose
Traveller
Adoption
The author directed the hierarchy at both individual lifestyle choices and public authorities who should officially direct their resources – funds, moral suasion, and formal sanctions – based on the factors.
Bradshaw described the hierarchy to be logical, but the effect of applying it to seem radical.
The model rejects the concept of the balanced transportation system, in which users are assumed to be free to choose from amongst many different yet 'equally valid' modes. This is because choices ranked low in the hierarchy (above all the private car) are seen as generally having a high impact on the higher-ranked choices of walking, cycling, and public transport.
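As a decision rule, the hierarchy amounts to choosing the highest-ranked mode that is feasible for a given trip. A minimal sketch of that rule follows (the function and data names are ours; the hierarchy itself prescribes no code):

```python
# Bradshaw's revised ranking, greenest first.
HIERARCHY = ["walking", "cycling", "public transport", "car sharing", "private car"]

def choose_mode(feasible):
    """Return the highest-priority mode among those feasible for the trip."""
    for mode in HIERARCHY:
        if mode in feasible:
            return mode
    raise ValueError("no feasible mode for this trip")

print(choose_mode({"public transport", "private car"}))  # -> 'public transport'
```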
See also
Alternatives to car use
Bicycle-friendly
Bill Boaks campaigned for pedestrian priority everywhere
Car-free movement
Complete streets
Cycling advocacy
Cyclability
Health and environmental impact of transport
Health impact of light rail systems
Induced demand
Jaywalking
Peak car
Planetizen
Priority (right of way)
Reclaim the Streets
Road hierarchy
Road traffic safety
Settlement hierarchy
Street hierarchy
Street reclamation
Sustainable transport
Traffic bottleneck
Traffic code
Traffic conflict
Traffic flow
Transportation demand management
Walkability
Walking audit
References
External links
Original 1992 paper
Climate change policy
Rules of the road
Sustainable transport
1992 documents
1994 books
1992 in transport
Hierarchy | Green transport hierarchy | Physics | 458 |
13,056,718 | https://en.wikipedia.org/wiki/Latamoxef | Latamoxef (or moxalactam) is an oxacephem antibiotic usually grouped with the cephalosporins. In oxacephems such as latamoxef, the sulfur atom of the cephalosporin core is replaced with an oxygen atom.
Latamoxef has been associated with prolonged bleeding time, and several cases of coagulopathy, some fatal, were reported during the 1980s. Latamoxef is no longer available in the United States. As with other cephalosporins with a methylthiotetrazole side chain, latamoxef causes a disulfiram reaction when mixed with alcohol. Additionally, the methylthiotetrazole side chain inhibits γ-carboxylation of glutamic acid; this can interfere with the actions of vitamin K.
It has been described as a third-generation cephalosporin.
Synthesis
Oxa-substituted third generation cephalosporin antibiotic (oxacephalosporin).
The benzhydrol ester of 6-aminopenicillanic acid (6-APA) is S-chlorinated and treated with base, whereupon the intermediate sulfenyl chloride fragments (to 2). Next, displacement with propargyl alcohol in the presence of zinc chloride gives predominantly the stereochemistry represented by diastereoisomer 3. The side chain is protected as the phenylacetylamide; the triple bond is partially reduced over 5% Pd–CaCO3 (Lindlar catalyst) and then epoxidized with mCPBA to give 4. The epoxide is opened at the least hindered end with 1-methyl-1H-tetrazole-5-thiol to put in place the future C-3 side chain and give intermediate 5. Jones oxidation, followed in turn by ozonolysis (reductive work-up with zinc–AcOH) and reaction with SOCl2 and pyridine, gives halide 6. The stage is now set for an intramolecular Wittig reaction: displacement with PPh3 and Wittig olefination gives 1-oxacephem 7. Next, a sequence of side-chain exchange and introduction of a 7-methoxy group is undertaken, analogous to the methoxy group present in cephamycins, which gives them their enhanced beta-lactamase stability. First, 7 is converted to the imino chloride with PCl5, then to the imino methyl ether (with methanol), and next hydrolyzed to the free amine (8). Imine formation with 3,5-di-t-butyl-4-hydroxybenzaldehyde is next carried out, leading to 9. Oxidation with nickel(III) oxide gives iminoquinone methide 10, to which methanol is added in a conjugate sense and in the stereochemistry represented by formula 11. The imine is exchanged with Girard's reagent T to give 12, and this is acylated by a suitably protected arylmalonate, as the hemiester hemiacid chloride, so as to give 13. Deblocking with aluminium chloride and anisole gives moxalactam (14).
References
Acetaldehyde dehydrogenase inhibitors
Cephalosporin antibiotics
Tetrazoles
Sulfides
Carboxylic acids
Lactams
Ethers
Propionamides
4-Hydroxyphenyl compounds
Oxygen heterocycles | Latamoxef | Chemistry | 735 |
13,214,553 | https://en.wikipedia.org/wiki/Geographic%20number | A geographic number is a telephone number, from a range of numbers in the United Kingdom National Telephone Numbering Plan, where part of its digit structure contains geographic significance used for routing calls to the physical location of the network termination point of the subscriber to whom the telephone number has been assigned, or where the network termination point does not relate to the geographic area code but where the tariffing remains consistent with that geographic area code.
In the Netherlands, every telephone number consists of 10 digits, and the area code is often separated from the rest with a hyphen. The area code 0592, for example, covers the city of Assen and its surroundings, while Groningen uses 050. A subscriber in Assen therefore has a 6-digit subscriber number, while one in Groningen has a 7-digit subscriber number.
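A minimal sketch of the 10-digit rule described above (the function name and the tiny area-code table are ours, for illustration only):

```python
# Dutch geographic numbers: area code plus subscriber number always total
# 10 digits (including the leading 0), so a longer area code leaves a shorter
# subscriber part.
def split_dutch_number(number, area_codes=("0592", "050")):
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) != 10:
        raise ValueError("Dutch numbers have 10 digits in total")
    for code in area_codes:
        if digits.startswith(code):
            return code, digits[len(code):]
    raise ValueError("unknown area code")

print(split_dutch_number("0592-123456"))  # ('0592', '123456'): 6-digit subscriber part
print(split_dutch_number("050-1234567"))  # ('050', '1234567'): 7-digit subscriber part
```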
See also
Telephone number
Telephone numbering plan
List of country calling codes
Caller ID
Telephone numbers
Identifiers | Geographic number | Mathematics | 179 |
15,654,857 | https://en.wikipedia.org/wiki/Paradiseo | ParadisEO is an object-oriented framework dedicated to the flexible design of metaheuristics. It uses EO, a template-based, ANSI-C++ compliant computation library. ParadisEO is portable across both Windows system and sequential platforms (Unix, Linux, Mac OS X, etc.). ParadisEO is distributed under the CeCill license and can be used under several environments.
See also
Java Evolutionary Computation Toolkit, a toolkit to implement Evolutionary Algorithms
MOEA Framework, an open source Java framework for multiobjective evolutionary algorithms
References
External links
Official site
Previous official site , at Paradiseo website
Team, at DOLPHIN project-team website
Distributed computing architecture
Metaheuristics
Numerical programming languages
Numerical analysis software for Linux
Numerical analysis software for macOS
Numerical analysis software for Windows | Paradiseo | Technology | 162 |
23,294,634 | https://en.wikipedia.org/wiki/Dixyrazine | Dixyrazine, also known as dixypazin (oxalate), sold under the brand names Ansiolene, Esocalm, Esucos, Metronal, and Roscal, is a typical antipsychotic of the phenothiazine group described as a neuroleptic and antihistamine. It was first introduced in Germany in 1969. It is used as a neuroleptic, anxiolytic, and antihistamine in doses between 12.5 and 75 mg a day.
Synthesis
Sodamide alkylation of phenothiazine (1) with 1-bromo-3-chloro-2-methylpropane [6974-77-2] (2) gives 10-(3-chloro-2-methylpropyl)phenothiazine, CID 12299119 (3). Alkylation with 1-[2-(2-hydroxyethoxy)ethyl]piperazine [13349-82-1] (4), displacing the remaining halogen, completes the side chain and hence the synthesis of dixyrazine (5).
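The product structure can be inspected computationally. The sketch below uses RDKit; the SMILES string is our own rendering of the structure just described (phenothiazine N-alkylated with a 2-methylpropyl linker to an N'-[2-(2-hydroxyethoxy)ethyl]piperazine) and should be treated as an assumption rather than an authoritative registry entry:

```python
# Sketch: check the drawn structure with RDKit. The SMILES below is our own
# rendering of dixyrazine as described in the text, not a registry entry.
from rdkit import Chem
from rdkit.Chem import Descriptors

smiles = "OCCOCCN1CCN(CC(C)CN2c3ccccc3Sc3ccccc32)CC1"
mol = Chem.MolFromSmiles(smiles)
print(Chem.MolToSmiles(mol))              # canonical SMILES
print(round(Descriptors.MolWt(mol), 1))   # about 427.6 for C24H33N3O2S
```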
References
Primary alcohols
Dopamine antagonists
Ethers
H1 receptor antagonists
Phenothiazines
Piperazines
Typical antipsychotics
Ethanolamines | Dixyrazine | Chemistry | 285 |
1,045,467 | https://en.wikipedia.org/wiki/Travelers%27%20diarrhea | Travelers' diarrhea (TD) is a stomach and intestinal infection. TD is defined as the passage of unformed stool (one or more by some definitions, three or more by others) while traveling. It may be accompanied by abdominal cramps, nausea, fever, headache and bloating. Occasionally dysentery may occur. Most travelers recover within three to four days with little or no treatment. About 12% of people may have symptoms for a week.
Bacteria are responsible for more than half of cases, typically via foodborne illness and waterborne diseases. The bacteria enterotoxigenic Escherichia coli (ETEC) are typically the most common except in Southeast Asia, where Campylobacter is more prominent. About 10 to 20 percent of cases are due to norovirus. Protozoa such as Giardia may cause longer term disease. The risk is greatest in the first two weeks of travel and among young adults. People affected are more often from the developed world.
Recommendations for prevention include eating only properly cleaned and cooked food, drinking bottled water, and frequent hand washing. The oral cholera vaccine, while effective for cholera, is of questionable use for travelers' diarrhea. Preventive antibiotics are generally discouraged. Primary treatment includes rehydration and replacing lost salts (oral rehydration therapy). Antibiotics are recommended for significant or persistent symptoms, and can be taken with loperamide to decrease diarrhea. Hospitalization is required in less than 3 percent of cases.
Estimates of the percentage of people affected range from 20 to 50 percent among travelers to the developing world. TD is particularly common among people traveling to Asia (except for Japan and Singapore), the Middle East, Africa, Latin America, and Central and South America. The risk is moderate in Southern Europe, and Russia. TD has been linked to later irritable bowel syndrome and Guillain–Barré syndrome. It has colloquially been known by a number of names, including "Montezuma's revenge", "mummy tummy" and "Delhi belly".
Signs and symptoms
The onset of TD usually occurs within the first week of travel, but may occur at any time while traveling, and even after returning home, depending on the incubation period of the infectious agent. Bacterial TD typically begins abruptly, but Cryptosporidium may incubate for seven days, and Giardia for 14 days or more, before symptoms develop. Typically, a traveler experiences four to five loose or watery bowel movements each day. Other commonly associated symptoms are abdominal cramping, bloating, fever, and malaise. Appetite may decrease significantly. Though unpleasant, most cases of TD are mild, and resolve in a few days without medical intervention.
Blood or mucus in the diarrhea, significant abdominal pain, or high fever suggests a more serious cause, such as cholera, characterized by a rapid onset of weakness and torrents of watery diarrhea with flecks of mucus (described as "rice water" stools). Medical care should be sought in such cases; dehydration is a serious consequence of cholera, and may trigger serious sequelae—including, in rare instances, death—as rapidly as 24 hours after onset if not addressed promptly.
Causes
Infectious agents are the primary cause of travelers' diarrhea. Bacterial enteropathogens cause about 80% of cases. Viruses and protozoans account for most of the rest.
The most common causative agent isolated in countries surveyed has been enterotoxigenic Escherichia coli (ETEC). Enteroaggregative E. coli is increasingly recognized. Shigella spp. and Salmonella spp. are other common bacterial pathogens. Campylobacter, Yersinia, Aeromonas, and Plesiomonas spp. are less frequently found. Mechanisms of action vary: some bacteria release toxins which bind to the intestinal wall and cause diarrhea; others damage the intestines themselves by their direct presence.
The Brachyspira pilosicoli pathogen also appears to be responsible for many cases of chronic intermittent watery diarrhea. It is diagnosed only through colonic biopsies and microscopic discovery of a false brush border on H&E or Warthin–Starry silver stain: its brush border is stronger and longer than that of Brachyspira aalborgi. It often goes undiagnosed, because the organism does not grow in stool culture and 16S PCR panel primers do not match Brachyspira sequences.
While viruses are associated with less than 20% of adult cases of travelers' diarrhea, they may be responsible for nearly 70% of cases in infants and children. Diarrhea due to viral agents is unaffected by antibiotic therapy, but is usually self-limited. Protozoans such as Giardia lamblia, Cryptosporidium and Cyclospora cayetanensis can also cause diarrhea. Pathogens commonly implicated in travelers' diarrhea appear in the table in this section.
A subtype of travelers' diarrhea afflicting hikers and campers, sometimes known as wilderness diarrhea, may have a somewhat different frequency of distribution of pathogens.
Risk factors
The primary source of infection is ingestion of fecally contaminated food or water. Attack rates are similar for men and women.
The most important determinant of risk is the traveler's destination. High-risk destinations include developing countries in Latin America, Africa, the Middle East, and Asia. Among backpackers, additional risk factors include drinking untreated surface water and failure to maintain personal hygiene practices and clean cookware. Campsites often have very primitive (if any) sanitation facilities, making them potentially as dangerous as any developing country.
Although travelers' diarrhea usually resolves within three to five days (mean duration: 3.6 days), in about 20% of cases, the illness is severe enough to require bedrest, and in 10%, the illness duration exceeds one week. For those prone to serious infections, such as bacillary dysentery, amoebic dysentery, and cholera, TD can occasionally be life-threatening. Others at higher-than-average risk include young adults, immunosuppressed persons, persons with inflammatory bowel disease or diabetes, and those taking H2 blockers or antacids.
Immunity
Travelers often get diarrhea from eating and drinking foods and beverages that have no adverse effects on local residents. This is due to immunity that develops with constant, repeated exposure to pathogenic organisms. The extent and duration of exposure necessary to acquire immunity has not been determined; it may vary with each individual organism. A study among expatriates in Nepal suggests that immunity may take up to seven years to develop—presumably in adults who avoid deliberate pathogen exposure.
Conversely, immunity acquired by American students while living in Mexico disappeared, in one study, as quickly as eight weeks after cessation of exposure.
Prevention
Sanitation
Recommendations include avoidance of questionable foods and drinks, on the assumption that TD is fundamentally a sanitation failure, leading to bacterial contamination of drinking water and food. While the effectiveness of this strategy has been questioned, given that travelers have little or no control over sanitation in hotels and restaurants, and little evidence supports the contention that food vigilance reduces the risk of contracting TD, guidelines continue to recommend basic, common-sense precautions when making food and beverage choices:
Maintain good hygiene and use only safe water for drinking and brushing teeth.
Safe beverages include bottled water, bottled carbonated beverages, and water boiled or appropriately treated by the traveler (as described below). Caution should be exercised with tea, coffee, and other hot beverages that may be only heated, not boiled.
In restaurants, insist that bottled water be unsealed in your presence; reports of locals filling empty bottles with untreated tap water and reselling them as purified water have surfaced. When in doubt, a bottled carbonated beverage is the safest choice, since it is difficult to simulate carbonation when refilling a used bottle.
Avoid ice, which may not have been made with safe water.
Avoid green salads, because the lettuce and other uncooked ingredients are unlikely to have been washed with safe water.
Avoid eating raw fruits and vegetables unless cleaned and peeled personally.
If handled properly, thoroughly cooked fresh and packaged foods are usually safe. Raw or undercooked meat and seafood should be avoided. Unpasteurized milk, dairy products, mayonnaise, and pastry icing are associated with increased risk for TD, as are foods and beverages purchased from street vendors and other establishments where unhygienic conditions may be present.
Water
Although safe bottled water is now widely available in most remote destinations, travelers can treat their own water if necessary, or as an extra precaution.
Techniques include boiling, filtering, chemical treatment, and ultraviolet light; boiling is by far the most effective of these methods. Boiling rapidly kills all active bacteria, viruses, and protozoa. Prolonged boiling is usually unnecessary; most microorganisms are killed within seconds once water approaches the boiling point. The second-most effective method is to combine filtration and chemical disinfection. Filters eliminate most bacteria and protozoa, but not viruses. Chemical treatment with halogens (chlorine bleach, tincture of iodine, or commercial tablets) has low-to-moderate effectiveness against protozoa such as Giardia, but works well against bacteria and viruses.
UV light is effective against both viruses and cellular organisms, but only works in clear water, and it is ineffective unless manufacturer's instructions are carefully followed for maximum water depth/distance from UV source, and for dose/exposure time. Other claimed advantages include short treatment time, elimination of the need for boiling, no taste alteration, and decreased long-term cost compared with bottled water. The effectiveness of UV devices is reduced when water is muddy or turbid; as UV is a type of light, any suspended particles create shadows that hide microorganisms from UV exposure.
Medications
Bismuth subsalicylate four times daily reduces rates of travelers' diarrhea. Though many travelers find a four-times-per-day regimen inconvenient, lower doses are not effective. Potential side effects include black tongue, black stools, nausea, constipation, and ringing in the ears. Bismuth subsalicylate should not be taken by those with aspirin allergy, kidney disease, or gout, nor concurrently with certain antibiotics such as the quinolones, and should not be taken continuously for more than three weeks. Some countries do not recommend it due to the risk of rare but serious side effects.
A hyperimmune bovine colostrum to be taken by mouth is marketed in Australia for prevention of ETEC-induced TD. As yet, no studies show efficacy under actual travel conditions.
Though effective, antibiotics are not recommended for the prevention of TD in most situations because of the risk of allergy or adverse reactions to the antibiotics, and because intake of preventive antibiotics may decrease the effectiveness of such drugs should a serious infection develop subsequently. Antibiotics can also cause vaginal yeast infections, or overgrowth of the bacterium Clostridioides difficile, leading to pseudomembranous colitis and its associated severe, unrelenting diarrhea.
Antibiotics may be warranted in special situations where benefits outweigh the above risks, such as immunocompromised travelers, chronic intestinal disorders, prior history of repeated disabling bouts of TD, or scenarios in which the onset of diarrhea might prove particularly troublesome. Options for prophylactic treatment include the fluoroquinolone antibiotics (such as ciprofloxacin), azithromycin, and trimethoprim/sulfamethoxazole, though the latter has proved less effective in recent years. Rifaximin may also be useful. Quinolone antibiotics may bind to metallic cations such as bismuth, and should not be taken concurrently with bismuth subsalicylate. Trimethoprim/sulfamethoxazole should not be taken by anyone with a history of sulfa allergy.
Vaccination
The oral cholera vaccine, while effective for prevention of cholera, is of questionable use for prevention of TD. A 2008 review found tentative evidence of benefit. A 2015 review stated it may be reasonable for those at high risk of complications from TD. Several vaccine candidates targeting ETEC or Shigella are in various stages of development.
Probiotics
One 2007 review found that probiotics may be safe and effective for prevention of TD, while another review found no benefit. A 2009 review confirmed that more study is needed, as the evidence to date is mixed.
Treatment
Most cases of TD are mild and resolve in a few days without treatment, but severe or protracted cases may result in significant fluid loss and dangerous electrolytic imbalance. Dehydration due to diarrhea can also alter the effectiveness of medicinal and contraceptive drugs. Adequate fluid intake (oral rehydration therapy) is therefore a high priority. Commercial rehydration drinks are widely available; alternatively, purified water or other clear liquids are recommended, along with salty crackers or oral rehydration salts (available in stores and pharmacies in most countries) to replenish lost electrolytes. Carbonated water or soda, left open to allow dissipation of the carbonation, is useful when nothing else is available. In severe or protracted cases, the oversight of a medical professional is advised.
Antibiotics
If diarrhea becomes severe (typically defined as three or more loose stools in an eight-hour period), especially if associated with nausea, vomiting, abdominal cramps, fever, or blood in stools, medical treatment should be sought. Such patients may benefit from antimicrobial therapy. A 2000 literature review found that antibiotic treatment shortens the duration and severity of TD; most reported side effects were minor, or resolved on stopping the antibiotic.
The antibiotic recommended varies based upon the destination of travel. Trimethoprim–sulfamethoxazole and doxycycline are no longer recommended because of high levels of resistance to these agents. Antibiotics are typically given for three to five days, but single doses of azithromycin or levofloxacin have been used. Rifaximin and rifamycin are approved in the U.S. for treatment of TD caused by ETEC. If diarrhea persists despite therapy, travelers should be evaluated for bacterial strains resistant to the prescribed antibiotic, possible viral or parasitic infections, bacterial or amoebic dysentery, Giardia, helminths, or cholera.
Antimotility agents
Antimotility drugs such as loperamide and diphenoxylate reduce the symptoms of diarrhea by slowing transit time in the gut. They may be taken to slow the frequency of stools, but not enough to stop bowel movements completely, which delays expulsion of the causative organisms from the intestines. They should be avoided in patients with fever, bloody diarrhea, and possible inflammatory diarrhea. Adverse reactions may include nausea, vomiting, abdominal pain, hives or rash, and loss of appetite. Antimotility agents should not, as a rule, be taken by children under age two.
Epidemiology
An estimated 10 million people—20 to 50% of international travelers—develop TD each year. It is more common in the developing world, where rates exceed 60%, but has been reported in some form in virtually every travel destination in the world.
Society and culture
Moctezuma's revenge is a colloquial term for travelers' diarrhea contracted in Mexico. The name refers to Moctezuma II (1466–1520), the Tlatoani (ruler) of the Aztec civilization who was overthrown by the Spanish conquistador Hernán Cortés in the early 16th century, thereby bringing large portions of what is now Mexico and Central America under the rule of the Spanish crown. The name alludes to retribution for this conquest: Cortés and his soldiers carried the smallpox virus, to which the people of Mexico had never been exposed, and the resulting infection reduced the population of Tenochtitlan by 40 percent in 1520 alone.
Wilderness diarrhea
Wilderness diarrhea, also called wilderness-acquired diarrhea (WAD) or backcountry diarrhea, refers to diarrhea among backpackers, hikers, campers and other outdoor recreationalists in wilderness or backcountry situations, either at home or abroad. It is caused by the same fecal microorganisms as other forms of travelers' diarrhea, usually bacterial or viral. Since wilderness campsites seldom provide access to sanitation facilities, the infection risk is similar to that of any developing country. Water treatment, good hygiene, and dish washing have all been shown to reduce the incidence of WAD.
References
External links
Diarrhea
Foodborne illnesses
Waterborne diseases
Infectious diseases
Tourism
Conditions diagnosed by stool test
Wikipedia medicine articles ready to translate
Wikipedia emergency medicine articles ready to translate
Travel | Travelers' diarrhea | Physics | 3,554 |
58,581,625 | https://en.wikipedia.org/wiki/KNDy%20neuron | Kisspeptin, neurokinin B, and dynorphin (KNDy) neurons are neurons in the hypothalamus of the brain that are central to the hormonal control of reproduction.
KNDy neurons in the hypothalamus coexpress kisspeptin, neurokinin B (NKB) and dynorphin. They are involved in the negative feedback of gonadotropin-releasing hormone (GnRH) release in the hypothalamic–pituitary–gonadal (HPG) axis. Sex steroids released from the gonads act on KNDy neurons as inhibitors of kisspeptin release. This inhibition provides negative feedback control on the HPG axis.
KNDy peptide colocalization was first discovered in 2007 in sheep and was later confirmed in mice, rats, cows, and nonhuman primates. KNDy neurons are thought to be present in the human hypothalamus as well, given their conservation across most mammalian species.
Other roles of KNDy neurons include influences on prolactin production, puberty, the effects of stress on reproduction, and the control of thermoregulation.
GnRH pulse regulation
KNDy neurons control GnRH pulse generation through the release of three known peptides: neurokinin B (NKB), dynorphin, and kisspeptin. NKB and dynorphin are the two peptides that regulate the secretion of kisspeptin. NKB is the stimulating peptide that initiates the pulsatile release of GnRH by activating NKB receptors, called TACR3, on mutually connected KNDy neurons, causing them to release kisspeptin in an autocrine signalling pathway. Kisspeptin then activates GPR54 receptors on GnRH neurons, inducing the pulsatile release of GnRH, and on KNDy neurons, adding to the stimulatory effect of NKB. Eventually the pulse is terminated by dynorphin, which acts on κ-opioid receptors (KOR) in KNDy neurons to inhibit NKB and kisspeptin secretion, and which inhibits GnRH secretion by acting directly on GnRH neuron receptors.
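The pulse logic described above (fast autocrine excitation through NKB, terminated by slower activity-driven dynorphin inhibition) has the generic structure of a relaxation oscillator. The sketch below uses the classic FitzHugh-Nagumo equations purely to illustrate that structure; it is not a published KNDy model, and the mapping of variables to peptides is our own illustrative assumption:

```python
# Generic fast-excitation / slow-inhibition relaxation oscillator (FitzHugh-Nagumo).
# Here v stands in for NKB-like self-excitation of the KNDy population and w for
# dynorphin-like slow negative feedback that terminates each pulse.
import numpy as np

dt, steps = 0.05, 8000
v, w = -1.0, -0.5
trace = np.empty(steps)
for i in range(steps):
    dv = v - v**3 / 3 - w + 0.5        # fast positive feedback with saturation
    dw = 0.08 * (v + 0.7 - 0.8 * w)    # slow inhibitory variable
    v += dt * dv
    w += dt * dw
    trace[i] = v

# 'trace' shows repeated pulses: excitation ignites a burst, the slow inhibitory
# variable builds up and shuts it off, then decays until the next pulse.
```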
Sexual dimorphism in KNDy neuron populations
KNDy neurons are most densely located in the arcuate nucleus (ARC) of the hypothalamus, but also exist in the rostral periventricular area of the third ventricle (RP3V) and the preoptic area (POA). Expression of the KNDy peptides has been shown to differ between species and sexes, and with fluctuating steroid hormone levels. Improvements in immunohistochemistry and deep-brain imaging techniques have revealed information about KNDy cell populations and sexual dimorphism. Larger populations appear in the female ARC than in the male ARC. The RP3V is composed of the anteroventral periventricular nucleus (AVPV) and the preoptic periventricular nucleus, where KNDy neurons are sexually dimorphic. KNDy populations and sexual dimorphism appear in most species studied, including humans, but presence in the RP3V is primarily associated with rodents, with similar sexual dimorphism.
Steroid hormone feedback
Negative feedback of steroid hormones in both males and females controls the pulsatile nature of GnRH secretion, subsequently increasing or decreasing the release of LH and FSH from the anterior pituitary. This is mediated by estrogen receptor α (ERα), expressed on KNDy neurons. The binding of estrogen or testosterone to this receptor in the ARC region inhibits KNDy neurons and therefore prevents GnRH release. KNDy neurons are involved in positive feedback of the HPG axis. This mechanism is best exemplified by the LH surge in the female reproductive cycle, where the increase of estrogen from the growing ovarian follicle causes positive feedback in the AVPV region, and subsequently a rise in LH from the pituitary.
References
Endocrine system
Human female endocrine system
Hypothalamus
Limbic system
Neuroendocrinology
Neurons | KNDy neuron | Biology | 893 |
32,925,712 | https://en.wikipedia.org/wiki/Ammonium%20nonanoate | Ammonium nonanoate is a nonsystemic, broad-spectrum contact herbicide that has no soil activity. It can be used for the suppression and control of weeds, including grasses, vines, underbrush, and annual/perennial plants, including moss, saplings, and tree suckers. Ammonium nonanoate is marketed as an aqueous solutions, at room temperature at its maximum concentration in water (40%). Solutions are colorless to pale yellow liquid with a slight fatty acid odor. It is stable in storage. Ammonium nonanoate exists as white crystals.
Ammonium nonanoate is made from ammonia and nonanoic acid, a carboxylic acid widely distributed in nature, mainly as derivatives (esters) in such foods as apples, grapes, cheese, milk, rice, beans, oranges, and potatoes and in many other nonfood sources.
References
Herbicides
Ammonium compounds | Ammonium nonanoate | Chemistry,Biology | 191 |
7,656,821 | https://en.wikipedia.org/wiki/Dan-Virgil%20Voiculescu | Dan-Virgil Voiculescu (born 14 June 1949) is a Romanian professor of mathematics at the University of California, Berkeley. He has worked in single operator theory, operator K-theory and von Neumann algebras. More recently, he developed free probability theory.
Education and career
Voiculescu studied at the University of Bucharest, receiving his PhD in 1977 under the direction of Ciprian Foias. He was an assistant at the University of Bucharest (1972–1973), a researcher at the Institute of Mathematics of the Romanian Academy (1973–1975), and a researcher at INCREST (1975–1986). He came to Berkeley in 1986 for the International Congress of Mathematicians, and stayed on as visiting professor. Voiculescu was appointed professor at Berkeley in 1987.
Awards and honors
He received the 2004 NAS Award in Mathematics from the National Academy of Sciences (NAS) for “the theory of free probability, in particular, using random matrices and a new concept of entropy to solve several hitherto intractable problems in von Neumann algebras.”
Voiculescu was elected to the National Academy of Sciences in 2006. In 2012 he became a fellow of the American Mathematical Society.
References
External links
Berkeley page
Notes on Free probability aspects of random matrices
Dan-Virgil Voiculescu: visionary operator algebraist and creator of free probability theory
Romanian emigrants to the United States
Members of the United States National Academy of Sciences
20th-century Romanian mathematicians
20th-century American mathematicians
21st-century American mathematicians
Mathematical analysts
Probability theorists
University of California, Berkeley faculty
Romanian academics
University of Bucharest alumni
Scientists from Bucharest
Fellows of the American Mathematical Society
Living people
1949 births
21st-century Romanian mathematicians | Dan-Virgil Voiculescu | Mathematics | 341 |
68,807,012 | https://en.wikipedia.org/wiki/Protein%20aggregation%20predictors | Computational methods that use protein sequence and/ or protein structure to predict protein aggregation. The table below, shows the main features of software for prediction of protein aggregation
Table
See also
PhasAGE toolbox
Amyloid
Protein aggregation
References
Protein structure
Structural bioinformatics software
Proteomics
Neurodegenerative disorders | Protein aggregation predictors | Chemistry | 63 |
945,957 | https://en.wikipedia.org/wiki/New%20Foundations | In mathematical logic, New Foundations (NF) is a non-well-founded, finitely axiomatizable set theory conceived by Willard Van Orman Quine as a simplification of the theory of types of Principia Mathematica.
Definition
The well-formed formulas of NF are the standard formulas of propositional calculus with two primitive predicates: equality (=) and membership (∈). NF can be presented with only two axiom schemata:
Extensionality: Two objects with the same elements are the same object; formally, given any set A and any set B, if for every set X, X is a member of A if and only if X is a member of B, then A is equal to B.
A restricted axiom schema of comprehension: {x | φ} exists for each stratified formula φ.
A formula φ is said to be stratified if there exists a function f from pieces of φ's syntax to the natural numbers, such that for any atomic subformula x ∈ y of φ we have f(y) = f(x) + 1, while for any atomic subformula x = y of φ, we have f(x) = f(y).
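Stratification is a purely mechanical condition: each membership atom imposes the difference constraint f(y) = f(x) + 1 and each equality atom imposes f(x) = f(y), so a formula is stratified exactly when this constraint system is solvable. Below is a minimal sketch of such a check (the function name and input encoding are ours; it assumes the formula has already been reduced to lists of its atomic pairs):

```python
# Sketch: decide whether a set of atomic constraints admits a stratification.
# The formula is stratified exactly when no cycle in the constraint graph has
# a nonzero total offset.

def stratify(membership, equality):
    """Return a type assignment dict if one exists, else None.

    membership: list of (x, y) pairs standing for atoms 'x ∈ y'.
    equality:   list of (x, y) pairs standing for atoms 'x = y'.
    Integer types can be shifted to naturals within each connected component.
    """
    edges = {}
    def add(u, v, w):                        # edge meaning f(v) = f(u) + w
        edges.setdefault(u, []).append((v, w))
        edges.setdefault(v, []).append((u, -w))
    for x, y in membership:
        add(x, y, 1)
    for x, y in equality:
        add(x, y, 0)

    types = {}
    for start in edges:                      # propagate over each component
        if start in types:
            continue
        types[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v, w in edges[u]:
                if v not in types:
                    types[v] = types[u] + w
                    stack.append(v)
                elif types[v] != types[u] + w:
                    return None              # conflicting constraints: unstratified
    return types

print(stratify([("x", "y"), ("y", "z")], []))  # {'x': 0, 'y': 1, 'z': 2}: stratified
print(stratify([("x", "x")], []))              # None: 'x ∈ x' cannot be stratified
```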
Finite axiomatization
NF can be finitely axiomatized. One advantage of such a finite axiomatization is that it eliminates the notion of stratification. The axioms in a finite axiomatization correspond to natural basic constructions, whereas stratified comprehension is powerful but not necessarily intuitive. In his introductory book, Holmes opted to take the finite axiomatization as basic, and prove stratified comprehension as a theorem. The precise set of axioms can vary, but includes most of the following, with the others provable as theorems:
Extensionality: If A and B are sets, and for each object x, x is an element of A if and only if x is an element of B, then A = B. This can also be viewed as defining the equality symbol.
Singleton: For every object x, the set {x} exists, and is called the singleton of x.
Cartesian Product: For any sets A, B, the set A × B, called the Cartesian product of A and B, exists. This can be restricted to the existence of one of the cross products A × V or V × B.
Converse: For each relation R, the set R⁻¹ exists; observe that x R⁻¹ y exactly if y R x.
Singleton Image: For any relation R, the set Rι = {({x}, {y}) | x R y}, called the singleton image of R, exists.
Domain: If R is a relation, the set dom(R), called the domain of R, exists. This can be defined using the operation of type lowering.
Inclusion: The set [⊆] = {(x, y) | x ⊆ y} exists. Equivalently, we may consider the set [∈] = {({x}, y) | x ∈ y}.
Complement: For each set A, the set Aᶜ = {x | x ∉ A}, called the complement of A, exists.
(Boolean) Union: If A and B are sets, the set A ∪ B, called the (Boolean) union of A and B, exists.
Universal Set: V = {x | x = x} exists. It is straightforward that for any set x, x ∈ V.
Ordered Pair: For each a, b, the ordered pair of a and b, (a, b), exists; (a, b) = (c, d) exactly if a = c and b = d. This and larger tuples can be a definition rather than an axiom if an ordered pair construction is used.
Projections: The sets π₁ = {((x, y), x) | x, y ∈ V} and π₂ = {((x, y), y) | x, y ∈ V} exist (these are the relations which an ordered pair has to its first and second terms, which are technically referred to as its projections).
Diagonal: The set [=] = {(x, x) | x ∈ V} exists, called the equality relation.
Set Union: If A is a set all of whose elements are sets, the set ⋃A = {x | ∃y (x ∈ y ∧ y ∈ A)}, called the (set) union of A, exists.
Relative Product: If R, S are relations, the set R|S = {(x, z) | ∃y (x R y ∧ y S z)}, called the relative product of R and S, exists.
Anti-intersection: x ⊼ y = {z | z ∉ x ∨ z ∉ y} exists. This is equivalent to complement and union together, with xᶜ = x ⊼ x and x ∪ y = xᶜ ⊼ yᶜ.
Cardinal one: The set of all singletons, 1 = {{x} | x ∈ V}, exists.
Tuple Insertions: For a relation R, the sets I₂(R) = {(z, w, t) | (z, t) ∈ R} and I₃(R) = {(z, w, t) | (z, w) ∈ R} exist.
Type lowering: For any set S, the set TL(S) = {y | for all z, (y, {z}) ∈ S} exists; its elements are one type lower than those of a relation in S's position.
Typed Set Theory
New Foundations is closely related to Russellian unramified typed set theory (TST), a streamlined version of the theory of types of Principia Mathematica with a linear hierarchy of types. In this many-sorted theory, each variable and set is assigned a type. It is customary to write the type indices as superscripts: xⁿ denotes a variable of type n. Type 0 consists of individuals otherwise undescribed. For each (meta-) natural number n, type n+1 objects are sets of type n objects; objects connected by identity have equal types and sets of type n have members of type n−1. The axioms of TST are extensionality, on sets of the same (positive) type, and comprehension, namely that if φ(xⁿ) is a formula, then the set {xⁿ | φ(xⁿ)}ⁿ⁺¹ exists. In other words, given any formula φ(xⁿ), the formula ∃Aⁿ⁺¹ ∀xⁿ (xⁿ ∈ Aⁿ⁺¹ ↔ φ(xⁿ)) is an axiom, where Aⁿ⁺¹ represents the set {xⁿ | φ(xⁿ)}ⁿ⁺¹ and is not free in φ(xⁿ). This type theory is much less complicated than the one first set out in the Principia Mathematica, which included types for relations whose arguments were not necessarily all of the same types.
There is a correspondence between New Foundations and TST in terms of adding or erasing type annotations. In NF's comprehension schema, a formula is stratified exactly when the formula can be assigned types according to the rules of TST. This can be extended to map every NF formula to a set of corresponding TST formulas with various type index annotations. The mapping is one-to-many because TST has many similar formulas. For example, raising every type index in a TST formula by 1 results in a new, valid TST formula.
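As an illustration of this correspondence (our own example, using the conventions just described):

```latex
% Erasing type indices sends TST formulas to NF formulas; conversely, a
% stratified NF formula admits many TST annotations, since raising every
% index by one preserves validity.
\{\, x \mid \exists y\,(x \in y) \,\}
\quad\rightsquigarrow\quad
\{\, x^{0} \mid \exists y^{1}\,(x^{0} \in y^{1}) \,\}^{1},\;
\{\, x^{1} \mid \exists y^{2}\,(x^{1} \in y^{2}) \,\}^{2},\ \ldots
```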
Tangled Type Theory
Tangled Type Theory (TTT) is an extension of TST where each variable is typed by an ordinal rather than a natural number. The well-formed atomic formulas are xᵅ = yᵅ and xᵅ ∈ yᵝ where α < β. The axioms of TTT are those of TST where each variable of type i is mapped to a variable of type s(i), where s is an increasing function.
TTT is considered a "weird" theory because each type is related to each lower type in the same way. For example, type 2 sets have both type 1 members and type 0 members, and extensionality axioms assert that a type 2 set is determined uniquely by either its type 1 members or its type 0 members. Whereas TST has natural models where each type n+1 is the power set of type n, in TTT each type is being interpreted as the power set of each lower type simultaneously. Regardless, a model of NF can be easily converted to a model of TTT, because in NF all the types are already one and the same. Conversely, with a more complicated argument, it can also be shown that the consistency of TTT implies the consistency of NF.
NFU and other variants
NF with urelements (NFU) is an important variant of NF due to Jensen and clarified by Holmes. Urelements are objects that are not sets and do not contain any elements, but can be contained in sets. One of the simplest forms of axiomatization of NFU regards urelements as multiple, unequal empty sets, thus weakening the extensionality axiom of NF to:
Weak extensionality: Two non-empty objects with the same elements are the same object; formally, ∀x∀y (∃z (z ∈ x) ∧ ∀z (z ∈ x ↔ z ∈ y) → x = y).
In this axiomatization, the comprehension schema is unchanged, although the set {x | φ} will not be unique if it is empty (i.e. if φ is unsatisfiable).
However, for ease of use, it is more convenient to have a unique, "canonical" empty set. This can be done by introducing a sethood predicate to distinguish sets from atoms. The axioms are then:
Sets: Only sets have members, i.e. ∀x∀y (y ∈ x → set(x)).
Extensionality: Two sets with the same elements are the same set, i.e. ∀x∀y (set(x) ∧ set(y) ∧ ∀z (z ∈ x ↔ z ∈ y) → x = y).
Comprehension: The set {x | φ} exists for each stratified formula φ, i.e. ∃A (set(A) ∧ ∀x (x ∈ A ↔ φ)).
NF3 is the fragment of NF with full extensionality (no urelements) and those instances of comprehension which can be stratified using at most three types. NF4 is the same theory as NF.
Mathematical Logic (ML) is an extension of NF that includes proper classes as well as sets. ML was proposed by Quine and revised by Hao Wang, who proved that NF and the revised ML are equiconsistent.
Constructions
This section discusses some problematic constructions in NF. For a further development of mathematics in NFU, with a comparison to the development of the same in ZFC, see implementation of mathematics in set theory.
Ordered pairs
Relations and functions are defined in TST (and in NF and NFU) as sets of ordered pairs in the usual way. For purposes of stratification, it is desirable that a relation or function is merely one type higher than the type of the members of its field. This requires defining the ordered pair so that its type is the same as that of its arguments (resulting in a type-level ordered pair). The usual definition of the ordered pair, namely (a, b) = {{a}, {a, b}}, results in a type two higher than the type of its arguments a and b. Hence for purposes of determining stratification, a function is three types higher than the members of its field. NF and related theories usually employ Quine's set-theoretic definition of the ordered pair, which yields a type-level ordered pair. However, Quine's definition relies on set operations on each of the elements a and b, and therefore does not directly work in NFU.
As an alternative approach, Holmes takes the ordered pair (a, b) as a primitive notion, as well as its left and right projections π₁ and π₂, i.e., functions such that π₁((a, b)) = a and π₂((a, b)) = b (in Holmes' axiomatization of NFU, the comprehension schema that asserts the existence of {x | φ} for any stratified formula φ is considered a theorem and only proved later, so expressions like π₁ = {((a, b), a) | a, b ∈ V} are not considered proper definitions). Fortunately, whether the ordered pair is type-level by definition or by assumption (i.e., taken as primitive) usually does not matter.
Natural numbers and the axiom of infinity
The usual form of the axiom of infinity is based on the von Neumann construction of the natural numbers, which is not suitable for NF, since the description of the successor operation (and many other aspects of von Neumann numerals) is necessarily unstratified. The usual form of natural numbers used in NF follows Frege's definition, i.e., the natural number n is represented by the set of all sets with n elements. Under this definition, 0 is easily defined as {∅}, and the successor operation can be defined in a stratified way: S(A) = {a ∪ {x} | a ∈ A ∧ x ∉ a}. Under this definition, one can write down a statement analogous to the usual form of the axiom of infinity. However, that statement would be trivially true, since the universal set would be an inductive set.
Since inductive sets always exist, the set of natural numbers N can be defined as the intersection of all inductive sets. This definition enables mathematical induction for stratified statements φ(n), because the set {n ∈ N | φ(n)} can be constructed, and when φ(n) satisfies the conditions for mathematical induction, this set is an inductive set.
Finite sets can then be defined as sets that belong to a natural number. However, it is not trivial to prove that V is not a "finite set", i.e., that the size of the universe |V| is not a natural number. Suppose that |V| ∈ N. Then |V| + 1 = S(|V|) = ∅ (it can be shown inductively that a finite set is not equinumerous with any of its proper subsets, so V would be the only set with |V| elements, and no element could be added to it), |V| + 2 = ∅, and each subsequent natural number would be ∅ too, causing arithmetic to break down. To prevent this, one can introduce the axiom of infinity for NF: ∅ ∉ N.
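Since Frege naturals are just sets of sets, they are easy to model concretely. The following toy sketch (our own illustration, not NF itself) builds them inside a deliberately finite universe, making the breakdown just described directly visible: the successor of the largest number comes out empty.

```python
# Frege naturals over the finite universe {0, 1, 2}: the number n is the set of
# all n-element subsets. Because this universe is finite, arithmetic breaks down
# above |V| = 3, mirroring the argument in the text.
UNIVERSE = frozenset({0, 1, 2})

def successor(A):
    """S(A) = { a ∪ {x} | a ∈ A, x ∉ a }, with x drawn from the universe."""
    return frozenset(a | {x} for a in A for x in UNIVERSE - a)

n = frozenset({frozenset()})          # 0 = {∅}
for k in range(5):
    print(k, [sorted(s) for s in n])  # (listing order may vary)
    n = successor(n)
# k = 3 prints the single 3-element subset [[0, 1, 2]];
# k = 4 prints []: S(3) = ∅, so 4 = 5 = ... = ∅ in this finite universe.
```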
It may intuitively seem that one should be able to prove Infinity in NF(U) by constructing any "externally" infinite sequence of sets, such as ∅, {∅}, {{∅}}, …. However, such a sequence could only be constructed through unstratified constructions (evidenced by the fact that TST itself has finite models), so such a proof could not be carried out in NF(U). In fact, Infinity is logically independent of NFU: There exist models of NFU where |V| is a non-standard natural number. In such models, mathematical induction can prove statements about |V|, making it impossible to "distinguish" |V| from standard natural numbers.
However, there are some cases where Infinity can be proven (in which cases it may be referred to as the theorem of infinity):
In NF (without urelements), Specker has shown that the axiom of choice is false. Since it can be proved through induction that every finite set has a choice function (a stratified condition), it follows that V is infinite.
In NFU with axioms asserting the existence of a type-level ordered pair, V is equinumerous with its proper subset V × {∅}, which implies Infinity. Conversely, NFU + Infinity + Choice proves the existence of a type-level ordered pair. NFU + Infinity interprets NFU + "there is a type-level ordered pair" (they are not quite the same theory, but the differences are inessential).
Stronger axioms of infinity exist, such as that the set of natural numbers is a strongly Cantorian set, or NFUM = NFU + Infinity + Large Ordinals + Small Ordinals which is equivalent to Morse–Kelley set theory plus a predicate on proper classes which is a κ-complete nonprincipal ultrafilter on the proper class ordinal κ.
Large sets
NF (and NFU + Infinity + Choice, described below and known consistent) allow the construction of two kinds of sets that ZFC and its proper extensions disallow because they are "too large" (some set theories admit these entities under the heading of proper classes):
The universal set V. Because x = x is a stratified formula, the universal set V = {x | x = x} exists by Comprehension. An immediate consequence is that all sets have complements, and the entire set-theoretic universe under NF has a Boolean structure.
Cardinal and ordinal numbers. In NF (and TST), the set of all sets having n elements (the circularity here is only apparent) exists. Hence Frege's definition of the cardinal numbers works in NF and NFU: a cardinal number is an equivalence class of sets under the relation of equinumerosity: the sets A and B are equinumerous if there exists a bijection between them, in which case we write A ~ B. Likewise, an ordinal number is an equivalence class of well-ordered sets.
Cartesian closure
The category whose objects are the sets of NF and whose arrows are the functions between those sets is not Cartesian closed. Since NF lacks Cartesian closure, not every function curries as one might intuitively expect, and NF is not a topos.
Resolution of set-theoretic paradoxes
NF may seem to run afoul of problems similar to those in naive set theory, but this is not the case. For example, the existence of the impossible Russell class {x | x ∉ x} is not an axiom of NF, because x ∉ x cannot be stratified. NF steers clear of the three well-known paradoxes of set theory in drastically different ways than how those paradoxes are resolved in well-founded set theories such as ZFC. Many useful concepts that are unique to NF and its variants can be developed from the resolution of those paradoxes.
Russell's paradox
The resolution of Russell's paradox is trivial: x ∉ x is not a stratified formula, so the existence of {x | x ∉ x} is not asserted by any instance of Comprehension. Quine said that he constructed NF with this paradox uppermost in mind.
Cantor's paradox and Cantorian sets
Cantor's paradox boils down to the question of whether there exists a largest cardinal number, or equivalently, whether there exists a set with the largest cardinality. In NF, the universal set V is obviously a set with the largest cardinality. However, Cantor's theorem says (given ZFC) that the power set P(A) of any set A is larger than A (there can be no injection (one-to-one map) from P(A) into A), which seems to imply a contradiction when A = V.
Of course there is an injection from P(V) into V since V is the universal set, so it must be that Cantor's theorem (in its original form) does not hold in NF. Indeed, the proof of Cantor's theorem uses the diagonalization argument by considering the set B = {x ∈ A | x ∉ f(x)}. In NF, x and f(x) should be assigned the same type, so the definition of B is not stratified. Indeed, if f is the trivial injection x ↦ x, then B is the same (ill-defined) set as in Russell's paradox.
This failure is not surprising since |A| < |P(A)| makes no sense in TST: the type of P(A) is one higher than the type of A. In NF, |A| < |P(A)| is a syntactical sentence due to the conflation of all the types, but any general proof involving Comprehension is unlikely to work.
The usual way to correct such a type problem is to replace A with P₁(A), the set of one-element subsets of A. Indeed, the correctly typed version of Cantor's theorem, |P₁(A)| < |P(A)|, is a theorem in TST (thanks to the diagonalization argument), and thus also a theorem in NF. In particular, |P₁(V)| < |P(V)|: there are fewer one-element sets than sets (and so fewer one-element sets than general objects, if we are in NFU). The "obvious" bijection x ↦ {x} from the universe to the one-element sets is not a set; it is not a set because its definition is unstratified. Note that in all models of NFU + Choice it is the case that |P₁(V)| < |P(V)| < |V|; Choice allows one not only to prove that there are urelements but that there are many cardinals between |P(V)| and |V|.
However, unlike in TST, |A| = |P₁(A)| is a syntactical sentence in NF(U), and as shown above one can talk about its truth value for specific values of A (e.g. it is false when A = V). A set A which satisfies the intuitively appealing |A| = |P₁(A)| is said to be Cantorian: a Cantorian set satisfies the usual form of Cantor's theorem. A set A which satisfies the further condition that ι↾A, the restriction of the singleton map x ↦ {x} to A, is a set is not only Cantorian but strongly Cantorian.
Burali-Forti paradox and the T operation
The Burali-Forti paradox of the largest ordinal number is resolved in the opposite way: In NF, having access to the set of ordinals does not allow one to construct a "largest ordinal number". One can construct the ordinal Ω that corresponds to the natural well-ordering of all ordinals, but that does not mean that Ω is larger than all those ordinals.
To formalize the Burali-Forti paradox in NF, it is necessary to first formalize the concept of ordinal numbers. In NF, ordinals are defined (in the same way as in naive set theory) as equivalence classes of well-orderings under isomorphism. This is a stratified definition, so the set of ordinals can be defined with no problem. Transfinite induction works on stratified statements, which allows one to prove that the natural ordering of ordinals (α ≤ β iff there exist well-orderings W ∈ α and W′ ∈ β such that W′ is a continuation of W) is a well-ordering of the ordinals. By definition of ordinals, this well-ordering also belongs to an ordinal Ω. In naive set theory, one would go on to prove by transfinite induction that each ordinal α is the order type of the natural order on the ordinals less than α, which would imply a contradiction since by definition Ω is the order type of all ordinals, not any proper initial segment of them.
However, the statement "α is the order type of the natural order on the ordinals less than α" is not stratified, so the transfinite induction argument does not work in NF. In fact, "the order type of the natural order on the ordinals less than α" is at least two types higher than α: The order relation {(β, γ) | β ≤ γ < α} is one type higher than α assuming that (β, γ) is a type-level ordered pair, and the order type (equivalence class) is one type higher than the relation. If (β, γ) is the usual Kuratowski ordered pair (two types higher than β and γ), then the order type would be four types higher than α.
To correct such a type problem, one needs the T operation, T(α), that "raises the type" of an ordinal α, just like how P₁(A) "raises the type" of the set A. The T operation is defined as follows: If W ∈ α, then T(α) is the order type of the order Wι = {({x}, {y}) | (x, y) ∈ W}. Now the lemma on order types may be restated in a stratified manner:
The order type of the natural order on the ordinals less than α is T²(α) or T⁴(α), depending on which ordered pair is used.
Both versions of this statement can be proven by transfinite induction; we assume the type-level pair hereinafter. This means that T²(α) is always less than Ω, the order type of all ordinals. In particular, T²(Ω) < Ω.
Another (stratified) statement that can be proven by transfinite induction is that T is a strictly monotone (order-preserving) operation on the ordinals, i.e., T(α) < T(β) iff α < β. Hence the T operation is not a function: The collection of ordinals α with T(α) < α cannot have a least member (if T(α) < α, then monotonicity gives T(T(α)) < T(α), so T(α) is a smaller member of the collection), and thus cannot be a set. More concretely, the monotonicity of T implies Ω > T²(Ω) > T⁴(Ω) > …, a "descending sequence" in the ordinals which also cannot be a set.
One might assert that this result shows that no model of NF(U) is "standard", since the ordinals in any model of NFU are externally not well-ordered. This is a philosophical question, not a question of what can be proved within the formal theory. Note that even within NFU it can be proven that any set model of NFU has non-well-ordered "ordinals"; NFU does not conclude that the universe is a model of NFU, despite being a set, because the membership relation is not a set relation.
Consistency
Some mathematicians have questioned the consistency of NF, partly because it is not clear why it avoids the known paradoxes. A key issue was that Specker proved NF combined with the Axiom of Choice is inconsistent. The proof is complex and involves T-operations. However, since 2010, Holmes has claimed to have shown that NF is consistent relative to the consistency of standard set theory (ZFC). In 2024, Sky Wilshaw confirmed Holmes' proof using the Lean proof assistant.
Although NFU resolves the paradoxes similarly to NF, it has a much simpler consistency proof. The proof can be formalized within Peano Arithmetic (PA), a theory weaker than ZF that most mathematicians accept without question. This does not conflict with Gödel's second incompleteness theorem because NFU does not include the Axiom of Infinity and therefore PA cannot be modeled in NFU, avoiding a contradiction. PA also proves that NFU with Infinity and NFU with both Infinity and Choice are equiconsistent with TST with Infinity and TST with both Infinity and Choice, respectively. Therefore, a stronger theory like ZFC, which proves the consistency of TST, will also prove the consistency of NFU with these additions. In simpler terms, NFU is generally seen as weaker than NF because, in NFU, the collection of all sets (the power set of the universe) can be smaller than the universe itself, especially when urelements are included, as required by NFU with Choice.
Models of NFU
Jensen's proof gives a fairly simple method for producing models of NFU in bulk. Using well-known techniques of model theory, one can construct a nonstandard model of Zermelo set theory (nothing nearly as strong as full ZFC is needed for the basic technique) on which there is an external automorphism j (not a set of the model) which moves a rank V_α of the cumulative hierarchy of sets. We may suppose without loss of generality that j(α) < α.
The domain of the model of NFU will be the nonstandard rank V_α. The basic idea is that the automorphism j codes the "power set" V_{α+1} of our "universe" V_α into its externally isomorphic copy V_{j(α)+1} inside our "universe." The remaining objects not coding subsets of the universe are treated as urelements. Formally, the membership relation of the model of NFU will be x ∈_NFU y ≡ j(x) ∈ y ∧ y ∈ V_{j(α)+1}.
It may now be proved that this actually is a model of NFU. Let φ be a stratified formula in the language of NFU. Choose an assignment of types to all variables in the formula which witnesses the fact that it is stratified. Choose a natural number N greater than all types assigned to variables by this stratification. Expand the formula φ into a formula φ₁ in the language of the nonstandard model of Zermelo set theory with automorphism j, using the definition of membership in the model of NFU. Application of any power of j to both sides of an equation or membership statement preserves its truth value because j is an automorphism. Make such an application to each atomic formula in φ₁ in such a way that each variable x assigned type i occurs with exactly N−i applications of j. This is possible thanks to the form of the atomic membership statements derived from NFU membership statements, and to the formula being stratified. Each quantified sentence (∀x ∈ V_α . ψ(j^{N−i}(x))) can be converted to the form (∀x ∈ j^{N−i}(V_α) . ψ(x)) (and similarly for existential quantifiers). Carry out this transformation everywhere and obtain a formula φ₂ in which j is never applied to a bound variable. Choose any free variable y in φ₂ assigned type i. Apply j^{i−N} uniformly to the entire formula to obtain a formula φ₃ in which y appears without any application of j. Now {y ∈ V_α | φ₃} exists (because j appears applied only to free variables and constants), belongs to V_{α+1}, and contains exactly those y which satisfy the original formula φ in the model of NFU. j({y ∈ V_α | φ₃}) has this extension in the model of NFU (the application of j corrects for the different definition of membership in the model of NFU). This establishes that Stratified Comprehension holds in the model of NFU.
To see that weak Extensionality holds is straightforward: each nonempty element of V_{j(α)+1} inherits a unique extension from the nonstandard model, the empty set inherits its usual extension as well, and all other objects are urelements.
If α is a natural number n, one gets a model of NFU which claims that the universe is finite (it is externally infinite, of course). If α is infinite and Choice holds in the nonstandard model of ZFC, one obtains a model of NFU + Infinity + Choice.
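The "twisted membership" trick at the heart of this construction can be previewed in a toy setting. The sketch below is emphatically not Jensen's construction (that requires a nonstandard model carrying a genuine automorphism); it only illustrates, with a Rieger-Bernays-style permutation of a few hereditarily finite sets, how redefining membership as x E y iff j(x) ∈ y changes which extensions objects have. The permutation and the tiny universe are illustrative assumptions.

```python
# Toy illustration of "twisted membership" on hereditarily finite sets.
# NOT Jensen's construction: here j is just an arbitrary permutation,
# not an automorphism moving a rank, but the redefinition of membership
# is of the same shape as in the NFU model above.

EMPTY = frozenset()
S_EMPTY = frozenset({EMPTY})          # the set {∅}

def j(x):
    """A permutation of the universe: swap ∅ and {∅}, fix everything else."""
    if x == EMPTY:
        return S_EMPTY
    if x == S_EMPTY:
        return EMPTY
    return x

def member(x, y):
    """Twisted membership: x E y iff j(x) ∈ y."""
    return j(x) in y

# Under the twisted relation, extensions change:
print(member(S_EMPTY, S_EMPTY))   # True:  j({∅}) = ∅ and ∅ ∈ {∅}
print(member(EMPTY, S_EMPTY))     # False: j(∅) = {∅} and {∅} ∉ {∅}
```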
Self-sufficiency of mathematical foundations in NFU
For philosophical reasons, it is important to note that it is not necessary to work in ZFC or any related system to carry out this proof. A common argument against the use of NFU as a foundation for mathematics is that the reasons for relying on it have to do with the intuition that ZFC is correct. It is sufficient to accept TST (in fact TSTU). In outline: take the type theory TSTU (allowing urelements in each positive type) as a metatheory and consider the theory of set models of TSTU in TSTU (these models will be sequences of sets T_i (all of the same type in the metatheory) with embeddings of each P(T_i) into P₁(T_{i+1}), coding embeddings of the power set of T_i into T_{i+1} in a type-respecting manner). Given an embedding of T₀ into T₁ (identifying elements of the base "type" with subsets of the base type), embeddings may be defined from each "type" into its successor in a natural way. This can be generalized to transfinite sequences T_α with care.
Note that the construction of such sequences of sets is limited by the size of the type in which they are being constructed; this prevents TSTU from proving its own consistency (TSTU + Infinity can prove the consistency of TSTU; to prove the consistency of TSTU + Infinity one needs a type containing a set of cardinality ℶ_ω, which cannot be proved to exist in TSTU + Infinity without stronger assumptions). Now the same results of model theory can be used to build a model of NFU and verify that it is a model of NFU in much the same way, with the T_α's being used in place of V_α in the usual construction. The final move is to observe that since NFU is consistent, we can drop the use of absolute types in our metatheory, bootstrapping the metatheory from TSTU to NFU.
Facts about the automorphism j
The automorphism j of a model of this kind is closely related to certain natural operations in NFU. For example, if W is a well-ordering in the nonstandard model (we suppose here that we use Kuratowski pairs so that the coding of functions in the two theories will agree to some extent) which is also a well-ordering in NFU (all well-orderings of NFU are well-orderings in the nonstandard model of Zermelo set theory, but not vice versa, due to the formation of urelements in the construction of the model), and W has type α in NFU, then j(W) will be a well-ordering of type T(α) in NFU.
In fact, j is coded by a function in the model of NFU. The function in the nonstandard model which sends the singleton of any element of V_α to its sole element becomes in NFU a function which sends each singleton {x}, where x is any object in the universe, to j(x). Call this function Endo; it has the following properties: Endo is an injection from the set of singletons into the set of sets, with the property that Endo( {x} ) = {Endo( {y} ) | y ∈ x} for each set x. This function can define a type level "membership" relation on the universe, one reproducing the membership relation of the original nonstandard model.
History
In 1914, Norbert Wiener showed how to code the ordered pair as a set of sets, making it possible to eliminate the relation types of Principia Mathematica in favor of the linear hierarchy of sets in TST. The usual definition of the ordered pair was first proposed by Kuratowski in 1921. Willard Van Orman Quine first proposed NF as a way to avoid the "disagreeable consequences" of TST in a 1937 article titled New Foundations for Mathematical Logic; hence the name. Quine extended the theory in his book Mathematical Logic, whose first edition was published in 1940. In the book, Quine introduced the system "Mathematical Logic" or "ML", an extension of NF that included proper classes as well as sets. The first edition's set theory married NF to the proper classes of NBG set theory and included an axiom schema of unrestricted comprehension for proper classes. However, J. Barkley Rosser proved that the system was subject to the Burali-Forti paradox. Hao Wang showed how to amend Quine's axioms for ML so as to avoid this problem. Quine included the resulting axiomatization in the second and final edition, published in 1951.
In 1944, Theodore Hailperin showed that Comprehension is equivalent to a finite conjunction of its instances. In 1953, Ernst Specker showed that the axiom of choice is false in NF (without urelements). In 1969, Jensen showed that adding urelements to NF yields a theory (NFU) that is provably consistent. That same year, Grishin proved NF3 consistent. Specker additionally showed that NF is equiconsistent with TST plus the axiom scheme of "typical ambiguity". NF is also equiconsistent with TST augmented with a "type shifting automorphism", an operation (external to the theory) which raises type by one, mapping each type onto the next higher type, and preserves equality and membership relations.
In 1983, Marcel Crabbé proved consistent a system he called NFI, whose axioms are unrestricted extensionality and those instances of comprehension in which no variable is assigned a type higher than that of the set asserted to exist. This is a predicativity restriction, though NFI is not a predicative theory: it admits enough impredicativity to define the set of natural numbers (defined as the intersection of all inductive sets; note that the inductive sets quantified over are of the same type as the set of natural numbers being defined). Crabbé also discussed a subtheory of NFI, in which only parameters (free variables) are allowed to have the type of the set asserted to exist by an instance of comprehension. He called the result "predicative NF" (NFP); it is, of course, doubtful whether any theory with a self-membered universe is truly predicative. Holmes has shown that NFP has the same consistency strength as the predicative theory of types of Principia Mathematica without the axiom of reducibility.
The Metamath database implemented Hailperin's finite axiomatization for New Foundations. Since 2015, several candidate proofs by Randall Holmes of the consistency of NF relative to ZF were available both on arXiv and on the logician's home page. His proofs were based on demonstrating the equiconsistency of a "weird" variant of TST, "tangled type theory with λ-types" (TTTλ), with NF, and then showing that TTTλ is consistent relative to ZF with atoms but without choice (ZFA) by constructing a class model of ZFA which includes "tangled webs of cardinals" in ZF with atoms and choice (ZFA+C). These proofs were "difficult to read, insanely involved, and involve the sort of elaborate bookkeeping which makes it easy to introduce errors". In 2024, Sky Wilshaw formalized a version of Holmes' proof using the proof assistant Lean, finally resolving the question of NF's consistency. Timothy Chow characterized Wilshaw's work as showing that the reluctance of peer reviewers to engage with a difficult to understand proof can be addressed with the help of proof assistants.
See also
Alternative set theory
Axiomatic set theory
Implementation of mathematics in set theory
Positive set theory
Set-theoretic definition of natural numbers
Notes
References
Quine, W. V., 1980, "New Foundations for Mathematical Logic" in From a Logical Point of View, 2nd ed., revised. Harvard Univ. Press: 80–101. The definitive version of where it all began, namely Quine's 1937 paper in the American Mathematical Monthly.
External links
"Enriched Stratified systems for the Foundations of Category Theory" by Solomon Feferman (2011)
Stanford Encyclopedia of Philosophy:
Quine's New Foundations — by Thomas Forster.
Alternative axiomatic set theories — by Randall Holmes.
Randall Holmes: New Foundations Home Page.
Randall Holmes: Bibliography of Set Theory with a Universal Set.
Randall Holmes: A new pass at the NF consistency proof
Systems of set theory
Type theory
Urelements
Willard Van Orman Quine | New Foundations | Mathematics | 7,318 |
22,653,593 | https://en.wikipedia.org/wiki/Special%20classes%20of%20semigroups | In mathematics, a semigroup is a nonempty set together with an associative binary operation. A special class of semigroups is a class of semigroups satisfying additional properties or conditions. Thus the class of commutative semigroups consists of all those semigroups in which the binary operation satisfies the commutativity property that ab = ba for all elements a and b in the semigroup.
The class of finite semigroups consists of those semigroups for which the underlying set has finite cardinality. Members of the class of Brandt semigroups are required to satisfy not just one condition but a set of additional properties. A large collection of special classes of semigroups have been defined though not all of them have been studied equally intensively.
In the algebraic theory of semigroups, in constructing special classes, attention is focused only on those properties, restrictions and conditions which can be expressed in terms of the binary operations in the semigroups and occasionally on the cardinality and similar properties of subsets of the underlying set. The underlying sets are not assumed to carry any other mathematical structures like order or topology.
As in any algebraic theory, one of the main problems of the theory of semigroups is the classification of all semigroups and a complete description of their structure. In the case of semigroups, since the binary operation is required to satisfy only the associativity property, the problem of classification is considered extremely difficult. Descriptions of structures have been obtained for certain special classes of semigroups. For example, the structure of the sets of idempotents of regular semigroups is completely known. Structure descriptions are presented in terms of better known types of semigroups. The best known type of semigroup is the group.
A (necessarily incomplete) list of various special classes of semigroups is presented below. To the extent possible the defining properties are formulated in terms of the binary operations in the semigroups. The references point to the locations from where the defining properties are sourced.
Notations
In describing the defining properties of the various special classes of semigroups, the following notational conventions are adopted.
For example, the definition xab = xba should be read as:
There exists an element x of the semigroup such that, for all elements a and b in the semigroup, xab and xba are equal.
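As a concrete reading of such definitions, the brute-force sketch below checks associativity and the example property on a small finite structure. It is illustrative Python, not drawn from any source on semigroup theory, and the helper names are made up for this sketch.

```python
from itertools import product

def is_semigroup(elems, op):
    """Check associativity: (ab)c == a(bc) for all a, b, c."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(elems, repeat=3))

def satisfies_xab_eq_xba(elems, op):
    """Read as: there EXISTS x such that FOR ALL a, b: xab == xba."""
    return any(all(op(op(x, a), b) == op(op(x, b), a)
                   for a, b in product(elems, repeat=2))
               for x in elems)

# Example: the integers mod 4 under multiplication form a commutative
# semigroup, so any x witnesses the property.
Z4 = range(4)
mult = lambda a, b: (a * b) % 4
print(is_semigroup(Z4, mult))            # True
print(satisfies_xab_eq_xba(Z4, mult))    # True
```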
List of special classes of semigroups
The third column states whether this set of semigroups forms a variety, and whether the set of finite semigroups of this special class forms a variety of finite semigroups. Note that if this set is a variety, its set of finite elements is automatically a variety of finite semigroups.
References
Algebraic structures
Semigroup theory | Special classes of semigroups | Mathematics | 564 |
36,263,914 | https://en.wikipedia.org/wiki/Radioplayer | Radioplayer is a radio technology platform, owned by UK radio broadcasters and operated under licence in some other countries. It operates an internet radio web tuner, a set of mobile phone apps, an in-car adaptor, and a growing range of integrations with other connected devices and platforms.
Radioplayer is operated by UK Radioplayer Ltd which is a not-for-profit organisation owned by UK radio broadcasters. Initial shareholders were the BBC, Global Radio, GMG Radio, Absolute Radio and RadioCentre. After consolidation in the radio market, current shareholders are the BBC, Global Radio, Bauer Media Group and RadioCentre.
History
Launched in the UK on 31 March 2011, Radioplayer set out to offer a simple and accessible way to listen to radio via the internet. It contained 157 stations at launch.
Michael Hill led the project from March 2009, initially working internally at the BBC for Tim Davie, then Director of BBC Audio & Music, and was made Managing Director of UK Radioplayer Ltd on 28 July 2010.
At launch, Radioplayer was a simple and straightforward Flash-based radio player, linked to by radio stations on their own websites. The player included searching and bookmarking across all UK radio station content.
On 5 October 2012, Radioplayer launched a mobile app on iOS phones with an Android version following shortly afterwards. The apps are unavailable for download outside the United Kingdom. This was followed by a tablet app on 25 September 2013.
The apps also support Android Wear, Android Auto, Smart Device Link, Apple Watch and Apple CarPlay. They are also compatible with Chromecast and Airplay.
In September 2016, Radioplayer announced it had been chosen by Amazon to integrate with their new voice-controlled 'Echo' device, ahead of its UK launch. In July 2017, Radioplayer integrated with the Sonos and Bose multi-room speaker platforms.
UK Radioplayer currently contains around 500 UK stations, from Ofcom-licensed broadcasters. Online-only 'sister-stations' can also be added, but only by broadcasters with Ofcom licences which have been on the platform for over a year.
Radioplayer Car
Radioplayer Car was announced in September 2014 as a hybrid radio receiver that switches between FM, DAB and streaming to find the strongest signal. Speaking in Oslo in June 2015, Michael Hill said that he hoped to launch the product in the UK and Norway during the summer of 2015.
In February 2017, Radioplayer Car was launched. It was marketed as the world’s first voice-controlled hybrid radio adaptor for car stereos.
A small box, fitted behind the dashboard, links to the auxiliary input on an existing car radio and connects wirelessly via Bluetooth to an app on the driver's smartphone. The adaptor enabled drivers to listen to their own smartphone music collections using Bluetooth, take hands-free calls, listen to inbound text messages and receive instant audio travel news, customised by GPS to their location and direction of travel.
The hardware was manufactured under licence by car audio interfaces supplier Connects2, and Hyde Park Corner was promoted as the preferred installer of the audio equipment.
There were several spin-off benefits of the Radioplayer Car project, including the creation of the hybrid radio metadata API for cars, known as the 'WRAPI' (Worldwide Radioplayer API).
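For illustration only, the core idea of a hybrid radio adaptor, playing whichever delivery path currently has the strongest usable signal, can be sketched as below. The source names, quality scale and threshold are hypothetical, not Radioplayer's actual interface.

```python
# Hypothetical sketch of hybrid-radio source selection: choose the
# best available path (FM, DAB, streaming) by current signal quality.

def pick_source(signal_strengths, min_usable=0.2):
    """Return the best available source, or None if all are unusable.

    signal_strengths: dict mapping source name -> quality in [0, 1].
    """
    usable = {src: q for src, q in signal_strengths.items() if q >= min_usable}
    if not usable:
        return None
    return max(usable, key=usable.get)

print(pick_source({"FM": 0.7, "DAB": 0.9, "stream": 0.5}))   # DAB
print(pick_source({"FM": 0.1, "DAB": 0.05, "stream": 0.6}))  # stream
```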
International
Through a separate company called Radioplayer Worldwide, Radioplayer technology is licensed to a number of different territories.
Notes
References
External links
UK Radioplayer website
Radioplayer Worldwide
iPhone and iPad app
Android app
Amazon Kindle Fire app
Mobile applications
Radio technology
Android Auto software | Radioplayer | Technology,Engineering | 743 |
235,893 | https://en.wikipedia.org/wiki/Submersible%20pump | A submersible pump (or electric submersible pump (ESP) is a device which has a hermetically sealed motor close-coupled to the pump body. The whole assembly is submerged in the fluid to be pumped. The main advantage of this type of pump is that it prevents pump cavitation, a problem associated with a high elevation difference between the pump and the fluid surface. Submersible pumps push fluid to the surface, rather than jet pumps, which create a vacuum and rely upon atmospheric pressure. Submersibles use pressurized fluid from the surface to drive a hydraulic motor downhole, rather than an electric motor, and are used in heavy oil applications with heated water as the motive fluid.
History
In 1928, the Armenian oil delivery system engineer and inventor Armais Arutunoff successfully installed the first submersible oil pump in an oil field. In 1929, Pleuger Pumps (today Pleuger Industries) developed the design of the submersible turbine pump, the forerunner of the modern multi-stage submersible pump.
Working principle
Electric submersible pumps are multistage centrifugal pumps operating in a vertical position. Liquids, accelerated by the impeller, lose their kinetic energy in the diffuser, where a conversion of kinetic to pressure energy takes place. This is the main operational mechanism of radial and mixed flow pumps. In the HSP, the motor is a hydraulic motor rather than an electrical motor, and may be closed cycle (keeping the power fluid separate from the produced fluid) or open cycle (mingling the power fluid with the produced fluid downhole, with surface separation).
The pump shaft is connected to the gas separator or the protector by a mechanical coupling at the bottom of the pump. Fluids enter the pump through an intake screen and are lifted by the pump stages. Other parts include the radial bearings (bushings) distributed along the length of the shaft, providing radial support to the pump shaft. An optional thrust bearing takes up part of the axial forces arising in the pump, but most of those forces are absorbed by the protector's thrust bearing.
There are also screw-type submersible pumps, in which a steel screw is used as the working element. The screw allows the pump to work in water with a high content of sand and other mechanical impurities.
Applications
Submersible pumps are found in many applications. Single stage pumps are used for drainage, sewage pumping, general industrial pumping and slurry pumping. They are also popular in pond filters. Multiple stage submersible pumps are typically lowered down a borehole, and are most typically used for residential, commercial, municipal and industrial water extraction (abstraction), water wells and oil wells.
Other uses for submersible pumps include sewage treatment plants, seawater handling, fire fighting (using flame-retardant cable), water well and deep well drilling, offshore drilling rigs, artificial lifts, mine dewatering, and irrigation systems.
Pumps in electrical hazardous locations used for combustible liquids or for water that may be contaminated with combustible liquids must be designed not to ignite the liquid or vapors.
Use in oil wells
Submersible pumps are used in oil production to provide a relatively efficient form of "artificial lift", able to operate across a broad range of flow rates and depths. By decreasing the pressure at the bottom of the well (by lowering bottom-hole flowing pressure, or increasing drawdown), significantly more oil can be produced from the well when compared with natural production. The pumps are typically electrically powered and referred to as electrical submersible pumps (ESP); if hydraulically powered, they are referred to as hydraulic submersible pumps (HSP).
ESP systems consist of both surface components (housed in the production facility, for example an oil platform), and sub-surface components (found in the well hole). Surface components include the motor controller (often a variable speed controller), surface cables and transformers. The subsurface components are deployed by attaching to the downhole end of a tubing string, while at the surface, and then lowered into the well bore along with the tubing.
A high-voltage (3 to 5 kV) alternating-current source at the surface drives the subsurface motor. Until recently, ESPs had been costly to install due to the requirement of an electric cable extending from the source to the motor. This cable had to be wrapped around jointed tubing and connected at each joint. New coiled tubing umbilicals allow for both the piping and electric cable to be deployed with a single conventional coiled tubing unit. Cables for sensor and control data may also be included.
The subsurface components generally include a pump portion and a motor portion, with the motor downhole from the pump. The motor rotates a shaft that, in turn, rotates pump impellers to lift fluid through production tubing to the surface. These components must work reliably at the high temperatures and pressures encountered in deep wells, with high energy requirements of up to 1000 horsepower (750 kW). The pump itself is a multi-stage unit, with the number of stages being determined by the operating requirements. Each stage includes an impeller and diffuser. Each impeller is coupled to the rotating shaft and accelerates fluid from near the shaft radially outward. The fluid then enters a non-rotating diffuser, which is not coupled to the shaft and contains vanes that direct fluid back toward the shaft. Pumps come in diameters from 90 mm (3.5 inches) to 254 mm (10 inches) and vary considerably in length. The motor used to drive the pump is typically a three-phase, squirrel cage induction motor, with a nameplate power rating in the range 7.5 kW to 560 kW (at 60 Hz).
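As a rough illustration of why the stage count is determined by the operating requirements: each impeller/diffuser stage adds roughly constant head, so the number of stages scales with the total head to be delivered. The sketch below uses made-up figures, not vendor data.

```python
# Back-of-envelope ESP sizing: stages = total dynamic head / head per
# stage, rounded up. All numbers are hypothetical.

import math

def stages_needed(total_dynamic_head_m, head_per_stage_m):
    """Each stage adds roughly constant head, so divide and round up."""
    return math.ceil(total_dynamic_head_m / head_per_stage_m)

# e.g. lifting fluid 1800 m with stages that each add ~7.5 m of head:
print(stages_needed(1800.0, 7.5))   # 240 stages
```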
ESP assemblies may also include: seals coupled to the shaft between the motor and pump; screens to reject sand; and fluid separators at the pump intake that separate gas, oil and water. ESPs have dramatically lower efficiencies with significant fractions of gas (greater than about 10% by volume at the pump intake), so separating gas from oil prior to the pump can be important. Some ESPs include a water/oil separator which permits water to be re-injected downhole. As some wells produce up to 90% water, and fluid lift is a significant cost, re-injecting water before lifting it to the surface can reduce energy consumption and improve economics. Given ESPs' high rotational speed of up to 4000 rpm (67 Hz) and tight clearances, they are not very tolerant of solids such as sand.
There are at least 15 brands of oilfield ESPs used throughout the world.
Submersible pump cable (SPC)
Submersible pump cables are electrical conductors designed for use in wet ground or under water, with types specialized for pump environmental conditions.
A submersible pump cable is a specialized product to be used for a submersible pump in a deep well, or in similarly harsh conditions. The cable needed for this type of application must be durable and reliable, as the installation location and environment can be extremely restrictive as well as hostile. As such, submersible pump cable can be used in both fresh and salt water. It is also suitable for direct burial and within well castings. A submersible pump cable's area of installation is physically restrictive. Cable manufacturers must keep these factors in mind to achieve the highest possible degree of reliability.
The size and shape of submersible pump cable can vary depending on the usage and preference and pumping instrument of the installer. Pump cables are made in single and multiple conductor types and may be flat or round in cross section; some types include control wires as well as power conductors for the pump motor.
Conductors are often color-coded for identification and an overall cable jacket may also be color-coded.
In 3- and 4-core cables of the types listed below, plain copper or tinned copper is used as the conductor.
PVC 3&4 Core Cable
Flat Cable
Round Cable
Rubber 3&4 Core Cable
Flat Cable
Round Cable
Flat Drincable
H07RN-F Cable
See also
Centrifugal pump
Eductor-jet pump
Booster pump
Drum pump
References
External links
Versatile Pump Works Under Water, July 1947, Popular Science excellent cutaway drawing of large public water works submersible pump design
Pumps
Irrigation
Oil wells | Submersible pump | Physics,Chemistry | 1,738 |
37,644,961 | https://en.wikipedia.org/wiki/Dally%20%28gene%29 | Dally (division abnormally delayed) is the name of a gene that encodes a HS-modified-protein found in the fruit fly (Drosophila melanogaster). The protein has to be processed after being codified, and in its mature form it is composed by 626 amino acids, forming a proteoglycan rich in heparin sulfate which is anchored to the cell surface via covalent linkage to glycophosphatidylinositol (GPI), so we can define it as a glypican. For its normal biosynthesis it requires sugarless (sgl), a gene that encodes an enzyme which plays a critical role in the process of modification of dally.
Dally’s function
Dally works as a co-receptor of secreted signaling molecules such as fibroblast growth factor, vascular endothelial growth factor, hepatocyte growth factor and members of the Wnt, TGF-β and Hedgehog families. It is also necessary for the cell division patterning during the post-embryonic development of the nervous system.
It is a regulatory component of the Wg receptor and is part of a multiprotein complex together with Frizzled (Fz) transmembrane proteins. It thereby regulates two growth factors in Drosophila melanogaster, Wingless (Wg) and Decapentaplegic (Dpp). In vertebrates, the equivalents of Dpp are the bone morphogenetic proteins, and the mammalian equivalent of Wg might be integrin-beta 4. The former (Wg) controls cell proliferation and differentiation during embryonic development, specifically in the epidermis, whereas the latter (Dpp) plays a role in the growth of the imaginal discs.
Dpp and Wg are mutually antagonistic in patterning genitalia. Concretely, dally selectively regulates both Wg signalling in the epidermis and Dpp signalling in the genitalia. This selectivity is thought to be controlled by the type of glycosaminoglycan (GAG) bound to the dally protein, given that there is a huge structural variety among GAGs.
Tissue malformations occur in various situations. As noted above, the enzyme encoded by sgl is essential for the normal biosynthesis of dally, so its absence or malfunction prevents correct Wg and Dpp signalling. The expression of mutated dally proteins also alters Wnt signalling pathways, which leads to anomalies in the eye, antennal, genital, wing and neural morphogenesis of Drosophila melanogaster.
Gene location
The dally gene is located on chromosome 3, specifically in the region 3L 8820605–8884292.
Mutations and their effects
Mutations of dally result from P-element insertion and depend on where the element is located. The mutants dally-P1 and dally-P2 are distinguished by the site of the P-element insertion; dally-P2 is known to generate a greater number of defects. Mutated dally disrupts cell cycle progression, delaying the process during the G2–mitosis transition. As a matter of fact, mutations affecting dally disrupt the patterning of many tissues, for instance of the nervous system. Dally mutants display cell cycle progression defects in specific sets of dividing cells. These mutations are pleiotropic and can affect viability and produce morphological defects in several adult tissues, such as the eye, antenna, wing and genitalia.
Treatment
Once the mutant protein has been synthesized and is functional, there is no turning back, and the individual carries the mutation. However, if the mutant dally protein has been synthesized but is not yet performing its function, a chaperone can recognize it and attempt to correct its folding, or send it directly to the proteasome for degradation via ubiquitination.
Nevertheless, there is another possible remedy when malformations have occurred as a result of a loss of Wg activity. Ectopic dally can potentiate Wg signaling, but this effect depends on some Wg activity remaining at the cell surface. Moreover, ectopic expression of dally+ from an hs-dally+ transgene stimulates Wg signaling. The loss of naked larval cuticle is thus rescued and, once the larva has become an adult, its tissues perform their normal function. Despite this, strong expression of dally+ results in the death of most Drosophila melanogaster embryos.
References
External links
Mammalian homologue of Wg that might be integrin-beta 4
Wnt signaling pathway - Homo sapiens (human)
Further reading
Biochemistry | Dally (gene) | Chemistry,Biology | 1,023 |
63,942,140 | https://en.wikipedia.org/wiki/Mechanical%20Workshops%20Wilhelm%20Albrecht | The Mechanical Workshops Wilhelm Albrecht (MWA) () were founded in 1926 by the innovator, engineer, and entrepreneur, Wilhelm Albrecht in Berlin-Tempelhof. The logo he designed became an internationally known trademark for complete device systems for image-synchronous sound recording and processing in film and television studios since the 1950s.
History of MWA
Early years
Initially, the company developed and produced kits for radio receivers and supplied them to end users, and subsequently, in advanced versions, to industrial radio manufacturers such as Blaupunkt.
In 1936, the company moved to larger premises at Juliusstrasse in Berlin's Neukölln district. In the following years, development and manufacturing concentrated on equipment for communication technology. In 1944, the factory was partially damaged by a bomb attack.
The first post-war years
In the remaining workshops, using the inventory of materials and machinery saved after the end of the war, the company manufactured everyday items of the period (e.g. tobacco cutting machines) and carried out repairs of damaged industrial equipment.
Entry into film sound engineering
In this context, Albrecht came into contact with remaining companies of the Berlin film industry. As early as 1946, MWA received an order from Berlin-based Kaudel-Film to design and manufacture an optical sound camera (LTK 1) and other devices for sound recording and processing of feature films in sync with the picture. In addition, the company developed and manufactured stationary and portable sound mixing consoles for film studios.
However, Albrecht soon realized that the future of film sound recording and processing would not be the optical sound technology that had been used until then, but the magnetic sound process, which had already been experimented with in the USA. He developed the first magnetic film sound device in Europe, the so-called magnetic sound camera (MTK 1).
Development from 1950
At the beginning of 1950, MWA delivered the MTK 1 to the UFA studios in Berlin-Tempelhof, where it was used for dubbing feature films and for sound recording of new productions until 1970. After a public presentation of the MTK 1, UFA praised it as a “masterpiece of modern film equipment construction, and also in a new area of sound film technology” (see letter from UFA dated April 4, 1950).
In terms of technology, this was already the breakthrough, but unrestricted delivery to film studios all over the world could take place only after a protracted patent dispute had ended. Numerous further developments followed, from the MTK 1 to the MR 10 travel model, supplemented by the KT 2 camera table and by devices such as amplifiers and sound-erasing devices (de-magnetizers). The column-shaped magnetic film recorders/reproducers were a major adaptation to studio needs – starting in 1950 with the MB 1, followed by numerous successor models up to the MB 51, which were used in film and television studios worldwide for decades.
In parallel to the constant further development of devices using magnetic sound technology – later also for the new medium of television – Albrecht also contributed significant designs to phonograph technology.
In 1956, the company was transformed into a GmbH (in English: Ltd.). Wilhelm Albrecht and his wife Helene, who had headed the commercial division since 1945, were appointed managing directors with sole authorization, and the engineer Günter Kieß was appointed Technical Manager. Kieß held this position until 1991 and extensively documented MWA equipment and systems.
In 1961, the company moved to larger premises on Maybachufer in Berlin-Neukölln. After the death of the company’s founder in 1962, his widow Helene Albrecht became general manager, a position which she held until 1974.
The continuous and innovative further development of the sound devices, soon supplemented by picture film transports and film scanners as well as complementary control systems, was groundbreaking for the increasingly sophisticated production processes in film and television studios and consolidated the company's position on the international market.
In 1974, Margret Albrecht, daughter of Wilhelm and Helene Albrecht, took over the general management of the company.
After the development capacities had been expanded – supported, owing to their significance, by state funding for research and development – and the production premises enlarged, operational procedures optimized and additional sales channels created, production and delivery capacities doubled within six years. In 1980, Wilhelm Albrecht GmbH was included in the ADAC travel guide listing technical places of interest in Germany.
History from the 80s
In 1984, Helene Kunow-Albrecht and Margret Nilsson-Albrecht sold their shares to Berliner Elektro Beteiligungen, which enabled the successful flotation as Berliner Elektro Holding AG (stock company).
Soon it became evident that digital technology would also largely replace the conventional analogue magnetic sound process. As a result, under the direction of engineer Peter Stroetzel (general manager since 1990), the laser optical sound camera (LLK 3) for the production of optical sound negatives was developed, manufactured and, from 1996, sold to film studios and film laboratories all around the world. However, after the traditional medium of film with an optical sound track was replaced by digital video technology, this market soon became saturated.
In 2002, Stroetzel, who had taken over the company shares from Berliner Elektro Holding AG in 1997, applied for insolvency, and the business was finally taken over by MWA Nova GmbH. The logo, by then over 75 years old, has been preserved.
Literature
1979: Hans Borgelt: Filmstadt Berlin, Nicolaische Verlagsbuchhandlung Berlin, pp. 58–59 and 60, see: http://d-nb.info/790229765
1980: Willi Paul: Technical Places of Interest in Germany, Volume V, Berlin, ADAC Verlag GmbH.
References
External links
Wilhelm Albrecht and his MWA
MWA - product history
Filmsound Sweden
Sven Fahlén, Europa Film Stockholm
Mobile travel magnetic sound camera (photo, around 1950)
1926 establishments in Germany
1997 disestablishments in Germany
Audio engineering | Mechanical Workshops Wilhelm Albrecht | Engineering | 1,258 |
60,924,277 | https://en.wikipedia.org/wiki/NGC%204800 | NGC 4800 is an isolated spiral galaxy in the constellation Canes Venatici, located at a distance of from the Milky Way. It was discovered by William Herschel on April 1, 1788. The morphological classification of this galaxy is SA(rs)b, indicating a spiral galaxy with no visual bar at the nucleus (SA), an incomplete ring structure (rs), and moderately-tightly wound spiral arms (b). The galactic plane is inclined to the line of sight by an angle of 43°, and the long axis is oriented along a position angle of 25°. There is a weak bar structure at the nucleus that is visible in the infrared.
The galaxy has a low-luminosity active galactic nucleus with an HII region at the core. The circumnuclear zone contains a double ring structure of "ultra-compact nuclear rings"; the inner ring has a radius of 30 pc and the outer ring's radius is about 130 pc. The upper limit on the mass of the central supermassive black hole is estimated as 2×10⁷ solar masses, or 20 million times the mass of the Sun.
References
External links
Unbarred spiral galaxies
Canes Venatici
4800
043931 | NGC 4800 | Astronomy | 246 |
17,701,208 | https://en.wikipedia.org/wiki/Casopitant | Casopitant (), former tentative trade names Rezonic (U.S.) and Zunrisa (Europe), is an NK1 receptor antagonist which was undergoing research for the treatment of chemotherapy-induced nausea and vomiting. It was under development by GlaxoSmithKline. In July 2008, the company filed a marketing authorisation application with the European Medicines Agency. The application was withdrawn and development was discontinued in September 2009 because GlaxoSmithKline decided that further safety assessment was necessary. However, a 2022 review listed casopitant as under development as a potential novel antidepressant for the treatment of major depressive disorder, with a phase 2 clinical trial having been completed.
References
Abandoned drugs
Acetamides
Antiemetics
Fluoroarenes
NK1 receptor antagonists
Piperazines
Piperidines
Trifluoromethyl compounds | Casopitant | Chemistry | 181 |
2,421,084 | https://en.wikipedia.org/wiki/Isotopomer | Isotopomers or isotopic isomers are isomers which differ by isotopic substitution, and which have the same number of atoms of each isotope but in a different arrangement. For example, CH3OD and CH2DOH are two isotopomers of monodeuterated methanol.
The molecules may be either structural isomers (constitutional isomers) or stereoisomers depending on the location of the isotopes. Isotopomers have applications in areas including nuclear magnetic resonance spectroscopy, reaction kinetics, and biochemistry.
Description
Isotopomers or isotopic isomers are isomers with isotopic atoms, having the same number of each isotope of each element but differing in their positions in the molecule. The result is that the molecules are either constitutional isomers or stereoisomers solely based on isotopic location. The term isotopomer was first proposed by Seeman and Paine in 1992 to distinguish isotopic isomers from isotopologues (isotopic homologues).
Examples
CH3CHDCH3 and CH3CH2CH2D are a pair of structural isotopomers of propane.
(R)- and (S)-CH3CHDOH are isotopic stereoisomers of ethanol.
(Z)- and (E)-CH3CH=CHD are examples of isotopic stereoisomers of propene.
Use
13C-NMR
In nuclear magnetic resonance spectroscopy, the highly abundant 12C isotope does not produce any signal whereas the comparably rare 13C isotope is easily detected. As a result, carbon isotopomers of a compound can be studied by carbon-13 NMR to learn about the different carbon atoms in the structure. Each individual structure that contains a single 13C isotope provides data about the structure in its immediate vicinity. A large sample of a chemical contains a mixture of all such isotopomers, so a single spectrum of the sample contains data about all carbons in it. Nearly all of the carbon in normal samples of carbon-based chemicals is 12C, with only about 1% abundance of 13C, so there is only about a 1% abundance of the total of the singly-substituted isotopologues, and exponentially smaller amounts of structures having two or more 13C in them. The rare case where two adjacent carbon atoms in a single structure are both 13C causes a detectable coupling effect between them as well as signals for each one itself. The INADEQUATE correlation experiment uses this effect to provide evidence for which carbon atoms in a structure are attached to each other, which can be useful for determining the actual structure of an unknown chemical.
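Assuming roughly 1.1% natural 13C abundance and independent labelling of sites (a simplification), the fractions of the unlabelled, singly substituted and doubly substituted species discussed above follow a binomial law, as in this sketch:

```python
# Rough abundance of carbon isotopologues under a binomial model.
# Each of the comb(n, k) distinct isotopomers with k labels is then
# equally abundant under the independence assumption.

from math import comb

P13 = 0.011   # approximate natural 13C fraction

def isotopologue_fraction(n_carbons, k_labeled):
    """Fraction of molecules with exactly k 13C among n carbon sites."""
    return comb(n_carbons, k_labeled) * P13**k_labeled * (1 - P13)**(n_carbons - k_labeled)

# A six-carbon skeleton, e.g. a benzene-like ring:
print(isotopologue_fraction(6, 0))  # ~0.94   (all 12C, NMR-silent carbons)
print(isotopologue_fraction(6, 1))  # ~0.062  (singly substituted, visible)
print(isotopologue_fraction(6, 2))  # ~0.0017 (doubly substituted, INADEQUATE)
```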
Reaction kinetics
In reaction kinetics, a rate effect is sometimes observed between different isotopomers of the same chemical. This kinetic isotope effect can be used to study reaction mechanisms by analyzing how the differently massed atom is involved in the process.
Biochemistry
In biochemistry, differences between the isotopomers of biochemicals such as starches is of practical importance in archaeology. They offer clues to the diet of prehistoric humans that lived as long ago as paleolithic times. This is because naturally occurring carbon dioxide contains both 12C and 13C. Monocots, such as rice and oats, differ from dicots, such as potatoes and tree fruits, in the relative amounts of 12CO2 and 13CO2 that they incorporate into their tissues as products of photosynthesis. When tissues of such subjects are recovered, usually tooth or bone, the relative isotopic content can give useful indications of the main source of the staple foods of the subjects of the investigations.
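Such dietary inferences are usually expressed with the standard delta notation, which reports the per-mil deviation of a sample's 13C/12C ratio from that of a reference standard; the sample value below is a placeholder, not a measurement.

```python
# Delta-13C notation: per-mil deviation of a sample's 13C/12C ratio R
# from a reference standard's ratio. Numbers here are illustrative.

def delta_13c_permil(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0

R_STANDARD = 0.0112372   # commonly quoted VPDB-style reference ratio
print(delta_13c_permil(0.01094, R_STANDARD))  # ~ -26 per mil, plant-like
```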
Cumomer
A cumomer is a set of isotopomers sharing similar properties and is a concept that relates to metabolic flux analysis. The concept was developed in 1999. In a metabolic cascade, many molecules will contain the same pattern of isotope labelling. In order to simplify the analysis of such cascades, molecules with identically labelled atoms are aggregated into a virtual molecule called a cumomer (a conflation of cumulative and isotopomer).
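A minimal sketch of the aggregation, with a made-up two-carbon isotopomer distribution: the cumomer fraction for a set of positions is the summed fraction of every isotopomer labelled at (at least) those positions.

```python
# Isotopomer -> cumomer aggregation as used in metabolic flux analysis.
# An isotopomer is a labelling pattern (tuple of 0/1 per carbon).

def cumomer_fraction(isotopomer_fractions, positions):
    """isotopomer_fractions: dict mapping 0/1-tuples -> mole fraction."""
    return sum(frac for pattern, frac in isotopomer_fractions.items()
               if all(pattern[i] == 1 for i in positions))

# A 2-carbon metabolite with an illustrative isotopomer distribution:
x = {(0, 0): 0.50, (1, 0): 0.20, (0, 1): 0.20, (1, 1): 0.10}
print(cumomer_fraction(x, []))      # 1.0  (0-cumomer: all molecules)
print(cumomer_fraction(x, [0]))     # 0.3  (labelled at carbon 1)
print(cumomer_fraction(x, [0, 1]))  # 0.1  (labelled at both carbons)
```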
See also
Mass (mass spectrometry)
Isotopocule
References
Further reading
Physical chemistry
Isomerism | Isotopomer | Physics,Chemistry | 867 |
39,819,074 | https://en.wikipedia.org/wiki/Acoustic%20droplet%20vaporization | Acoustic droplet vaporization (ADV) is the process by which superheated liquid droplets are phase-transitioned into gas bubbles by means of ultrasound. Perfluorocarbons and halocarbons are often used for the dispersed medium, which forms the core of the droplet. The surfactant, which forms a stabilizing shell around the dispersive medium, is usually composed of albumin or lipids.
There exist two main hypotheses that explain the mechanism by which ultrasound induces vaporization. One posits that the ultrasonic field interacts with the dispersed medium so as to cause vaporization in the droplet core. The other suggests that shockwaves from inertial cavitation, occurring near or within the droplet, cause the dispersed medium to vaporize.
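One factor commonly invoked in discussions of why such superheated droplets remain metastable until insonified is the Laplace overpressure from surface tension, which grows as droplets shrink. A back-of-envelope sketch, with an illustrative interfacial tension rather than a measured value:

```python
# Laplace overpressure across a spherical droplet interface: 2*sigma/r.
# The interfacial tension below is an illustrative order of magnitude.

def laplace_overpressure_pa(surface_tension_n_per_m, radius_m):
    """Pressure jump between droplet interior and surroundings."""
    return 2.0 * surface_tension_n_per_m / radius_m

sigma = 0.025   # N/m, illustrative for a surfactant-coated emulsion
for r_um in (0.1, 1.0, 10.0):
    dp = laplace_overpressure_pa(sigma, r_um * 1e-6)
    print(f"r = {r_um:5.1f} um  ->  delta P = {dp / 1000:7.1f} kPa")
# Smaller droplets carry a larger internal overpressure.
```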
See also
Acoustic droplet ejection
References
Phase transitions | Acoustic droplet vaporization | Physics,Chemistry | 172 |
14,118,993 | https://en.wikipedia.org/wiki/Adenosine%20A2A%20receptor | {{DISPLAYTITLE:Adenosine A2A receptor}}
The adenosine A2A receptor, also known as ADORA2A, is an adenosine receptor, and also denotes the human gene encoding it.
Structure
This protein is a member of the G protein-coupled receptor (GPCR) family, which possess seven transmembrane alpha helices, as well as an extracellular N-terminus and an intracellular C-terminus. Furthermore, located on the intracellular side close to the membrane is a small alpha helix, often referred to as helix 8 (H8). The crystallographic structure of the adenosine A2A receptor reveals a ligand binding pocket distinct from that of other structurally determined GPCRs (i.e., the beta-2 adrenergic receptor and rhodopsin). Below this primary (orthosteric) binding pocket lies a secondary (allosteric) binding pocket. The crystal structure of A2A bound to the antagonist ZM241385 (PDB code: 4EIY) showed that a sodium ion can be found in this location of the protein, thus giving it the name 'sodium-ion binding pocket'.
Heteromers
The actions of the A2A receptor are complicated by the fact that a variety of functional heteromers composed of a mixture of A2A subunits with subunits from other unrelated G protein-coupled receptors have been found in the brain, adding a further degree of complexity to the role of adenosine in the modulation of neuronal activity. Heteromers consisting of adenosine A1/A2A, dopamine D2/A2A and D3/A2A, glutamate mGluR5/A2A and cannabinoid CB1/A2A have all been observed, as well as CB1/A2A/D2 heterotrimers, and the functional significance and endogenous roles of these hybrid receptors are still only starting to be unravelled.
The receptor's role in immunomodulation in the context of cancer has suggested that it is an important immune checkpoint molecule.
Function
The gene encodes a protein which is one of several receptor subtypes for adenosine. The activity of the encoded protein, a G protein-coupled receptor family member, is mediated by G proteins which activate adenylyl cyclase, inducing the synthesis of intracellular cAMP. The A2A receptor binds the Gs protein at the intracellular side of the receptor. The Gs protein consists of three subunits: Gsα, Gsβ and Gsγ. A crystal structure of the A2A receptor bound to the agonist NECA and a G protein mimic was published in 2016 (PDB code: 5G53).
The encoded protein (the A2A receptor) is abundant in basal ganglia, vasculature, T lymphocytes, and platelets, and it is a major target of caffeine, which is a competitive antagonist of this protein.
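Competitive antagonism of the kind caffeine exerts at this receptor is classically described by the Gaddum equation, in which the antagonist simply scales the agonist's apparent dissociation constant. A sketch with placeholder constants (not measured values for adenosine or caffeine):

```python
# Gaddum equation for fractional receptor occupancy with a competitive
# antagonist present. All K values and concentrations are placeholders.

def fractional_occupancy(agonist, k_a, antagonist=0.0, k_b=1.0):
    """occ = [A] / ([A] + K_A * (1 + [B]/K_B))."""
    return agonist / (agonist + k_a * (1.0 + antagonist / k_b))

K_A, K_B = 1.0, 10.0   # arbitrary units
for b in (0.0, 10.0, 100.0):
    occ = fractional_occupancy(agonist=1.0, k_a=K_A, antagonist=b, k_b=K_B)
    print(f"[antagonist] = {b:6.1f}  ->  agonist occupancy = {occ:.2f}")
# Raising the antagonist concentration lowers occupancy, but a large
# enough agonist concentration can always overcome the blockade.
```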
Physiological role
A1 and A2A receptors are believed to regulate myocardial oxygen demand and to increase coronary circulation by vasodilation. In addition, A2A receptor can suppress immune cells, thereby protecting tissue from inflammation.
The A2A receptor is also expressed in the brain, where it has important roles in the regulation of glutamate and dopamine release, making it a potential therapeutic target for the treatment of conditions such as insomnia, pain, depression, and Parkinson's disease.
Ligands
A number of selective A2A ligands have been developed, with several possible therapeutic applications.
Older research on adenosine receptor function, and non-selective adenosine receptor antagonists such as aminophylline, focused mainly on the role of adenosine receptors in the heart, and led to several randomized controlled trials using these receptor antagonists to treat bradyasystolic arrest.
However the development of more highly selective A2A ligands has led towards other applications, with the most significant focus of research currently being the potential therapeutic role for A2A antagonists in the treatment of Parkinson's disease.
Agonists
Adenosine
ATL-146e
Binodenoson
Cannabidiol
CGS-21680
DPMA (N6-(2-(3,5-dimethoxyphenyl)-2-(2-methylphenyl)ethyl)adenosine)
Limonene
LUF-5833
NECA (5′-(N-ethylcarboxamido)adenosine)
Regadenoson
UK-432,097
YT-146 (2-octynyladenosine)
Zeatin riboside
Antagonists
ATL-444
Caffeine
Istradefylline (KW-6002)
Lu AA41063
Lu AA47070
MSX-2
MSX-3
Preladenant (SCH-420,814)
SCH-58261
SCH-412,348
SCH-442,416
ST-1535
Theophylline
VER-6623
VER-6947
VER-7835
Vipadenant (BIIB-014)
ZM-241,385
Interactions
The adenosine A2A receptor has been shown to interact with the dopamine receptor D2. As a result of this interaction, the adenosine A2A receptor decreases activity at dopamine D2 receptors.
In cancer immunotherapy
The adenosine A2A receptor has also been shown to play a regulatory role in the adaptive immune system. In this role, it functions similarly to the programmed cell death-1 (PD-1) and cytotoxic T-lymphocyte associated protein-4 (CTLA-4) receptors, namely to suppress the immunologic response and prevent associated tissue damage. Extracellular adenosine accumulates in response to cellular stress and breakdown through interactions with hypoxia-induced HIF-1α. Abundant extracellular adenosine can then bind to the A2A receptor, triggering a Gs-protein-coupled response and the accumulation of intracellular cAMP, which functions primarily through protein kinase A to upregulate inhibitory cytokines such as transforming growth factor-beta (TGF-β) and inhibitory receptors (i.e., PD-1). Interaction with FOXP3 drives CD4+ T-cells to become regulatory Treg cells, further inhibiting the immune response.
Blockade of A2AR has been attempted to various ends, namely cancer immunotherapy. While several A2A receptor antagonists have progressed to clinical trials for the treatment of Parkinson's disease, A2AR blockade in the context of cancer is less characterized. Mice treated with A2AR antagonists, such as ZM241385 (listed above) or caffeine, show significantly delayed tumor growth due to T-cells resistant to inhibition. This is further highlighted by A2AR knockout mice, which show increased tumor rejection. Inhibition of multiple checkpoint pathways has an additive effect, as shown by an increased response when PD-1 and CTLA-4 are blocked via monoclonal antibodies compared with blockade of a single pathway. The A2AR antagonist CPI-444 has shown this in combination with anti-PD-L1 or anti-CTLA-4 treatment, as it eliminated tumors in up to 90% of treated mice, including restoration of immune responses in models that incompletely responded to anti-PD-L1 or anti-CTLA-4 monotherapy. Further, tumor growth was fully inhibited when mice with cleared tumors were later rechallenged, indicating that CPI-444 induced systemic antitumor immune memory. Researchers believe that A2AR blockade could increase the efficacy of such treatments even further. Finally, inhibition of A2AR, through either pharmacologic or genetic targeting, in chimeric antigen receptor (CAR) T-cells has shown promising results: blockade of A2AR in this setting has been shown to increase tumor clearance by CAR T-cell therapy in mice. Targeting of the A2A receptor is an attractive option for the treatment of a variety of cancers, especially given the therapeutic success of blockade of other checkpoint pathways such as PD-1 and CTLA-4.
References
Further reading
External links
Adenosine receptors | Adenosine A2A receptor | Chemistry | 1,720 |
21,442,391 | https://en.wikipedia.org/wiki/Effects%20of%20long-term%20benzodiazepine%20use | The effects of long-term benzodiazepine use include drug dependence as well as the possibility of adverse effects on cognitive function, physical health, and mental health. Long-term use is sometimes described as use not shorter than three months. Benzodiazepines are generally effective when used therapeutically in the short term, but even then the risk of dependency can be significantly high. There are significant physical, mental and social risks associated with the long-term use of benzodiazepines. Although anxiety can temporarily increase as a withdrawal symptom, there is evidence that a reduction or withdrawal from benzodiazepines can lead to a reduction of anxiety symptoms in the long run. Due to these increasing physical and mental symptoms from long-term use of benzodiazepines, slow withdrawal is recommended for long-term users. Not everyone, however, experiences problems with long-term use.
Some of the symptoms that could possibly occur as a result of withdrawal from benzodiazepines after long-term use include emotional clouding, flu-like symptoms, suicide, nausea, headaches, dizziness, irritability, lethargy, sleep problems, memory impairment, personality changes, aggression, depression, social deterioration and employment difficulties; others, however, never have any side effects from long-term benzodiazepine use. Abruptly or rapidly stopping benzodiazepines can be dangerous; when withdrawing, a gradual reduction in dosage is recommended, under professional supervision.
While benzodiazepines are highly effective in the short term, adverse effects associated with long-term use, including impaired cognitive abilities, memory problems, mood swings, and overdoses when combined with other drugs, may make the risk-benefit ratio unfavourable. In addition, benzodiazepines have reinforcing properties in some individuals and thus are considered to be addictive drugs, especially in individuals that have a "drug-seeking" behavior; further, a physical dependence can develop after a few weeks or months of use. Many of these adverse effects associated with long-term use of benzodiazepines begin to show improvements three to six months after withdrawal.
Other concerns about the effects associated with long-term benzodiazepine use, in some, include dose escalation, benzodiazepine use disorder, tolerance and benzodiazepine dependence and benzodiazepine withdrawal problems. Both physiological tolerance and dependence can be associated with worsening the adverse effects associated with benzodiazepines. Increased risk of death has been associated with long-term use of benzodiazepines in several studies; however, other studies have not found increased mortality. Due to conflicting findings in studies regarding benzodiazepines and increased risks of death including from cancer, further research in long-term use of benzodiazepines and mortality risk has been recommended; most of the available research has been conducted in prescribed users, even less is known about illicit misusers. The long-term use of benzodiazepines is controversial and has generated significant debate within the medical profession. Views on the nature and severity of problems with long-term use of benzodiazepines differ from expert to expert and even from country to country; some experts even question whether there is any problem with the long-term use of benzodiazepines.
Symptoms
Effects of long-term benzodiazepine use may include disinhibition, impaired concentration and memory, depression, as well as sexual dysfunction. The long-term effects of benzodiazepines may differ from the adverse effects seen after acute administration of benzodiazepines. An analysis of cancer patients found that those who took tranquillisers or sleeping tablets had a substantially poorer quality of life on all measurements conducted, as well as a worse clinical picture of symptomatology. Worsening of symptoms such as fatigue, insomnia, pain, dyspnea and constipation was found when compared against those who did not take tranquillisers or sleeping tablets. Most individuals who successfully discontinue hypnotic therapy after a gradual taper and do not take benzodiazepines for 6 months have less severe sleep and anxiety problems, are less distressed and have a general feeling of improved health at 6-month follow-up. The use of benzodiazepines for the treatment of anxiety has been found to lead to a significant increase in healthcare costs due to accidents and other adverse effects associated with the long-term use of benzodiazepines.
Cognitive status
Long-term benzodiazepine use can lead to a generalised impairment of cognition, including sustained attention, verbal learning and memory and psychomotor, visuo-motor and visuo-conceptual abilities. Transient changes in the brain have been found using neuroimaging studies, but no brain abnormalities have been found in patients treated long term with benzodiazepines. When benzodiazepine users cease long-term benzodiazepine therapy, their cognitive function improves in the first six months, although deficits may be permanent or take longer than six months to return to baseline. In the elderly, long-term benzodiazepine therapy is a risk factor for amplifying cognitive decline, although gradual withdrawal is associated with improved cognitive status. One study of alprazolam found that eight weeks of administration resulted in deficits that were detectable after several weeks but not after 3.5 years.
Effect on sleep
Sleep can be adversely affected by benzodiazepine dependence. Possible adverse effects on sleep include induction or worsening of sleep-disordered breathing. Like alcohol, benzodiazepines are commonly used to treat insomnia in the short term (both prescribed and self-medicated), but worsen sleep in the long term. Although benzodiazepines can put people to sleep, the drugs disrupt sleep architecture while asleep: they decrease sleep time, delay and decrease REM sleep, increase alpha and beta activity, decrease K complexes and delta activity, and decrease deep slow-wave sleep (i.e., NREM stages 3 and 4, the most restorative part of sleep for both energy and mood).
Mental and physical health
The long-term use of benzodiazepines may have a similar effect on the brain as alcohol, and is also implicated in depression, anxiety, post-traumatic stress disorder (PTSD), mania, psychosis, sleep disorders, sexual dysfunction, delirium, and neurocognitive disorders. However, a 2016 study found no association between long-term usage and dementia. As with alcohol, the effects of benzodiazepines on neurochemistry, such as decreased levels of serotonin and norepinephrine, are believed to be responsible for their effects on mood and anxiety. Additionally, benzodiazepines can indirectly cause or worsen other psychiatric symptoms (e.g., mood, anxiety, psychosis, irritability) by worsening sleep (i.e., benzodiazepine-induced sleep disorder). These effects stand in paradoxical contrast to the use of benzodiazepines, both clinically and non-medically, in the management of mental health conditions.
Long-term benzodiazepine use may lead to the creation or exacerbation of physical and mental health conditions, which improve after six or more months of abstinence. After a period of about 3 to 6 months of abstinence after completion of a gradual-reduction regimen, marked improvements in mental and physical wellbeing become apparent. For example, one study of hypnotic users gradually withdrawn from their hypnotic medication reported after six months of abstinence that they had less severe sleep and anxiety problems, were less distressed, and had a general feeling of improved health. Those who remained on hypnotic medication had no improvements in their insomnia, anxiety, or general health ratings. A study found that individuals having withdrawn from benzodiazepines showed a marked reduction in use of medical and mental health services.
In approximately half of patients attending mental health services for conditions including anxiety disorders such as panic disorder or social phobia, the condition may be the result of alcohol or benzodiazepine dependence. Sometimes anxiety disorders precede alcohol or benzodiazepine dependence, but the dependence often acts to keep the anxiety disorders going and often progressively makes them worse. Many people who are addicted to alcohol or prescribed benzodiazepines decide to quit when it is explained to them that they have a choice between ongoing ill mental health or quitting and recovering from their symptoms. It has been noted that, because every individual has an individual sensitivity level to alcohol or sedative-hypnotic drugs, what one person can tolerate without ill health will cause another to develop very poor health, and that even moderate drinking in sensitive individuals can cause rebound anxiety syndromes and sleep disorders. A person who experiences the toxic effects of alcohol or benzodiazepines will not benefit from other therapies or medications, as these do not address the root cause of the symptoms. Recovery from benzodiazepine dependence tends to take a lot longer than recovery from alcohol, but people can regain their previous good health. A review of the literature regarding benzodiazepine hypnotic drugs concluded that these drugs cause an unjustifiable risk to the individual and to public health. The risks include dependence, accidents and other adverse effects. Gradual discontinuation of hypnotics leads to improved health without worsening of sleep.
Daily users of benzodiazepines are also at a higher risk of experiencing psychotic symptomatology such as delusions and hallucinations. A study of 42 patients treated with alprazolam (Xanax) found that up to a third of long-term users developed depression. Studies have shown that long-term use of benzodiazepines and of the benzodiazepine-receptor-agonist nonbenzodiazepine Z-drugs is associated with causing depression, as well as a markedly raised suicide risk and an overall increased mortality risk.
A study of 50 patients who attended a benzodiazepine withdrawal clinic found that, after several years of chronic benzodiazepine use, a large portion of patients developed health problems including agoraphobia, irritable bowel syndrome, paraesthesiae, increasing anxiety, and panic attacks, which were not preexisting. The mental and physical health symptoms induced by long-term benzodiazepine use gradually improved significantly over a period of a year following completion of a slow withdrawal. Three of the 50 patients had wrongly been given a preliminary diagnosis of multiple sclerosis when the symptoms were actually due to chronic benzodiazepine use. Ten of the patients had taken drug overdoses whilst on benzodiazepines, despite the fact that only two of the patients had any prior history of depressive symptomatology. No patients took any further overdoses in the year following withdrawal. The cause of the deteriorating mental and physical health in a significant proportion of patients was hypothesised to be increasing tolerance, with withdrawal-type symptoms emerging despite the administration of stable prescribed doses. Another theory is that chronic benzodiazepine use causes subtle increasing toxicity, which in turn leads to increasing psychopathology in long-term users of benzodiazepines.
Long-term use of benzodiazepines can induce perceptual disturbances and depersonalization in some people, even in those taking a stable daily dosage, and it can also become a protracted withdrawal feature of the benzodiazepine withdrawal syndrome.
In addition, chronic use of benzodiazepines is a risk factor for blepharospasm. Drug-induced symptoms that resemble withdrawal-like effects can occur on a set dosage as a result of prolonged use, also documented with barbiturate-like substances, as well as alcohol and benzodiazepines. This demonstrates that the effects from chronic use of benzodiazepine drugs are not unique but occur with other GABAergic sedative hypnotic drugs, i.e., alcohol and barbiturates.
Immune system
Chronic use of benzodiazepines seemed to cause significant immunological disorders in a study of selected outpatients attending a psychopharmacology department. Diazepam and clonazepam have been found to have long-lasting, but not permanent, immunotoxic effects in fetuses of rats. However, single very high doses of diazepam have been found to cause lifelong immunosuppression in neonatal rats. No studies have been done to assess the immunotoxic effects of diazepam in humans; however, high prescribed doses of diazepam in humans have been found to be a major risk factor for pneumonia, based on a study of people with tetanus. It has been proposed that diazepam may cause long-lasting changes to the GABAA receptors, with resultant long-lasting disturbances to behaviour, endocrine function and immune function.
Suicide and self-harm
Use of prescribed benzodiazepines is associated with an increased rate of suicide or attempted suicide. The prosuicidal effects of benzodiazepines are suspected to be due to a psychiatric disturbance caused by side effects or withdrawal symptoms. Because benzodiazepines in general may be associated with increased suicide risk, care should be taken when prescribing, especially to at-risk patients. Depressed adolescents who were taking benzodiazepines were found to have a greatly increased risk of self-harm or suicide, although the sample size was small. The effects of benzodiazepines in individuals under the age of 18 require further research, and additional caution is warranted when using benzodiazepines in depressed adolescents. Benzodiazepine dependence often results in an increasingly deteriorating clinical picture, which includes social deterioration leading to comorbid alcohol use disorder and substance use disorder. Misuse of benzodiazepines or of other CNS depressants increases the risk of suicide in drug misusers. Symptoms associated with the biochemical action of benzodiazepines, such as exacerbation of sleep apnea, sedation, suppression of self-care functions, amnesia and disinhibition, have been suggested as possible explanations for the increase in mortality. Studies have also clearly documented increased mortality associated with benzodiazepine use among drug misusers.
Carcinogenicity
There has been some controversy around the possible link between benzodiazepine use and development of cancer; early cohort studies in the 1980s suggested a possible link, but follow-up case-control studies have found no link between benzodiazepines and cancer. In the second U.S. national cancer study in 1982, the American Cancer Society conducted a survey of over 1.1 million participants. A markedly increased risk of cancer was found in users of sleeping pills, mainly benzodiazepines. Fifteen epidemiologic studies have suggested that benzodiazepine or nonbenzodiazepine hypnotic drug use is associated with increased mortality, mainly due to increased cancer death. The cancers included cancer of the brain, lung, bowel, breast, and bladder, and other neoplasms. It has been hypothesised that benzodiazepines depress immune function and increase viral infections and could be the cause or trigger of the increased rate of cancer. While initially U.S. Food and Drug Administration reviewers expressed concerns about approving the nonbenzodiazepine Z drugs due to concerns of cancer, ultimately they changed their minds and approved the drugs. A 2017 meta-analysis of multiple observational studies found that benzodiazepine use is associated with increased cancer risk.
Evidence of neurotoxicity
In a 1980 study of 55 consecutively admitted patients who had engaged in non-medical use exclusively of sedatives or hypnotics, neuropsychological performance was significantly lower, and signs of intellectual impairment were significantly more often diagnosed, than in a matched control group taken from the general population. These results suggested a relationship between non-medical use of sedatives or hypnotics and cerebral disorder.
A publication asked in 1981 if lorazepam is more toxic than diazepam.
In a study in 1984, 20 patients having taken long-term benzodiazepines were submitted to brain CT scan examinations. Some scans appeared abnormal. The mean ventricular-brain ratio measured by planimetry was increased over mean values in an age- and sex-matched group of control subjects but was less than that in a group of alcoholics. There was no significant relationship between CT scan appearances and the duration of benzodiazepine therapy. The clinical significance of the findings was unclear.
In 1986, it was presumed that permanent brain damage may result from chronic use of benzodiazepines similar to alcohol-related brain damage.
In 1987, 17 inpatients who had used high doses of benzodiazepines non-medically were anecdotally reported to show enlarged cerebrospinal fluid spaces with associated cerebral atrophy.
Cerebral atrophy reportedly appeared to be dose dependent with low-dose users having less atrophy than higher-dose users.
However, a CT study in 1987 found no evidence of cerebral atrophy in prescribed benzodiazepine users.
In 1989, in a 4- to 6-year follow-up study of 30 inpatients who had used benzodiazepines non-medically, neuropsychological function was found to be permanently affected in some people with long-term, high-dose non-medical use of benzodiazepines. Brain damage similar to alcohol-related brain damage was observed. The CT scan abnormalities showed dilatation of the ventricular system. However, unlike people who consume excessive alcohol, people who used sedative-hypnotic agents non-medically showed no evidence of widened cortical sulci. The study concluded that, when cerebral disorder is diagnosed in people who use high doses of sedative-hypnotic benzodiazepines, it is often permanent.
A CT study in 1993 investigated brain damage in benzodiazepine users and found no overall differences to a healthy control group.
A study in 2000 found that long-term benzodiazepine therapy does not result in brain abnormalities.
In 2001, withdrawal from high-dose use of nitrazepam was anecdotally alleged to have caused severe shock of the whole brain, with diffuse slow activity on EEG, in one patient after 25 years of use. After withdrawal, abnormalities in hypofrontal brain wave patterns persisted beyond the withdrawal syndrome, which suggested to the authors that organic brain damage had occurred from chronic high-dose use of nitrazepam.
Professor Heather Ashton, a leading expert on benzodiazepines from Newcastle University Institute of Neuroscience, has stated that there is no structural damage from benzodiazepines, and as of 1996 advocated for further research into long-lasting or possibly permanent symptoms of long-term use of benzodiazepines. She has stated that she believes the most likely explanation for lasting symptoms is persisting, but slowly resolving, functional changes at the GABAA benzodiazepine receptor level. Newer and more detailed brain-scanning technologies such as PET scans and MRI scans had, as of 2002 and to her knowledge, never been used to investigate the question of whether benzodiazepines cause functional or structural brain damage.
A 2018 review of the research found a likely causative role between the use of benzodiazepines and an increased risk of dementia, but the exact nature of the relationship is still a matter of debate.
History
Benzodiazepines, when introduced in 1961, were widely believed to be safe drugs, but as the decades went by, awareness of adverse effects connected to their long-term use grew. Recommendations for more restrictive medical guidelines followed. Concerns regarding the long-term effects of benzodiazepines have been raised since 1980 and are still not fully answered. A review in 2006 of the literature on use of benzodiazepine and nonbenzodiazepine hypnotics concluded that more research is needed to evaluate the long-term effects of hypnotic drugs. The majority of the problems of benzodiazepines are related to their long-term rather than their short-term use. There is growing evidence of the harm of long-term use of benzodiazepines, especially at higher doses. In 2007, the Department of Health recommended that individuals on long-term benzodiazepines be monitored at least every 3 months and also recommended against long-term substitution therapy in benzodiazepine drug misusers due to a lack of evidence base for effectiveness and due to the risks of long-term use. The long-term effects of benzodiazepines are very similar to the long-term effects of alcohol consumption (apart from organ toxicity) and other sedative-hypnotics, although withdrawal effects and dependence are not identical. Physical dependence and withdrawal are closely related but not the same thing: dependence can be managed under medical supervision, but withdrawal can be fatal. A report in 1987 by the Royal College of Psychiatrists in Great Britain reported that any benefits of long-term use of benzodiazepines are likely to be far outweighed by the risks of long-term use. Despite this, benzodiazepines are still widely prescribed, and the socioeconomic costs of their continued widespread prescribing are high.
Political controversy
In 1980, the Medical Research Council (United Kingdom) recommended that research be conducted into the effects of long-term use of benzodiazepines. A 2009 British Government parliamentary inquiry recommended that research into the long-term effects of benzodiazepines be carried out. The view of the Department of Health is that it has made every effort to make doctors aware of the problems associated with the long-term use of benzodiazepines, as well as the dangers of benzodiazepine drug addiction.
In 1980, the Medicines and Healthcare products Regulatory Agency's Committee on the Safety of Medicines issued guidance restricting the use of benzodiazepines to short-term use, and updated and strengthened these warnings in 1988. When asked by Phil Woolas in 1999 whether the Department of Health had any plans to conduct research into the long-term effects of benzodiazepines, the Department replied that it had no plans to do so, as benzodiazepines are already restricted to short-term use and monitored by regulatory bodies. In a House of Commons debate, Phil Woolas claimed that there had been a cover-up of problems associated with benzodiazepines because they are of too large a scale for governments, regulatory bodies, and the pharmaceutical industry to deal with. John Hutton stated in response that the Department of Health took the problems of benzodiazepines extremely seriously and was not sweeping the issue under the carpet. In 2010, the All-Party Parliamentary Group on Involuntary Tranquilliser Addiction filed a complaint with the Equality and Human Rights Commission under the Disability Discrimination Act 1995 against the Department of Health and the Department for Work and Pensions, alleging discrimination against people with a benzodiazepine prescription drug dependence as a result of denial of specialised treatment services, exclusion from medical treatment, non-recognition of the protracted benzodiazepine withdrawal syndrome, as well as denial of rehabilitation and back-to-work schemes. Additionally, the APPGITA complaint alleged that there is a "virtual prohibition" on the collection of statistical information on benzodiazepines across government departments, whereas for other controlled drugs there are enormous volumes of statistical data. The complaint alleged that the discrimination is deliberate, large scale and that government departments are aware of what they are doing.
Declassified Medical Research Council meeting
The Medical Research Council (UK) held a closed meeting among top UK medical doctors and representatives from the pharmaceutical industry between 30 October 1980 and 3 April 1981. The meeting was classified under the Public Records Act 1958 until 2014 but became available in 2005 as a result of the Freedom of Information Act. The meeting was called due to concerns that 10–100,000 people could be dependent; meeting chairman Professor Malcolm Lader later revised this estimate to include approximately half a million members of the British public suspected of being dependent on therapeutic dose levels of benzodiazepines, with about half of those on long-term benzodiazepines. It was reported that benzodiazepines may be the third- or fourth-largest drug problem in the UK (the largest being alcohol and tobacco).
The chairman followed up after the meeting with additional information, which was forwarded to the Medical Research Council neuroscience board, raising concerns regarding tests that showed definite cortical atrophy in 2 of 14 individuals tested and borderline abnormality in five others. He felt that, due to the methodology used in assessing the scans, the abnormalities were likely an underestimate, and more refined techniques would be more accurate. Also discussed were findings that tolerance to benzodiazepines can be demonstrated by injecting diazepam into long-term users; in normal subjects, increases in growth hormone occur, whereas in benzodiazepine-tolerant individuals this effect is blunted. Also raised were findings in animal studies that showed the development of tolerance in the form of a 15 percent reduction in binding capacity of benzodiazepines after seven days' administration of high doses of the partial agonist benzodiazepine drug flurazepam and a 50 percent reduction in binding capacity after 30 days of a low dose of diazepam.
The chairman was concerned that papers soon to be published would "stir the whole matter up" and wanted to be able to say that the Medical Research Council "had matters under consideration if questions were asked in parliament". The chairman felt that it "was very important, politically that the MRC should be 'one step ahead'" and recommended that epidemiological studies be funded and carried out by Roche Pharmaceuticals and that MRC-sponsored research be conducted into the biochemical effects of long-term use of benzodiazepines. The meeting aimed to identify issues that were likely to arise, alert the Department of Health to the scale of the problem and identify the pharmacology and nature of benzodiazepine dependence and the volume of benzodiazepines being prescribed. The World Health Organization was also interested in the problem, and it was felt the meeting would demonstrate to the WHO that the MRC was taking the issue seriously. Among the psychological effects of long-term use of benzodiazepines discussed was a reduced ability to cope with stress. The chairman stated that the "withdrawal symptoms from valium were much worse than many other drugs including, e.g., heroin". It was stated that the likelihood of withdrawing from benzodiazepines was "reduced enormously" if benzodiazepines were prescribed for longer than four months. It was concluded that benzodiazepines are often prescribed inappropriately, for a wide range of conditions and situations.
Dr Mason (DHSS) and Dr Moir (SHHD) felt that, due to the large numbers of people using benzodiazepines for long periods of time, it was important to determine the effectiveness and toxicity of benzodiazepines before deciding what regulatory action to take.
Controversy resulted in 2010 when the previously secret files came to light over the fact that the Medical Research Council was warned that benzodiazepines prescribed to millions of patients appeared to cause cerebral atrophy similar to hazardous alcohol use in some patients and failed to carry out larger and more rigorous studies. The Independent on Sunday reported allegations that "scores" of the 1.5 million members of the UK public who use benzodiazepines long-term have symptoms that are consistent with brain damage. It has been described as a "huge scandal" by Jim Dobbin, and legal experts and MPs have predicted a class action lawsuit. A solicitor said she was aware of the past failed litigation against the drug companies and the relevance the documents had to that court case and said it was strange that the documents were kept 'hidden' by the MRC.
Professor Lader, who chaired the MRC meeting, declined to speculate as to why the MRC declined to support his request to set up a unit to further research benzodiazepines and why it did not set up a special safety committee to look into these concerns. Professor Lader stated that he regrets not being more proactive in pursuing the issue, saying that he did not want to be labelled as someone who focused only on the problems of benzodiazepines. Professor Ashton also submitted proposals for grant-funded research using MRI, EEG, and cognitive testing in a randomized controlled trial to assess whether benzodiazepines cause permanent damage to the brain, but, like Professor Lader, was turned down by the MRC.
The MRC spokesperson said they accept the conclusions of Professor Lader's research and said that they fund only research that meets required quality standards of scientific research, and stated that they were and continue to remain receptive to applications for research in this area. No explanation was reported for why the documents were sealed by the Public Records Act.
Jim Dobbin, who chaired the All-Party Parliamentary Group for Involuntary Tranquilliser Addiction, stated that:
The legal director of Action Against Medical Accidents said urgent research must be carried out and said that, if the results of larger studies confirm Professor Lader's research, the government and MRC could be faced with one of the biggest group actions for damages the courts have ever seen, given the large number of people potentially affected. It was reported that, due to the MRC's inaction, people with enduring symptoms post-withdrawal, such as neurological pain, headaches, cognitive impairment, and memory loss, have been left in the dark as to whether these symptoms are drug-induced damage. Professor Lader reported that the results of his research did not surprise his research group, given that it was already known that alcohol could cause permanent brain changes.
Class-action lawsuit
Benzodiazepines spurred the largest-ever class-action lawsuit against drug manufacturers in the United Kingdom, in the 1980s and early 1990s, involving 14,000 patients and 1,800 law firms that alleged the manufacturers knew of the potential for dependence but intentionally withheld this information from doctors. At the same time, 117 general practitioners and 50 health authorities were sued by patients to recover damages for the harmful effects of dependence and withdrawal. This led some doctors to require a signed consent form from their patients and to recommend that all patients be adequately warned of the risks of dependence and withdrawal before starting treatment with benzodiazepines. The court case against the drug manufacturers never reached a verdict; legal aid had been withdrawn, leading to the collapse of the trial, and there were allegations that the consultant psychiatrists, the expert witnesses, had a conflict of interest. This litigation led to changes in British law, making class-action lawsuits more difficult.
Special populations
Neonatal effects
Benzodiazepines have been reported to cause teratogenic malformations, but the literature concerning their safety in pregnancy is unclear and controversial. Initial concerns regarding benzodiazepines in pregnancy began with alarming findings in animals, but these do not necessarily cross over to humans. Conflicting findings have been reported in babies exposed to benzodiazepines. A recent analysis of the Swedish Medical Birth Register found an association with preterm births, low birth weight and a moderately increased risk for congenital malformations. An increase in pylorostenosis or alimentary tract atresia was seen. An increase in orofacial clefts was not demonstrated, however, and it was concluded that benzodiazepines are not major teratogens.
Neurodevelopmental disorders and clinical symptoms are commonly found in babies exposed to benzodiazepines in utero. Benzodiazepine-exposed babies have a low birth weight but catch up to normal babies at an early age; however, the smaller head circumferences found in exposed infants persist. Other adverse effects of benzodiazepines taken during pregnancy are deviating neurodevelopmental and clinical symptoms including craniofacial anomalies, delayed development of the pincer grasp, and deviations in muscle tone and pattern of movements. Motor impairments in the babies persist for up to 1 year after birth. Gross motor development impairments take 18 months to return to normal, but fine motor function impairments persist. In addition to the smaller head circumference found in benzodiazepine-exposed babies, mental retardation, functional deficits, long-lasting behavioural anomalies, and lower intelligence occur.
Benzodiazepines, like many other sedative-hypnotic drugs, cause apoptotic neuronal cell death, although they do not cause apoptosis in the developing brain as severe as that caused by alcohol. The prenatal toxicity of benzodiazepines is most likely due to their effects on neurotransmitter systems, cell membranes and protein synthesis. This is complicated, however, by the fact that neuropsychological or neuropsychiatric effects of benzodiazepines, if they occur, may not become apparent until later childhood or even adolescence. A review of the literature found that data on long-term follow-up regarding neurobehavioural outcomes are very limited. However, a study that followed up 550 benzodiazepine-exposed children found that, overall, most children developed normally. There was a smaller subset of benzodiazepine-exposed children who were slower to develop, but by four years of age most of this subgroup had normalised. A small number of benzodiazepine-exposed children had continuing developmental abnormalities at 4-year follow-up, but it was not possible to conclude whether these deficits were the result of benzodiazepines or whether social and environmental factors explained them.
Concerns regarding whether benzodiazepines during pregnancy cause major malformations, in particular cleft palate, have been hotly debated in the literature. A meta-analysis of the data from cohort studies found no link, but a meta-analysis of case–control studies did find a significant increase in major malformations. (However, the cohort studies were homogeneous and the case–control studies were heterogeneous, thus reducing the strength of the case–control results.) There have also been several reports suggesting that benzodiazepines have the potential to cause a syndrome similar to fetal alcohol syndrome, but this has been disputed by a number of studies. As a result of the conflicting findings, use of benzodiazepines during pregnancy is controversial. The best available evidence suggests that benzodiazepines are not a major cause of birth defects, i.e. major malformations or cleft lip or cleft palate.
Elderly
Significant toxicity from benzodiazepines can occur in the elderly as a result of long-term use. Benzodiazepines, along with antihypertensives and drugs affecting the cholinergic system, are the most common cause of drug-induced dementia, affecting over 10 percent of patients attending memory clinics. Long-term use of benzodiazepines in the elderly can lead to a pharmacological syndrome with symptoms including drowsiness, ataxia, fatigue, confusion, weakness, dizziness, vertigo, syncope, reversible dementia, depression, impairment of intellect, psychomotor and sexual dysfunction, agitation, auditory and visual hallucinations, paranoid ideation, panic, delirium, depersonalization, sleepwalking, aggressivity, orthostatic hypotension and insomnia. Depletion of certain neurotransmitters and cortisol levels, and alterations in immune function and biological markers, can also occur. Elderly individuals who have been long-term users of benzodiazepines have been found to have a higher incidence of post-operative confusion. Benzodiazepines have been associated with increased body sway in the elderly, which can potentially lead to fatal accidents including falls. Discontinuation of benzodiazepines leads to improvement in the balance of the body and also to improvements in cognitive function in elderly users of benzodiazepine hypnotics, without worsening of insomnia.
A review of the evidence has found that, whilst long-term use of benzodiazepines impairs memory, its association with causing dementia is not clear and requires further research. A more recent study found that benzodiazepines are associated with an increased risk of dementia, and it is recommended that benzodiazepines be avoided in the elderly. A later study, however, found no increase in dementia associated with long-term usage of benzodiazepines.
See also
Long-term effects of alcohol consumption
Benzodiazepine withdrawal syndrome
Benzodiazepine dependence
References
Substance dependence
Drug rehabilitation
Neuropharmacology
Substance-related disorders
Syndromes
Benzodiazepines Long-term Effects
Adverse effects of psychoactive drugs | Effects of long-term benzodiazepine use | Chemistry | 7,819 |
661,947 | https://en.wikipedia.org/wiki/Project%20A119 | Project A119, also known as A Study of Lunar Research Flights, was a top-secret plan developed in 1958 by the United States Air Force. The aim of the project was to detonate a nuclear bomb on the Moon, which would help in answering some of the mysteries in planetary astronomy and astrogeology. If the explosive device detonated on the surface, and not in a lunar crater, the flash of explosive light would have been faintly visible to people on Earth with the naked eye. This was meant as a show of force resulting in a possible boosting of domestic morale in the capabilities of the United States, a boost that was needed after the Soviet Union took an early lead in the Space Race.
The project was never carried out, being cancelled after "Air Force officials decided its risks outweighed its benefits", and because a Moon landing would undoubtedly be a more popular achievement in the eyes of the American and international public alike. If executed, the plan might have led to a potential militarization of space. An identical project by the Soviet Union (Project E-4) also never came to fruition due to fears of the warhead falling back on Soviet territory, and the potential for an international incident.
The existence of the US project was revealed in 2000 by a former executive at the National Aeronautics and Space Administration (NASA), Leonard Reiffel, who had led the project in 1958. A young Carl Sagan was part of the team responsible for predicting the effects of a nuclear explosion in vacuum and low gravity, and evaluating the scientific value of the project. The relevant documents remained secret for nearly 45 years and, despite Reiffel's revelations, the United States government has never officially acknowledged its involvement in the study.
Background
During the Cold War, the Soviet Union took the lead in the Space Race with the launch of Sputnik 1 on 4 October 1957. Sputnik was the first artificial satellite in orbit around the Earth, and the surprise of its successful launch, compounded by the resounding failure of Project Vanguard to launch an American satellite after two attempts, had been dubbed the "Sputnik crisis" by the media and was the impetus for the beginning of the Space Race. Trying to reclaim lost ground, the United States embarked on a series of new studies and projects, which eventually included the launch of Explorer 1, the creation of the Defense Advanced Research Projects Agency (DARPA), and NASA.
Project
In 1949, the Armour Research Foundation (ARF), based at the Illinois Institute of Technology, began studying the effects of nuclear explosions on the environment. Those studies continued until 1962. In May 1958, ARF began covertly researching the potential consequences of a nuclear explosion on the Moon. The main objective of the program, run under the auspices of the United States Air Force (which had initially proposed it), was to cause a nuclear explosion that would be visible from Earth. It was hoped that such a display would boost the morale of the American people.
At the time of the project's conception, newspapers were reporting a rumor that the Soviet Union was planning to detonate a hydrogen bomb on the Moon. According to press reports in late 1957, an anonymous source had divulged to a United States Secret Service agent that the Soviets planned to commemorate the anniversary of the October Revolution by causing a nuclear explosion on the Moon to coincide with a lunar eclipse on 7 November. News reports of the rumored launch included mention of targeting the dark side of the terminator—Project A119 would also consider this boundary as the target for an explosion. It was also reported that a failure to hit the Moon would likely result in the missile returning to Earth.
A similar idea had been put forward by Edward Teller, the "father of the H-bomb", who, in February 1957, proposed the detonation of nuclear devices both on and some distance from the lunar surface to analyze the effects of the explosion.
Research
A ten-member team, led by Leonard Reiffel, was assembled at the Illinois Institute of Technology in Chicago to study the potential visibility of the explosion, the benefits to science, and the implications for the lunar surface. Among the members of the research team were astronomer Gerard Kuiper and his doctoral student Carl Sagan. Sagan was responsible for the mathematical projection of the expansion of a dust cloud in space around the Moon, an essential element in determining its visibility from Earth.
Scientists initially considered using a hydrogen bomb for the project, but the United States Air Force vetoed that idea due to the weight of such a device, because it would be too heavy to be propelled by the missile which would have been used. It was then decided to use a W25 warhead, a small, lightweight warhead with a relatively low 1.7 kiloton yield. By contrast, the Little Boy bomb dropped on the Japanese city of Hiroshima in 1945 had a yield of 13–18 kilotons. The W25 would be carried by a rocket toward the shadowed side of the Moon where it would detonate on impact. The dust cloud resulting from the explosion would be lit by the Sun and therefore visible from Earth. According to Reiffel, the Air Force's progress in the development of intercontinental ballistic missiles would have made such a launch feasible by 1959.
Cancellation
The project was canceled by the Air Force in January 1959, seemingly out of fear of the risk to the population if anything went wrong with the launch. Another factor, cited by project leader Leonard Reiffel, was the possible problem of nuclear fallout which would affect future lunar research projects and Moon colonization.
Evidence of the Soviet project
Later reports in the 2010s showed that a corresponding Soviet project did indeed exist, although the only official documents on the project found so far began in 1958, not the 1957 date of the "anonymous" source whose rumors initiated the US project. The official Soviet plan also differs from the scenario reported in the press. Started in January 1958, it was part of a series of proposals under the codename "E". Project E-1 entailed plans to reach the Moon, while projects E-2 and E-3 involved sending a probe around the far side of the Moon to take a series of photographs of its surface. The final stage of the project, E-4, was to be a nuclear strike on the Moon, as a display of force. As with the American plan, the E series of projects was canceled while still in its planning stages, due to concerns regarding the safety and reliability of the launch vehicle.
Consequences
The signing of the Partial Nuclear Test Ban Treaty in 1963 and the Outer Space Treaty in 1967 prevented future investigation of the concept of detonating a nuclear device on the Moon. By that time, both the United States and the Soviet Union had performed several high-altitude nuclear explosions, including the American Operation Hardtack I, Operation Argus, Operation Fishbowl, and the Soviet Project K.
By 1969, the United States had succeeded in being the first nation to land a man on the moon with the success of the Apollo 11 Moon mission. In December of that year, Apollo scientist Gary Latham suggested detonating a "smallish" nuclear device on the Moon in order to facilitate research into its geological make-up. The idea was dismissed because it would interfere with plans to measure the Moon's natural background radiation.
The existence of Project A119 remained largely secret until the mid-1990s, when writer Keay Davidson discovered the story while researching the life of Carl Sagan for a biography. Sagan's involvement with the project was apparent from his application for an academic scholarship at the Miller Institute of the University of California, Berkeley, in 1959. In the application, Sagan gave details of the project research, which Davidson felt constituted a violation of national security. The leak consisted of Sagan revealing the titles of two classified papers from the A119 project — the 1958 paper Possible Contribution of Lunar Nuclear Weapons Detonations to the Solution of Some Problems in Planetary Astronomy, and the 1959 paper Radiological Contamination of the Moon by Nuclear Weapons Detonations. A 1958 paper titled Cosmic Radiation and Lunar Radioactivity, credited to I. Filosofo, was also named by Sagan in a 1961 paper written for the United States National Research Council. These were among the eight reports created by the project, all of which were destroyed in 1987.
The resulting biography, Carl Sagan: A Life, was published in 1999. Shortly after, a review published in Nature highlighted the discovery of the leaked information. That led Reiffel to break his anonymity and write a letter to the journal confirming that Sagan's activity had at the time been considered a breach of the confidentiality of the project. Reiffel took the opportunity to reveal details of the studies, and his statements were widely reported in the media. Reiffel's revelation of the project was accompanied by his denunciation of the work carried out, with the scientist noting that he was "horrified that such a gesture to sway public opinion was ever considered".
As a result of the publicity the correspondence created, a freedom of information request was lodged concerning Project A119. It was only then that A Study of Lunar Research Flights – Volume I was made public, over 40 years after its inception. A search for the other volumes of documentation revealed that other reports were destroyed in the 1980s by the Illinois Institute of Technology.
David Lowry, a nuclear historian from the United Kingdom, has called the project's proposals "obscene", adding, "had they gone ahead, we would never have had the romantic image of Neil Armstrong taking 'one giant leap for mankind'".
See also
LCROSS—A NASA project which used a kinetic impact to study the presence of water on the Moon
Footnotes
References
Cold War history of the United States
Deterrence theory during the Cold War
Exploration of the Moon
Military projects of the United States
Nuclear weapons program of the United States | Project A119 | Engineering | 2,010 |
250,714 | https://en.wikipedia.org/wiki/Kenelm%20Digby | Sir Kenelm Digby (11 July 1603 – 11 June 1665) was an English courtier and diplomat. He was also a highly reputed natural philosopher and astrologer, and was known as a leading Roman Catholic intellectual and Blackloist. For his versatility, he is described in John Pointer's Oxoniensis Academia (1749) as the "Magazine of all Arts and Sciences, or (as one stiles him) the Ornament of this Nation".
Early life and education
Digby was born at Gayhurst, Buckinghamshire, England. He was of gentry stock, but his family's adherence to Roman Catholicism coloured his career. His father, Sir Everard, was executed in 1606 for his part in the Gunpowder Plot. Kenelm was sufficiently in favour with James I to be proposed as a member of Edmund Bolton's projected Royal Academy (with George Chapman, Michael Drayton, Ben Jonson, John Selden and Sir Henry Wotton). His mother was Mary, daughter of William Mushlo. His uncle, John Digby, was the first Earl of Bristol.
He went to Gloucester Hall, Oxford, in 1618, where he was taught by Thomas Allen, but left without taking a degree. In time Allen bequeathed to Digby his library, and the latter donated it to the Bodleian.
He spent three years on the Continent between 1620 and 1623, where Marie de Medici fell madly in love with him (as he later recounted). In 1623, in Madrid, Digby was appointed to the household of Prince Charles, who had just arrived there. Returning to England the same year, he was knighted by James I and appointed gentleman of the privy chamber to Charles. He was granted a Cambridge Master of Arts on the King's visit to the university in 1624.
Career
Around 1625, he married Venetia Stanley, whose wooing he cryptically described in his memoirs. He had also become a member of the Privy Council of Charles I of England. As his Roman Catholicism hindered appointment to government office, he converted to Anglicanism.
Digby became a privateer in 1627. Sailing his flagship, the Eagle (later renamed Arabella), he arrived off Gibraltar on 18 January and captured several Spanish and Flemish vessels. From 15 February to 27 March he remained at anchor off Algiers due to illness of his men, and extracted a promise from authorities of better treatment of the English ships: he persuaded the city governors to free 50 English slaves. He seized a Dutch vessel near Majorca, and after other adventures gained a victory over the French and Venetian ships in the harbour of Iskanderun on 11 June. His successes, however, brought upon the English merchants the risk of reprisals, and he was urged to depart. He returned to become a naval administrator and later Governor of Trinity House.
His wife Venetia, a noted beauty, died suddenly in 1633, prompting a famous deathbed portrait by Van Dyck and a eulogy by Ben Jonson. (Digby was later Jonson's literary executor. Jonson's poem about Venetia is now partially lost, because of the loss of the centre sheet of a leaf of papers which held the only copy.) Digby, stricken with grief and the object of enough suspicion for the Crown to order an autopsy (rare at the time) on Venetia's body, secluded himself in Gresham College and attempted to forget his personal woes through scientific experimentation and a return to Catholicism. At Gresham College he held an unofficial post, receiving no payment from the college. Digby, alongside Hungarian chemist Johannes Banfi Hunyades, constructed a laboratory under the lodgings of Gresham Professor of Divinity where the two conducted botanical experiments.
At that period, public servants were often rewarded with patents of monopoly; Digby received the regional monopoly of sealing wax in Wales and the Welsh Borders. This was a guaranteed income; more speculative were the monopolies of trade with the Gulf of Guinea and with Canada. These were doubtless more difficult to police.
Marriage and children
Digby married Venetia Stanley in 1625.
They had six sons:
Kenelm Jr. (1625–1648), killed at the Battle of St Neots, 10 July 1648.
John (1627–?), only son to survive Digby. He married and had two daughters.
Everard (1629–1629), died in infancy.
unnamed twins (1632), miscarriage.
George (–1648), died of illness in school.
In addition, there was a daughter, Margery, born c. 1625, who married Edward Dudley of Clopton and had at least one child. She is never mentioned by Digby in his writings. She may have been the daughter of Edward Sackville, 4th Earl of Dorset, and Venetia Stanley prior to her marriage to Sir Kenelm. The Earl of Dorset settled an annuity on her. There is some controversy and confusion about whether Venetia had affairs with both the third and fourth Earls of Dorset and, consequently, which Earl was the father of Margery.
Catholicism and Civil War
Digby became a Catholic once more in 1635. He went into voluntary exile in Paris, where he spent most of his time until 1660. There he met both Marin Mersenne and Thomas Hobbes.
Returning to support Charles I in his struggle to establish episcopacy in Scotland (the Bishops' Wars), he found himself increasingly unpopular with the growing Puritan party. In the time between 1639 and 1640, he supported Charles I's expedition against the Presbyterian Scots. He left England for France again in 1641. Following an incident in which he killed a French nobleman, Mont le Ros, in a duel, he returned to England via Flanders in 1642, and was jailed by the House of Commons. He was eventually released at the intervention of Anne of Austria, and went back again to France. He remained there during the remainder of the period of the English Civil War. Parliament declared his property in England forfeit.
Queen Henrietta Maria had fled England in 1644, and he became her Chancellor. He was then engaged in unsuccessful attempts to solicit support for the English monarchy from Pope Innocent X. His son, also called Kenelm, was killed at the Battle of St Neots, 1648. Following the establishment of the Protectorate under Oliver Cromwell, who believed in freedom of conscience, Digby was received by the government as a sort of unofficial representative of English Roman Catholics, and was sent in 1655 on a mission to the Papacy to try to reach an understanding. This again proved unsuccessful.
At the Restoration, Digby found himself in favour with the new regime due to his ties with Henrietta Maria, the Queen Mother. However, he was often in trouble with Charles II, and was once even banished from Court. Nonetheless, he was generally highly regarded until his death, a month before his 62nd birthday, from "the stone", likely caused by kidney stones. He was buried in his wife's tomb (which was damaged in the great fire of 1666), in Christ Church, Newgate Street, London.
Character and works
Digby published a work of apologetics in 1638, A Conference with a Lady about choice of a Religion. In it he argued that the Catholic Church, possessing alone the qualifications of universality, unity of doctrine and uninterrupted apostolic succession, is the only true church, and that the intrusion of error into it is impossible.
Digby was regarded as an eccentric by contemporaries, partly because of his effusive personality, and partly because of his interests in scientific matters. Henry Stubbe called him "the very Pliny of our age for lying". He lived in a time when scientific enquiry had not yet settled down in any disciplined way. He spent enormous time and effort on the pursuits of astrology and alchemy, the latter of which he studied in the 1630s with Van Dyck.
Notable among his pursuits was the concept of the powder of sympathy. This was a kind of sympathetic magic; one manufactured a powder using appropriate astrological techniques, and daubed it, not on the injured part, but on whatever had caused the injury. His book on this mythical salve went through 29 editions. Synchronising the effects of the powder, which allegedly caused a noticeable effect on the patient when applied, was actually suggested in 1687 as a means of solving the longitude problem.
In 1644 he published together two major philosophical treatises, The Nature of Bodies and On the Immortality of Reasonable Souls. The latter was translated into Latin in 1661 by John Leyburn. These Two Treatises were his major natural-philosophical works, and showed a combination of Aristotelianism and atomism.
He was in touch with the leading intellectuals of the time, and was highly regarded by them; he was a founding member of the Royal Society and a member of its governing council from 1662 to 1663. His correspondence with Fermat contains the only extant mathematical proof by Fermat, a demonstration, using his method of descent, that the area of a Pythagorean triangle cannot be a square. His Discourse Concerning the Vegetation of Plants (1661) proved controversial among the Royal Society's members. It was published in French in 1667. Digby is credited with being the first person to note the importance of "vital air", or oxygen, to the sustenance of plants. He also came up with a crude theory of photosynthesis.
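The result Fermat demonstrated by his method of descent, namely that no right triangle with integer sides has a square area, can be stated in modern notation as follows. This is a standard textbook rendering of the theorem, not notation taken from the Digby correspondence itself:

```latex
% Fermat's right triangle theorem (modern statement):
% no Pythagorean triangle has an area that is a perfect square.
\nexists \, a, b, c, d \in \mathbb{Z}_{>0} :\quad
  a^{2} + b^{2} = c^{2} \quad \text{and} \quad \tfrac{1}{2}\,ab = d^{2}
```

A standard consequence of this theorem is the case n = 4 of Fermat's Last Theorem.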
Digby is known for the publication of a cookbook, The Closet of the Eminently Learned Sir Kenelme Digbie Kt. Opened, but it was actually published by a close servant, from his notes, in 1669, several years after his death. It is currently considered an excellent source of period recipes, particularly for beverages such as mead. He tried out many of his recipes on his wife, Venetia; one of them was capons fed on the flesh of vipers.
Digby is also considered the father of the modern wine bottle. During the 1630s, Digby owned a glassworks at Newnham-on-Severn and manufactured glass onions, which were globular in shape with a high, tapered neck, a collar, and a punt. His manufacturing technique involved a coal furnace, made hotter than usual by the inclusion of a wind tunnel, and a higher ratio of sand to potash and lime than was customary. Digby's technique produced wine bottles which were stronger and more stable than most of their day, and which due to their translucent green or brown color protected the contents from light. During his exile and prison term, others claimed his technique as their own, but in 1662 Parliament recognised his claim to the invention as valid.
In fiction
Digby and his wife are the subjects of the 2014 literary novel Viper Wine by Hermione Eyre.
He is mentioned in Nathaniel Hawthorne's novel The Scarlet Letter. In the chapter titled "The Leech", the narrator describes the antagonist, Chillingworth, as having an impressive knowledge of medicine, remarking that Chillingworth claims to have been a colleague of Digby "and other famous men" in the study of natural philosophy. Digby's "scientific attainments" are called "hardly less than supernatural".
Digby also appears in Umberto Eco's novel The Island of the Day Before as "Mr. d'Igby". He explains the principle of his sympathetic powder (unguentum armarium) to the main character.
See also
Digby Mythographer
The Closet of the Eminently Learned Sir Kenelme Digbie Kt. Opened – a 1669 cookery book supposedly based on Digby's writings
References
Further reading
Bligh, E. W. Sir Kenelm Digby and his Venetia, London: S. Low, Marston, 1932
Fulton, John Farquhar. Sir Kenelm Digby: Writer, Bibliophile and Protagonist of William Harvey, New York: Oliver, 1937
Longueville, Thomas. The Life of Sir Kenelm Digby Longmans, Green, and Co., 1896
Peterson, Robert T. Sir Kenelm Digby, the Ornament of England, 1603–1665, Cambridge, Mass.: Harvard University Press, 1956.
Gabrieli, V. Sir Kenelm Digby. Un inglese italianato nell' etá della contrariforma. Roma, 1957
L. Georgescu/H. Adriaenssen (eds.), The philosophy of Kenelm Digby (1602-1665), Heidelberg 2022
External links
Digby's Observations upon Religio Medici
The Extraordinary Streetfight of Kenelm Digby, The Association of Renaissance Martial Arts
Mortimer Rare Book Room, Smith College
A short extract from one of Digby's books on alchemy
Medicina experimentalis Digbaeana, das ist: Außerlesene und bewährte Artzeney-Mittel : auß weiland Herrn Grafen Digby ... Manuscriptis zusammen gebracht; übers. und an Tag gegeben . Bd. 1–2 . Zubrodt, Franckfurt Nunmehro ... übersehen und ... verm. 1676 Digital edition by the University and State Library Düsseldorf
SIR KENELM DIGBY 1603-1665, Resources and References by John Sutton
1603 births
1665 deaths
17th-century alchemists
17th-century astrologers
17th-century English philosophers
Alumni of Gloucester Hall, Oxford
Converts to Anglicanism from Roman Catholicism
Converts to Roman Catholicism from Anglicanism
English alchemists
English astrologers
English duellists
English knights
English Roman Catholics
Knights Bachelor
Original fellows of the Royal Society
People from the Borough of Milton Keynes | Kenelm Digby | Chemistry | 2,826 |
185,263 | https://en.wikipedia.org/wiki/Haloperidol | Haloperidol, sold under the brand name Haldol among others, is a typical antipsychotic medication. Haloperidol is used in the treatment of schizophrenia, tics in Tourette syndrome, mania in bipolar disorder, delirium, agitation, acute psychosis, and hallucinations from alcohol withdrawal. It may be used by mouth or injection into a muscle or a vein. Haloperidol typically works within 30 to 60 minutes. A long-acting formulation may be used as an injection every four weeks for people with schizophrenia or related illnesses, who either forget or refuse to take the medication by mouth.
Haloperidol may result in a movement disorder known as tardive dyskinesia, which may be permanent. Neuroleptic malignant syndrome and QT interval prolongation may occur, the latter particularly with IV administration. In older people with psychosis due to dementia it results in an increased risk of death. When taken during pregnancy it may result in problems in the infant. It should not be used by people with Parkinson's disease.
Haloperidol was discovered in 1958 by the team of Paul Janssen, prepared as part of a structure-activity relationship investigation into analogs of pethidine (meperidine). It is on the World Health Organization's List of Essential Medicines. It is the most commonly used typical antipsychotic. In 2020, it was the 303rd most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Medical uses
Haloperidol is used in the control of the symptoms of:
Acute psychosis, such as drug-induced psychosis caused by ketamine and phencyclidine, and psychosis associated with high fever or metabolic disease. Some evidence has found haloperidol to worsen psychosis due to psilocybin.
Adjunctive treatment of alcohol and opioid withdrawal
Agitation and confusion associated with cerebral sclerosis
Alcohol-induced psychosis
Hallucinations in alcohol withdrawal
Hyperactive delirium (to control the agitation component of delirium)
Hyperactivity, aggression
Otherwise uncontrollable, severe behavioral disorders in children and adolescents
Schizophrenia
Therapeutic trial in personality disorders, such as borderline personality disorder
Treatment of intractable hiccups
Treatment of neurological disorders, including tic disorders such as Tourette syndrome, and chorea
Treatment of severe nausea and emesis in postoperative and palliative care, especially for palliating adverse effects of radiation therapy and chemotherapy in oncology. Also used as a first-line antiemetic for acute cannabinoid hyperemesis syndrome.
As chemical restraint in acute care psychiatry, mainly for violent and self-harming patients (a controversial use, though one commonly depicted in film).
Haloperidol was considered indispensable for treating psychiatric emergency situations. However, the newer atypical drugs have gained a greater role in a number of situations, as outlined in a series of consensus reviews published between 2001 and 2005.
In a 2013 comparison of 15 antipsychotics in schizophrenia, haloperidol demonstrated standard effectiveness. It was 13–16% more effective than ziprasidone, chlorpromazine, and asenapine, approximately as effective as quetiapine and aripiprazole, and 10% less effective than paliperidone. A 2013 systematic review also compared haloperidol to placebo in schizophrenia.
In contrast to certain other antipsychotics like risperidone, haloperidol is ineffective as a hallucinogen antidote or "trip killer" in blocking the effects of serotonergic psychedelics like psilocybin and lysergic acid diethylamide (LSD).
Pregnancy and lactation
Data from animal experiments indicate haloperidol is not teratogenic, but is embryotoxic in high doses. In humans, no controlled studies exist. Reports in pregnant women revealed possible damage to the fetus, although most of the women were exposed to multiple drugs during pregnancy. In addition, reports indicate neonates exposed to antipsychotic drugs are at risk for extrapyramidal and/or withdrawal symptoms following delivery, such as agitation, hypertonia, hypotonia, tremor, somnolence, respiratory distress, and feeding disorder. Following accepted general principles, haloperidol should be given during pregnancy only if the benefit to the mother clearly outweighs the potential fetal risk.
Haloperidol is excreted in breast milk. A few studies have examined the impact of haloperidol exposure on breastfed infants and in most cases, there were no adverse effects on infant growth and development.
Other considerations
During long-term treatment of chronic psychiatric disorders, the daily dose should be reduced to the lowest level needed for maintenance of remission. Sometimes, it may be indicated to terminate haloperidol treatment gradually. In addition, during long-term use, routine monitoring including measurement of BMI, blood pressure, fasting blood sugar, and lipids, is recommended due to the risk of side effects.
Other forms of therapy (psychotherapy, occupational therapy/ergotherapy, or social rehabilitation) should be instituted properly.
PET imaging studies have suggested low doses are preferable. Clinical response was associated with at least 65% occupancy of D2 receptors, while occupancy greater than 72% was likely to cause hyperprolactinaemia and occupancy over 78% was associated with extrapyramidal side effects. Doses of haloperidol greater than 5 mg increased the risk of side effects without improving efficacy, and in first-episode psychosis patients responded even to doses under 2 mg. For maintenance treatment of schizophrenia, an international consensus conference recommended reducing the dosage by about 20% every 6 months until a minimal maintenance dose is established.
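A minimal sketch encoding these reported occupancy bands (the thresholds are the ones cited above; the function itself is hypothetical and purely illustrative, not a clinical tool):

```python
# Hypothetical classifier for the D2-occupancy bands reported above:
# >= 65% occupancy was associated with clinical response, > 72% was likely
# to cause hyperprolactinaemia, and > 78% was associated with
# extrapyramidal side effects.

def classify_d2_occupancy(occupancy_pct: float) -> str:
    """Map a D2 receptor occupancy (in percent) to the reported outcome band."""
    if occupancy_pct > 78:
        return "extrapyramidal side effects likely"
    if occupancy_pct > 72:
        return "hyperprolactinaemia likely"
    if occupancy_pct >= 65:
        return "associated with clinical response"
    return "below the reported response threshold"

print(classify_d2_occupancy(70))  # -> associated with clinical response
```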
Depot forms are also available; these are injected deeply intramuscularly at regular intervals. The depot forms are not suitable for initial treatment, but are suitable for patients who have demonstrated inconsistency with oral dosages.
The decanoate ester of haloperidol (haloperidol decanoate, trade names Haldol decanoate, Halomonth, Neoperidole) has a much longer duration of action, so is often used in people known to be noncompliant with oral medication. A dose is given by intramuscular injection once every two to four weeks. The IUPAC name of haloperidol decanoate is [4-(4-chlorophenyl)-1-[4-(4-fluorophenyl)-4-oxobutyl]piperidin-4-yl] decanoate.
Topical formulations of haloperidol should not be used as treatment for nausea because research does not indicate this therapy is more effective than alternatives.
Adverse effects
Sources for the following lists of adverse effects:
As haloperidol is a high-potency typical antipsychotic, it tends to produce significant extrapyramidal side effects. According to a 2013 meta-analysis of the comparative efficacy and tolerability of 15 antipsychotic drugs it was the most prone of the 15 for causing extrapyramidal side effects.
With more than 6 months of use, 14 percent of users gain weight. Haloperidol may be neurotoxic.
Prolonged use of the drug can lead to mental dependence.
Common (>1% incidence)
Extrapyramidal side effects including:
Akathisia (motor restlessness)
Dystonia (continuous spasms and muscle contractions)
Muscle rigidity
Parkinsonism (characteristic symptoms such as rigidity)
Hypotension
Anticholinergic side effects (less common than with lower-potency typical antipsychotics such as chlorpromazine and thioridazine), including:
Blurred vision
Constipation
Dry mouth
Somnolence (not a particularly prominent side effect, as supported by the results of the aforementioned meta-analysis)
Unknown frequency
Anemia
Headache
Increased respiratory rate
Orthostatic hypotension
Prolonged QT interval
Visual disturbances
Rare (<1% incidence)
Acute hepatic failure
Agitation
Agranulocytosis
Anaphylactic reaction
Anorexia
Bronchospasm
Cataracts
Cholestasis
Confusional state
Depression
Dermatitis exfoliative
Dyspnea
Edema
Extrasystoles
Face edema
Gynecomastia
Hepatitis
Hyperglycemia
Hypersensitivity
Hyperthermia
Hypoglycemia
Hyponatremia
Hypothermia
Increased sweating
Injection site abscess
Insomnia
Itchiness
Jaundice
Laryngeal edema
Laryngospasm
Leukocytoclastic vasculitis
Leukopenia
Liver function test abnormal
Nausea
Neuroleptic malignant syndrome
Neutropenia
Pancytopenia
Photosensitivity reaction
Priapism
Psychotic disorder
Pulmonary embolism
Rash
Retinopathy
Seizure
Sudden death
Tardive dyskinesia
Thrombocytopenia
Torsades de pointes
Urinary retention
Urticaria
Ventricular fibrillation
Ventricular tachycardia
Vomiting
Contraindications
Pre-existing coma, acute stroke
Severe intoxication with alcohol or other central depressant drugs
Known allergy against haloperidol or other butyrophenones or other drug ingredients
Known heart disease, as the combination can predispose to cardiac arrest
Special cautions
A multiple-year study suggested that this drug and other neuroleptic antipsychotic drugs commonly given to people with Alzheimer's disease and mild behavioral problems often make their condition worse, and that withdrawal of the drug was even beneficial for some cognitive and functional measures.
Elderly patients with dementia-related psychosis: analysis of 17 trials showed the risk of death in this group of patients was 1.6 to 1.7 times that of placebo-treated patients. Most of the causes of death were either cardiovascular or infectious in nature. It is not clear to what extent this observation is attributable to antipsychotic drugs rather than to the characteristics of the patients. The drug bears a boxed warning about this risk.
Impaired liver function, as haloperidol is metabolized and eliminated mainly by the liver
In patients with hyperthyroidism, the action of haloperidol is intensified and side effects are more likely.
IV injections: risk of hypotension or orthostatic collapse
Patients at special risk for the development of QT prolongation (hypokalemia, concomitant use of other drugs causing QT prolongation)
Patients with a history of leukopenia: a complete blood count should be monitored frequently during the first few months of therapy and discontinuation of the drug should be considered at the first sign of a clinically significant decline in white blood cells.
Pre-existing Parkinson's disease or dementia with Lewy bodies
Interactions
Amiodarone: QTc interval prolongation (potentially dangerous change in heart rhythm).
Amphetamine and methylphenidate: counteracts increased action of norepinephrine and dopamine in patients with narcolepsy or ADD/ADHD
Epinephrine: action antagonized, paradoxical decrease in blood pressure may result
Guanethidine: antihypertensive action antagonized
Levodopa: decreased action of levodopa
Lithium: rare cases of the following symptoms have been noted: encephalopathy, early and late extrapyramidal side effects, other neurologic symptoms, and coma.
Methyldopa: increased risk of extrapyramidal side effects and other unwanted central effects
Other central depressants (alcohol, tranquilizers, narcotics): actions and side effects of these drugs (sedation, respiratory depression) are increased. In particular, the doses of concomitantly used opioids for chronic pain can be reduced by 50%.
Other drugs metabolized by the CYP3A4 enzyme system: inducers such as carbamazepine, phenobarbital, and rifampicin decrease plasma levels and inhibitors such as quinidine, buspirone, and fluoxetine increase plasma levels
Tricyclic antidepressants: metabolism and elimination of tricyclics significantly decreased, increased toxicity noted (anticholinergic and cardiovascular side effects, lowering of seizure threshold)
Potential neurotoxicity
Several lines of evidence suggest that haloperidol exhibits neurotoxicity. Some studies report an association between antipsychotic medications, especially first-generation agents, and a decline in gray matter volume. Haloperidol irreversibly blocks the sigma-1 (σ1) receptor. It may exert deleterious effects on the dorsolateral prefrontal cortex (DLPFC) by attenuating brain-derived neurotrophic factor (BDNF) transcription and expression, associated with an increase in the long non-coding RNA BDNF-AS in the DLPFC. Besides the preceding mechanisms, haloperidol is metabolized into HPP+, a monoaminergic neurotoxin related to MPTP, which may be involved in the extrapyramidal symptoms that develop with long-term haloperidol therapy.
Discontinuation
The British National Formulary recommends a gradual withdrawal when discontinuing antipsychotics to avoid acute withdrawal syndrome or rapid relapse. Symptoms of withdrawal commonly include nausea, vomiting, and loss of appetite. Other symptoms may include restlessness, increased sweating, and trouble sleeping. Less commonly there may be a feeling of the world spinning, numbness, or muscle pains. Symptoms generally resolve after a short period of time.
There is tentative evidence that discontinuation of antipsychotics can result in psychosis. It may also result in reoccurrence of the condition that is being treated. Rarely tardive dyskinesia can occur when the medication is stopped.
Overdose
Symptoms
Symptoms are usually due to side effects. Most often encountered are:
Anticholinergic side effects (dry mouth, constipation, paralytic ileus, difficulties in urinating, decreased perspiration)
Coma in severe cases, accompanied by respiratory depression and massive hypotension, shock
Hypotension or hypertension
Rarely, serious ventricular arrhythmia (torsades de pointes), with or without prolonged QT-time
Sedation
Severe extrapyramidal side effects with muscle rigidity and tremors, akathisia, etc.
Treatment
Treatment is mostly symptomatic and involves intensive care with stabilization of vital functions. In early detected cases of oral overdose, induction of emesis, gastric lavage, and the use of activated charcoal can be tried. In the case of a severe overdose, antidotes such as bromocriptine or ropinirole may be used to treat the extrapyramidal effects caused by haloperidol, acting as dopamine receptor agonists. ECG and vital signs should be monitored especially for QT prolongation and severe arrhythmias should be treated with antiarrhythmic measures.
Prognosis
An overdose of haloperidol can be fatal, but in general the prognosis after overdose is good, provided the person has survived the initial phase.
Pharmacology
Haloperidol is a typical butyrophenone-type antipsychotic that exhibits high-affinity dopamine D2 receptor antagonism and slow receptor dissociation kinetics. It has effects similar to the phenothiazines. The drug binds preferentially to D2 and α1 receptors at low dose (ED50 = 0.13 and 0.42 mg/kg, respectively), and to 5-HT2 receptors at a higher dose (ED50 = 2.6 mg/kg). Given that antagonism of D2 receptors is more beneficial for the positive symptoms of schizophrenia and antagonism of 5-HT2 receptors for the negative symptoms, this characteristic underlies haloperidol's greater effect on delusions, hallucinations and other manifestations of psychosis. Haloperidol's negligible affinity for histamine H1 receptors and muscarinic M1 acetylcholine receptors yields an antipsychotic with a lower incidence of sedation, weight gain, and orthostatic hypotension, though with higher rates of treatment-emergent extrapyramidal symptoms.
Pharmacokinetics
By mouth
The bioavailability of oral haloperidol ranges from 60 to 70%. However, there is a wide variance in reported mean Tmax and T1/2 in different studies, ranging from 1.7 to 6.1 hours and 14.5 to 36.7 hours respectively.
Intramuscular injections
The drug is well and rapidly absorbed with a high bioavailability when injected intramuscularly. The Tmax is 20 minutes in healthy individuals and 33.8 minutes in patients with schizophrenia. The mean T1/2 is 20.7 hours. The decanoate injectable formulation is for intramuscular administration only and is not intended to be used intravenously. The plasma concentrations of haloperidol decanoate reach a peak at about six days after the injection, falling thereafter, with an approximate half-life of three weeks.
Intravenous injections
Bioavailability is 100% with intravenous (IV) injection, and a very rapid onset of action is seen within seconds. The T1/2 is 14.1 to 26.2 hours. The apparent volume of distribution is between 9.5 and 21.7 L/kg. The duration of action is four to six hours.
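As an illustrative sketch of how such half-lives translate into plasma decay, assuming simple first-order (one-compartment) elimination after an IV bolus — a textbook model, not something specific to haloperidol's cited pharmacokinetic studies:

```python
import math

# First-order elimination after an IV bolus: C(t) = C0 * exp(-k * t),
# with k = ln(2) / T1/2. The half-life range is the IV range quoted above.

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the initial plasma concentration remaining after t hours."""
    k = math.log(2) / half_life_hours
    return math.exp(-k * t_hours)

# With T1/2 between 14.1 and 26.2 hours, roughly 31-53% of the initial
# plasma concentration remains 24 hours after an IV dose.
for t_half in (14.1, 26.2):
    print(f"T1/2 = {t_half} h: {fraction_remaining(24, t_half):.0%} remains after 24 h")
```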
Therapeutic concentrations
Plasma levels of five to 15 micrograms per liter are typically seen for therapeutic response (Ulrich S, et al. Clin Pharmacokinet. 1998). The determination of plasma levels is rarely used to calculate dose adjustments but can be useful to check compliance.
The concentration of haloperidol in brain tissue is about 20-fold higher compared to blood levels. It is slowly eliminated from brain tissue, which may explain the slow disappearance of side effects when the medication is stopped.
Distribution and metabolism
Haloperidol is heavily protein bound in human plasma, with a free fraction of only 7.5 to 11.6%. It is also extensively metabolized in the liver with only about 1% of the administered dose excreted unchanged in the urine. The greatest proportion of the hepatic clearance is by glucuronidation, followed by reduction and CYP-mediated oxidation, primarily by CYP3A4. Haloperidol is metabolized into HPP+, a monoaminergic neurotoxin related to MPTP, by CYP3A enzymes.
Chemistry
Haloperidol is a crystalline material with a melting temperature of 150 °C. This drug has very low solubility in water (1.4 mg/100 mL), but it is soluble in chloroform, benzene, methanol, and acetone. It is also soluble in 0.1 M hydrochloric acid (3 mg/mL) with heating.
History
Haloperidol was discovered by Paul Janssen. It was developed in 1958 at the Belgian company Janssen Pharmaceutica and submitted to its first clinical trials in Belgium later that year.
Haloperidol was approved by the U.S. Food and Drug Administration (FDA) on 12 April 1967; it was later marketed in the U.S. and other countries under the brand name Haldol by McNeil Laboratories.
Society and culture
Cost
Haloperidol is relatively inexpensive, being up to 100-fold less expensive than newer antipsychotics.
Names
Haloperidol is the INN, BAN, USAN, AAN approved name.
It is sold under the tradenames Aloperidin, Bioperidolo, Brotopon, Dozic, Duraperidol (Germany), Einalon S, Eukystol, Haldol (common tradename in the US and UK), Halol, Halosten, Keselan, Linton, Peluces, Serenace, Norodol (Turkey) and Sigaperidol.
Research
Haloperidol was under investigation for the treatment of depression. It was employed as a short-term low-dose dopamine receptor antagonist to upregulate dopamine receptors and produce receptor supersensitivity followed by drug withdrawal as a means of treating depression.
Veterinary use
Haloperidol is also used on many different kinds of animals for nonselective tranquilization and diminishing behavioral arousal, in veterinary and other settings including captivity management.
References
4-Chlorophenyl compounds
4-Fluorophenyl compounds
4-Phenylpiperidines
Antiemetics
Belgian inventions
Butyrophenone antipsychotics
Chemical substances for emergency medicine
CYP2D6 inducers
CYP2D6 inhibitors
Drugs developed by Johnson & Johnson
HERG blocker
Janssen Pharmaceutica
Monoaminergic neurotoxins
NMDA receptor antagonists
Prolactin releasers
Suspected embryotoxicants
Suspected fetotoxicants
Tertiary alcohols
Typical antipsychotics
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Haloperidol | Chemistry | 4,446 |
19,138,501 | https://en.wikipedia.org/wiki/Hydnellum%20caeruleum | Hydnellum caeruleum, commonly known as the blue-gray hydnellum, blue-green hydnellum, blue spine, blue tooth, or bluish tooth, is an inedible fungus found in North America, Europe, and temperate areas of Asia.
The young caps have shades of blue, gray and brown, with light blue near the margin. The stem is orange to brown. The flesh is blue to black in the cap, and red to brownish in the stem. The blue hues tend to fade with age.
Mature specimens closely resemble H. aurantiacum, which differs in color. H. suaveolens is also similar, with mostly blue flesh and an odour of anise.
Taxonomy and phylogeny
Hydnellum caeruleum (Hornem.) P. Karst. was first described by Jens Wilken Hornemann in 1825, with the name later sanctioned by the Swedish mycologist Elias Magnus Fries, and was transferred to the genus Hydnellum in 1879 by the Finnish mycologist Petter Adolf Karsten.
Some past synonyms for the species include Hydnum cyaneotinctum (found in Orris Island, ME, 1903) and Hydnellum/Hydnum/Sarcodon alachuanum (found in Alachua, FL, 1940).
Morphology
H. caeruleum belongs to a historic group of “stipitate hydnoid fungi” or “stalked tooth fungi” due to its morphological appearance, with a cap, woody stipe, and distinct toothed hymenium. The fruiting body can occur singly or fuse with other bodies (called “confluence”) into gregarious or concrescent sporocarps. Due to this growth pattern, twigs and leaves can sometimes become engulfed in the flesh.
The species has a whitish-blue cap, or pileus, which can be convex to planar in shape and is around 8 cm broad. The cap is tomentose, meaning that small dense hairs can make it feel velvety to the touch. The fungus is zonate, with concentric bands of color sometimes apparent on its cap, ranging from white, to grayish violet, to pastel blue zones. When bruised, flesh stains dark inky blue. Similarly, the inner flesh of the fungus appears blue when cut and dulls into a dark gray-blue when dry.
Pale white or gray spines (3–6 mm in length) cover the decurrent toothed hymenium on the stipe and underside of the cap. Older samples may have teeth that have turned brown to dark brown. The stipe is central and terete in shape, meaning that it is cylindrical and tapers in width. The base of the stipe is more bulbous and sometimes has an orange coloration.
Climate effects can impact the coloration and defining characteristics of this fungus. During periods of high humidity, H. caeruleum can develop yellow liquid drops on actively growing pilei. Additionally, cool late-September temperatures can lead the fungus to develop deeper blue colors during this time.
Microscopic features
H. caeruleum produces brownish basidiospores which are subglobose, or not quite spherical. The basidiospores are 5–6.2 × 4.5–5.5 μm in size with tuberculate ornamentation. The small spore size in this genus means that the tubercles can only be examined under electron microscopy. H. caeruleum spores investigated by Grand & Van Dyke showed a high tendency toward dichotomously branched tubercles, that is, two tubercles arising from the same area.
The species has a monomitic hyphal system. Simple-septate generative hyphae make up the tissues and are rarely found with clamp connections. Basidia are clavate with four sterigmata.
Similar species
H. caeruleum can be discerned from the macroscopically similar Hydnellum suaveolens species based on microscopic features, though their odors are a useful way to identify the species in the field. H. caeruleum has a farinaceous, starchy odor, while H. suaveolens has a minty, fragrant odor.
Ecology
Hydnellum caeruleum is mycorrhizal and often found in the humus beneath conifer trees.
H. caeruleum is an ectomycorrhizal fungus native to temperate regions of Asia, Europe, and North America. The species is commonly found in pine and spruce ecosystems due to its mycorrhizal relationships with coniferous trees. In these relationships, the fungus receives nutrients from the tree and in turn assists the plant in water and mineral uptake. The fungus may therefore be of importance to forestry.
A study from 2012 suggests that stipitate hydnoid fungi such as H. caeruleum can remain in soils 1–4 years after their sporocarps are gone, owing to persistent below-ground mycelium. This persistence of vegetative mycelium was more important for the survival of the species than the production of sporocarps for sexual reproduction, suggesting that sporocarps may only form under specific advantageous conditions.
A study on stipitate hydnoid fungi in Scottish coniferous forests, which focused on the conservation status and distribution of fungi in these habitats, found H. caeruleum making an interesting ectomycorrhizal association. H. caeruleum associated with a heather-family shrub (Arctostaphylos uva-ursi) at a treeless site, indicating that the fungus may be able to switch hosts from coniferous trees to shrub species. The study suggested that this finding is important for fungal and tree conservation, as H. caeruleum could survive even after a deforestation event and assist in the eventual reforestation of its habitat.
Relevance to humans
While the fungus is not edible for humans, H. caeruleum's unique coloration makes it a prized species for mushroom dyers. The fungus can create blue, green, and brown dyes depending on the mordant that is used. Additionally, several bioactive compounds have been isolated from the species, including the p-terphenyl compound aurantiacin, six p-terphenyl derivatives named thelephantins I–N, and the known compound dihydroaurantiacin dibenzoate.
References
External links
Index Fungorum synonyms
Tom Volk's Fungus of the Month pictures and more information
healing-Mushrooms.net description, bioactive compounds and medicinal properties
Inedible fungi
caeruleum
Fungi of Europe
Fungus species | Hydnellum caeruleum | Biology | 1,394 |
5,353,485 | https://en.wikipedia.org/wiki/Thallium%20azide | Thallium azide, TlN3, is a yellow-brown crystalline solid poorly soluble in water. Although it is not nearly as sensitive to shock or friction as lead azide, it can easily be detonated by a flame or spark. It can be stored safely dry in a closed non-metallic container.
Preparation and structure
Thallium azide can be prepared by treating an aqueous solution of thallium(I) sulfate with sodium azide. Thallium azide precipitates; the yield can be maximized by cooling.
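The preparation can be summarized by the following balanced metathesis equation (an illustrative sketch consistent with the description above, assuming thallium(I) sulfate as Tl2SO4):

Tl2SO4 + 2 NaN3 → 2 TlN3 (precipitate) + Na2SO4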
TlN3, KN3, RbN3, and CsN3 adopt the same structure. The azide is bound to eight cations in an eclipsed orientation. The cations are bound to eight terminal N centers.
Safety
All thallium compounds are poisonous and should be handled with care. Azide salts are also roughly as toxic as their corresponding cyanide salts.
References
Thallium(I) compounds
Azides | Thallium azide | Chemistry | 188 |
42,045,837 | https://en.wikipedia.org/wiki/Le%20Bel%E2%80%93Van%20%27t%20Hoff%20rule | In organic chemistry, the Le Bel–Van 't Hoff rule states that the number of stereoisomers of an organic compound containing no internal planes of symmetry is 2^n, where n represents the number of asymmetric carbon atoms. French chemist Joseph Achille Le Bel and Dutch chemist Jacobus Henricus van 't Hoff both announced this hypothesis in 1874, proposing that it accounted for all molecular asymmetry known at the time.
As an example, four of the carbon atoms of the aldohexose class of molecules are asymmetric, so the Le Bel–Van 't Hoff rule gives a calculation of 2^4 = 16 stereoisomers. This is indeed the case: these chemicals are two enantiomers each of eight different diastereomers: allose, altrose, glucose, mannose, gulose, idose, galactose, and talose.
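A minimal sketch of the rule's arithmetic (the function name is illustrative, not from any standard library):

```python
# The Le Bel-Van 't Hoff rule: a compound with n asymmetric carbon atoms
# and no internal plane of symmetry has at most 2**n stereoisomers.

def max_stereoisomers(n_asymmetric_carbons: int) -> int:
    """Upper bound on stereoisomer count per the Le Bel-Van 't Hoff rule."""
    return 2 ** n_asymmetric_carbons

# Aldohexoses have 4 asymmetric carbons: 2**4 = 16 stereoisomers,
# i.e. two enantiomers of each of the 8 diastereomers listed above.
assert max_stereoisomers(4) == 16 == 8 * 2
```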
References
Stereochemistry
Jacobus Henricus van 't Hoff | Le Bel–Van 't Hoff rule | Physics,Chemistry | 195 |
8,577,896 | https://en.wikipedia.org/wiki/Methane%20reformer | A methane reformer is a device based on steam reforming, autothermal reforming or partial oxidation and is a type of chemical synthesis which can produce pure hydrogen gas from methane using a catalyst. There are multiple types of reformers in development but the most common in industry are autothermal reforming (ATR) and steam methane reforming (SMR). Most methods work by exposing methane to a catalyst (usually nickel) at high temperature and pressure.
Steam reforming
Steam reforming (SR), sometimes referred to as steam methane reforming (SMR), uses an external source of hot gas to heat tubes in which a catalytic reaction takes place that converts steam and lighter hydrocarbons such as methane, biogas or refinery feedstock into hydrogen and carbon monoxide (syngas). The syngas then reacts further in the reactor via the water-gas shift reaction to give more hydrogen and carbon dioxide. The carbon oxides are removed before use by means of pressure swing adsorption (PSA) with molecular sieves for the final purification. The PSA works by adsorbing impurities from the syngas stream to leave pure hydrogen gas.
CH4 + H2O (steam) → CO + 3 H2 Endothermic
CO + H2O (steam) → CO2 + H2 Exothermic
Autothermal reforming
Autothermal reforming (ATR) uses oxygen and carbon dioxide or steam in a reaction with methane to form syngas. The reaction takes place in a single chamber where the methane is partially oxidized. The reaction is exothermic due to the oxidation.
When the ATR uses carbon dioxide, the H2:CO ratio produced is 1:1; when the ATR uses steam, the H2:CO ratio produced is 2.5:1.
The reactions can be described in the following equations, using CO2:
2 CH4 + O2 + CO2 → 3 H2 + 3 CO + H2O
And using steam:
4 CH4 + O2 + 2 H2O → 10 H2 + 4 CO
The outlet temperature of the syngas is between 950 and 1100 °C and outlet pressure can be as high as 100 bar.
The main difference between SMR and ATR is that SMR uses oxygen only via air combustion as a heat source to create steam, while ATR combusts oxygen directly. The advantage of ATR is that the H2:CO ratio can be varied; this is particularly useful for producing certain second-generation biofuels, such as DME, which requires a 1:1 H2:CO ratio.
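The quoted ratios follow directly from the product coefficients in the balanced equations above; a short sketch of that arithmetic (the helper function is illustrative only):

```python
# Verify the H2:CO ratios quoted above from the ATR product coefficients.

def h2_co_ratio(moles_h2: float, moles_co: float) -> float:
    """H2:CO molar ratio implied by a balanced equation's product side."""
    return moles_h2 / moles_co

# CO2-fed ATR: 2 CH4 + O2 + CO2 -> 3 H2 + 3 CO + H2O
assert h2_co_ratio(3, 3) == 1.0   # 1:1

# Steam-fed ATR: 4 CH4 + O2 + 2 H2O -> 10 H2 + 4 CO
assert h2_co_ratio(10, 4) == 2.5  # 2.5:1
```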
Partial oxidation
Partial oxidation (POX) is a type of chemical reaction. It occurs when a substoichiometric fuel-air mixture is partially combusted in a reformer, creating a hydrogen-rich syngas which can then be put to further use.
Advantages and disadvantages
The capital cost of steam reforming plants is prohibitive for small to medium size applications because the technology does not scale down well. Conventional steam reforming plants operate at pressures between 200 and 600 psi with outlet temperatures in the range of 815 to 925 °C. However, analyses have shown that even though it is more costly to construct, a well-designed SMR can produce hydrogen more cost-effectively than an ATR for smaller applications.
See also
Catalytic reforming
Industrial gas
Reformed methanol fuel cell
PROX
Partial oxidation
Chemical looping reforming and gasification
References
External links
Harvest Energy Technology, Inc. an Air Products and Chemicals Incorporated company
Hydrogen production
Fuel cells
Chemical equipment
Industrial gases | Methane reformer | Chemistry,Engineering | 703 |
48,896,644 | https://en.wikipedia.org/wiki/Pipethiaden | Pipethiaden is a benzothiepin-based drug candidate that was at one time studied as a potential preventive to reduce the frequency of recurrent migraine headaches. It also has some antihistamine activity and acts mainly at the serotonin 5-HT2A and 5-HT2C receptors.
References
Abandoned drugs
5-HT2 antagonists
H1 receptor antagonists
Piperidines | Pipethiaden | Chemistry | 85 |
18,500,711 | https://en.wikipedia.org/wiki/Forensic%20developmental%20psychology | Forensic developmental psychology is a field of psychology that focuses on "children's actions and reactions in a forensic context" and "children's reports that they were victims or witnesses of a crime". Bruck and Poole (2002) first coined the term "forensic developmental psychology". Although forensic developmental psychology specifically focuses on a child's reliability, credibility, and competency in the courtroom setting, it also includes topics such as autobiographical memory, memory distortion, eyewitness identification, narrative construction, personality, and attachment.
Distinction between forensic, developmental, and forensic developmental psychology
Child testimony process
Similar to adults, children who testify must undergo a testimony process in order to determine their relative competency, reliability, and credibility. This is important because trauma resulting from exposure to an open courtroom or confrontation with a defendant can ultimately lead to inaccurate testimony.
There are several similarities and differences between the competency evaluation for adults and for children. Both adults and children must be deemed as competent in order to testify in court. With regards to children, competency refers to a child's capacity and relative intelligence, their ability to distinguish between truth and lie, and their duty to tell the truth. In order to determine a child's competency, four factors may be considered:
the child's ability to distinguish between true and lies along with the duty to speak the truth,
the child's ability to perceive the occurrence accurately during that time,
the child's ability to independently recollect the occurrence and
the child's ability to verbally translate their memory of the occurrence and to answer simple questions about the event.
These guidelines were determined by the Wheeler v. United States (1895) Supreme Court case, in which a 5-year-old boy was the only witness to a murder. The boy's testimony was ruled admissible on the grounds that he was "sufficiently intelligent", could "distinguish between truth and lies", and understood that he was "morally obligated to tell the truth". Although federal guidelines exist for determining a child's competency, the capacities required for a child to be deemed competent also vary from state to state. For example, some states may require a child to be able to differentiate between truth and lies as well as recall past incidents, whereas other states may only require that the child is able to tell the truth.
Along with competency, a child's reliability and credibility must be tested as well. However, the guidelines for a determining a child's reliability and credibility are not as stringent as determining the child's competency. Although it is important to establish a child's relative reliability and credibility for their testimony, a judge cannot bar a witness from testifying on the grounds that he or she is competent but not credible.
Factors impacting children's reports
Although measures exist to try to prevent poor reliability, credibility, and accuracy of children's reports, research of the child testimony process indicates that there are several difficulties that may be associated with the child testimony process, especially with regards to eyewitness testimony. Topics such as language development, memory skills, susceptibility to suggestion, the truth-lie competency, and credibility and deception detection are being researched to determine their impact on a child's competency, reliability, and credibility.
Language development
Individual differences in language development and comprehension may cause difficulties in determining a child's relative competence within the child testimony process and the trial. Although attorneys are required to use language that is developmentally appropriate with young child witnesses, children may still have difficulty understanding the difficult terminology associated with the courtroom. Even if a child's report is accurate, adults can make inaccurate inferences based on that report. However, some research suggests that limitations in children's communicative competence can be minimized by better and clearer instructions as well as by more thorough preparation before the trial.
Memory skills
The inconsistency of children's memory potentially creates a problem with the reliability of children's reports. A study done by Klemfuss and Ceci (2012) indicates that "general memory skill is inconsistently associated with children's accuracy". Children younger than the age of 6 also tend to remember a higher proportion of details inaccurately in their reports when compared to children of ages 8 and 10. Along with the problem of poor memory development at a young age, there is a problem with remembering information accurately after a certain period of time. According to Beuscher and Roberts (2005), individuals tend to remember a higher ratio of accurate to inaccurate information over time.
Susceptibility to suggestion
Suggestibility is defined by Ceci and Bruck (1995) as "the degree to which the encoding, storage, retrieval, and reporting of events can be influenced by internal and external factors". Although children's autobiographical recall can be highly accurate in many situations, increased exposure to suggestion can potentially increase the inaccuracy of a child's report. While previous research focused on the impact of a single piece of misinformation on the accuracy of children's reports, current research is now focusing on how multiple suggestive techniques affect the accuracy of children's reports. Ceci & Friedman (2000) suggest that a combination of implicit and explicit suggestive techniques such as bribes, threats, and repetitions of questions can have a large impact on young children's reports. These techniques are especially prevalent when interviewer bias is present during an interview with a child. Interviewer bias refers to when an interviewers' own prejudices or opinions about the event influence the manner in which they conduct the interview, and can occur when interviewers mold the interview to maximize disclosures that are consistent with their beliefs by gathering confirmatory evidence and neglecting disconfirmatory evidence.
Several other factors may contribute to a child's susceptibility to suggestion such as internal or external factors. For example, the child's memory report could have been permanently altered which would be an internal factor, or the child could simply be trying to please the report interviewer or another adult which would be an external factor. Another factor that contributes to increased susceptibility to suggestion is seen through the use of peer pressure. Ceci and Bruck (2002) stated that children who were exposed to higher amounts of peer pressure were more prone to change their perception of the event in question even if their initial report was accurate. Although it is difficult to predict whether or not a child will be more susceptible to suggestion, age and language skills are currently the most reliable predictors of children's resistance to suggestion.
Truth-lie competency
Another difficulty encountered with a child's credibility and reliability in the courtroom setting is truth-lie competency. Truth-lie competency refers to a child's relative accuracy in conceiving of the truth, and how a child perceives the truth compared with an adult's perceptions. In order to determine whether a child is providing truthful testimony, the judge must determine whether the child has an accurate conception of the truth from an adult perspective before the child testifies. There are three traditional methods of assessing a child's ability to differentiate between truth and lies: asking the child to (1) define the two concepts, (2) explain the difference between truth and lies, and (3) identify examples of true and false statements.
Although limited, research suggests that young children may have differing definitions of lies than older children and adults. Developmentally inappropriate methods of gauging a child's truth-lie competency could also hinder a child's ability to distinguish between truth and lie. Two specific factors that may also influence a child's definition of a lie include the intention of the speaker and the virulence of what is said. Furthermore, a child's perception of the truth can be influenced by personal gain or reward or by the child's desire to please significant others such as parents, lawyers, or therapists.
Credibility and deception detection
Although the legal system takes care to ensure that children are sufficiently competent before testifying, it is still difficult to determine a child's credibility. Because of the relative difficulty in determining a child's reliability and credibility, few techniques exist to determine a child's ability to recount narratives accurately. One potential method of determining the reliability of a child's report is by the number of "fantastical" or highly implausible or imaginary details within the narrative. According to Bruck, Ceci, & Hembrooke (2002), a higher number of fantastical details are correlated with false narratives. Furthermore, children who describe false narratives tend to creatively utilize incorrect information to construct a false narrative. Research also suggests that the accuracy and credibility of children's reports are closely related when reports are influenced by suggestion.
A study done by Nysse-Carris et al. (2011) had adults rate videos of children's truthfulness and deceitfulness. The study's results indicated that the adults' accuracy was low (only slightly above chance) when rating the children. Furthermore, the study concluded that adults tend to be more biased in labeling children as liars. In general, adults—even adults who are experts in the field—cannot reliably predict the accuracy of a child's report or a child's competence.
False Memories
Dr. Steven Ceci, a child development expert at Cornell University, warns about the dangers of relying on children's eyewitness testimony without examining how the information was obtained. There is a possibility that children can create false memories. Children who attended McMartin preschool outside of Los Angeles in the 1980s accused their caregivers of sexual abuse, and the children's stories turned out to be false. However, Ceci said that the children in this example, as well as others, were not lying. "Their recollections have been affected by the constant, provocative interviews they were subjected to," he stated, adding that the children were repeatedly asked leading questions.
What experts have to say about child testimony
Expert Testimony: Children as Witnesses
How accurate are children as witnesses in court? Lyn Haber, Ph.D., reviews the child interviewing protocol used when children are asked to testify and analyzes the grounds on which the credibility and reliability of their testimony may be questioned. Before the child is questioned, relevant considerations include who questions the child, the interviewer's experience, how many people are asking questions, where the interview takes place, and who is present. During questioning, care should be taken with how questions are asked and how the child answers them; leading, misleading, biased, and repeated questions should be avoided. It is also worth asking who has discussed the crime with the child and how often they have done so. Factors that make a child's memory more accurate include no one pressuring the child to explain what happened, the child telling a trusted adult about the event immediately, the child being questioned by a trained specialist, and the child testifying under emotionally safe and supported conditions. Factors that may make a child's memory inaccurate include the event being frightening, the child reporting the event to an unfamiliar person or authority figure, the child being questioned in a frightening, unfamiliar setting, and the child being asked to retell the story often.
The Question of Credibility when it comes to Child Witnesses
Sometimes when police investigate a crime, they are left with a single eyewitness, which raises the question of whether a child can provide reliable testimony. Ceci explains that the memories of children and adults differ based on their prior knowledge, but that this does not mean children are less reliable than adults when it comes to credibility. An instance of this is the 2002 Samantha Runnion case, in which the 5-year-old girl was kidnapped just outside her Stanton, Calif., home. The only person around to witness her kidnapping was Sarah Ahn, a playmate. Ahn gave the police such a detailed description of the suspect and the car he drove that he was captured days later. Still, as Ceci says, there is a limit to the validity of such statements, especially when the child is 2 or 3 years old. Ceci also notes that an adult's testimony can sometimes be less reliable than a young child's, in the sense that adults can deliberately cheat or lie.
References
Forensic psychology
Developmental psychology | Forensic developmental psychology | Biology | 2,563 |
13,664,796 | https://en.wikipedia.org/wiki/Enation | Enations are scaly leaflike structures, differing from leaves in their lack of vascular tissue. They are created by some leaf diseases and occur normally on Psilotum. Enations are also found on some early plants such as Rhynia, where they are hypothesized to have aided in photosynthesis.
References
Plant morphology
Botanical nomenclature | Enation | Biology | 72 |
32,154,356 | https://en.wikipedia.org/wiki/HTC%20Merge | The HTC ADR6325, originally known as the HTC Lexikon and now known as the HTC Merge, is a device created by HTC. It is identified as a flagship phone carried by US Cellular and Alltel Wireless and came to Verizon Wireless and C Spire Wireless in the summer of 2011. It features an 800 MHz processor, a 3.8" capacitive touchscreen, a 5-megapixel camera with LED flash and a 720p camcorder, as well as a full QWERTY keyboard and Android 2.2 "Froyo" with the HTC Sense 1.5 user interface. It also features 2 GB of onboard storage.
The HTC Merge is a Multi-band device supporting both GSM and CDMA2000 cellular communications.
Merge | HTC Merge | Technology | 163 |
42,079,634 | https://en.wikipedia.org/wiki/Homes%20%26%20Gardens | Homes & Gardens is a British monthly interior design and garden design magazine published by Future plc. The magazine is based in London and began circulation in 1919. It was the UK’s first home interest magazine. The magazine is marketed to a British audience.
Homes & Gardens is a magazine and website that covers interiors, decorating, gardens, and advice from interior designers. The website has an audience of over 7.5 million monthly readers.
History and profile
The magazine was launched in 1919 at George Newnes Ltd. The magazine is based in London and is published every month. It has been edited by Lucy Searle since 2020.
References
External links
Future plc
Design magazines
Monthly magazines published in the United Kingdom
English-language magazines
Magazines published in London
Magazines established in 1919 | Homes & Gardens | Engineering | 153 |
61,519,578 | https://en.wikipedia.org/wiki/AptarGroup | AptarGroup, Inc., also known as Aptar, is a United States–based global manufacturer of consumer dispensing packaging and drug delivery devices. The group has manufacturing operations in 18 countries.
History
The company began as Werner Die & Stamping in Cary, Illinois, in 1946 and later incorporated as AptarGroup in 1992. Aptar originally developed spray valves and pumps for consumer and household products. The company later began producing nasal administration and pulmonary drug delivery devices such as nasal spray systems and metered-dose inhaler valves. Biotech and pharmaceutical companies use Aptar's different Unidose and Bidose devices for the single or two-shot intranasal delivery of different medicines.
In 2016, Aptar announced that it provided the delivery system for Adapt Pharma's Narcan. Narcan is a naloxone hydrochloride nasal spray used as an emergency treatment for opioid overdoses. Aptar's liquid spray drug delivery technology platform works as a ready-to-use, single-shot, unit-dose system for Narcan. It was the first FDA approved nasally administered, ready-to-use medication used to reverse the effects of an opioid overdose. Narcan does not require any assembly, medical training, or needle injection.
In 2016, Aptar entered into an agreement with Becton Dickinson & Company to develop new self-injection devices.
Aptar entered into an agreement in 2016 with Propeller Health Partners to develop a digitally connected medication inhaler. The company made an investment in Propeller Health Partners (now part of Resmed) in 2018.
In July 2019, the FDA-approved Aptar Pharma's Unidose Powder System as the first intranasally-delivered, needle-free rescue treatment for severe hypoglycemia.
In 2020, during the COVID-19 pandemic, Aptar invested in new tools to accelerate its molding equipment and assembly machines for pumps, but it still wasn't enough to keep up with demand.
Acquisitions
In 2012, Aptar acquired Stelmi, a manufacturer of elastomer primary packaging components. In 2016, Aptar acquired Mega Airless, a manufacturer of airless packaging solutions. In 2018, Aptar acquired CSP Technologies, a material science company that manufactures active packaging solutions.
In June 2019, Aptar acquired two companies, Nanopharm and Gateway Analytical. In November 2019, the company acquired Noble International, which specializes in training devices and patient onboarding. In February 2020, Aptar acquired FusionPKG, a makeup packaging company.
In November 2020, the company acquired the digital respiratory health company Cohero Health.
In July 2021, Aptar acquired the digital therapeutics company Voluntis (ENXTPA: ALVTX) and 80% of the equity interests of Weihai Hengyu Medical Products Co., Ltd., a Chinese manufacturer of elastomeric and plastic components used in injectable drug delivery.
Sustainability
Aptar was named to Barron's list of the Top 100 Most Sustainable U.S. Companies in 2019, 2020, 2021, and 2022. At the end of 2020, 85% of the company’s global electricity use came from renewable sources. It was also named by Newsweek as one of America's Most Responsible Companies in 2021, 2022, and 2023 and received an A score for climate change from the Climate Disclosure Project.
In September 2019, the company announced a partnership with Loop, a shopping platform from TerraCycle that delivers products in reusable containers. The company made the Forbes Green Growth 50 List in 2021.
References
1992 establishments in Illinois
Drug delivery devices
Packaging companies
Manufacturing
Pumps
Sustainable communities
Companies listed on the New York Stock Exchange
Packaging companies of the United States
Manufacturing companies established in 1992
Companies listed on the Nasdaq
Companies in the S&P 400 | AptarGroup | Physics,Chemistry,Engineering | 796 |