id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
3,096,407 | https://en.wikipedia.org/wiki/5-Dehydro-m-xylylene | 5-Dehydro-m-xylylene (DMX) is an aromatic organic triradical and the first known organic molecule to violate Hund's Rule.
Its electronic ground state is an open-shell doublet rather than a quartet; that is, the unpaired electrons in the three singly occupied molecular orbitals form a low-spin state in which one electron has its spin opposed to the other two. The net result is that there is only one unopposed spin. Hund's rule would predict that the ground state would have all three radical electrons with the same spin as each other (none opposed), for a greater total spin. Because it has unpaired electrons of both spin states coupled together, the compound exhibits antiferromagnetism. Though similar ground states are observed in molecules containing transition metal atoms, such a ground state is unprecedented in organic molecules.
The 5-dehydro-m-xylylene anion (DMX−) has also been studied extensively. It has a triplet ground state consisting of a phenyl anion and a m-xylylene biradical.
References
External links
Physics web - Radical molecule breaks the rules
Purdue University Department of Chemistry - ‘Rule-breaking’ molecule
Hydrocarbons
Free radicals | 5-Dehydro-m-xylylene | [
"Chemistry",
"Biology"
] | 268 | [
"Hydrocarbons",
"Free radicals",
"Organic compounds",
"Senescence",
"Biomolecules"
] |
3,096,587 | https://en.wikipedia.org/wiki/Flooding%20%28psychology%29 | Flooding, sometimes referred to as in vivo exposure therapy, is a form of behavior therapy and desensitization – or exposure therapy – based on the principles of respondent conditioning. As a psychotherapeutic technique, it is used to treat phobia and anxiety disorders including post-traumatic stress disorder. It works by exposing the patient to their painful memories, with the goal of reintegrating their repressed emotions with their current awareness. Flooding was invented by psychologist Thomas Stampfl in 1967. It is still used in behavior therapy today.
Flooding is a psychotherapeutic method for overcoming phobias. In order to demonstrate the irrationality of the fear, a psychologist would put a person in a situation where they would face their phobia. Under controlled conditions and using psychologically-proven relaxation techniques, the subject attempts to replace their fear with relaxation. The experience can often be traumatic for a person, but may be necessary if the phobia is causing them significant life disturbances. The advantage of flooding is that it is quick and usually effective. There is, however, a possibility that a fear may spontaneously recur. This can be made less likely with systematic desensitization, another form of classical conditioning procedure for the elimination of phobias.
How it works
"Flooding" works on the principles of classical conditioning or respondent conditioning—a form of Pavlov's classical conditioning—where patients change their behaviors to avoid negative stimuli. According to Pavlov, people can learn through associations, so if one has a phobia, it is because one associates the feared stimulus with a negative outcome.
Flooding uses a technique based on Pavlov's classical conditioning that uses exposure. There are different forms of exposure, such as imaginal exposure, virtual reality exposure, and in vivo exposure. While systematic desensitization may use these other types of exposure, flooding uses in vivo exposure, actual exposure to the feared stimulus. A patient is confronted with a situation in which the stimulus that provoked the original trauma is present. The psychologist there usually offers very little assistance or reassurance other than to help the patient to use relaxation techniques in order to calm themselves. Relaxation techniques such as progressive muscle relaxation are common in these kinds of classical conditioning procedures. The theory is that the adrenaline and fear response has a time limit, so a person should eventually have to calm down and realize that their phobia is unwarranted. Flooding can be done through the use of virtual reality and has been shown to be fairly effective in patients with flight phobia.
Psychiatrist Joseph Wolpe (1973) carried out an experiment which demonstrated flooding. He took a girl who was scared of cars, and drove her around for hours. Initially the girl was panicky but she eventually calmed down when she realized that her situation was safe. From then on she associated a sense of ease with cars. Psychologist Aletha Solter used flooding successfully with a 5-month-old infant who showed symptoms of post-traumatic stress following surgery.
Flooding therapy is not for every individual, and the therapist will discuss with the patient the levels of anxiety they are prepared to endure during the session. Exposure may also not suit every therapist, and many therapists appear to shy away from using the technique.
See also
Attachment therapy, a controversial autism treatment intended to induce long-term behavioral compliance in children by combining nonconsensual flooding and sensory-overload techniques with the traumatic bonding relationship also manifested in Stockholm syndrome
Behavior modification
Desensitization (psychology)
Habituation
Immersion therapy
Punishment
Sensitization
Systematic desensitization
References
Anxiety disorder treatment
Behavior therapy
Behaviorism | Flooding (psychology) | [
"Biology"
] | 745 | [
"Behavior",
"Behavior therapy",
"Behaviorism"
] |
3,096,721 | https://en.wikipedia.org/wiki/Bit-oriented%20protocol | A bit-oriented protocol is a communications protocol that sees the transmitted data as an opaque stream of bits with no semantics, or meaning. Control codes are defined in terms of bit sequences instead of characters. Bit oriented protocol can transfer data frames regardless of frame contents. It can also be stated as "bit stuffing".
Synchronous framing in High-Level Data Link Control (HDLC) may work like this:
Each frame begins and ends with a special bit pattern 01111110, called a flag byte.
A bit stuffing technique is used to prevent the receiver from detecting the special flag byte in user data: whenever the sender's data link layer encounters five consecutive 1 bits in the data, it automatically stuffs a 0 bit into the outgoing stream (see the sketch below).
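A minimal illustrative sketch of the sender-side stuffing rule described above (plain Java; the helper and its names are made up, not taken from any particular HDLC implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class BitStuffing {
    // Insert a 0 after every run of five consecutive 1 bits (sender side).
    static List<Integer> stuff(List<Integer> bits) {
        List<Integer> out = new ArrayList<>();
        int ones = 0;
        for (int b : bits) {
            out.add(b);
            ones = (b == 1) ? ones + 1 : 0;
            if (ones == 5) {   // five 1s seen: stuff a 0 so the flag 01111110 can never appear in data
                out.add(0);
                ones = 0;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> data = List.of(0, 1, 1, 1, 1, 1, 1, 0); // payload that looks like the flag pattern
        System.out.println(stuff(data)); // [0, 1, 1, 1, 1, 1, 0, 1, 0] -- receiver removes the stuffed 0
    }
}
```

The receiver applies the inverse rule, dropping the 0 that follows any five consecutive 1 bits in the received data.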
See also
Byte-oriented protocol
References
Linktionary page for bit-oriented protocol
Data transmission
Telecommunication protocols | Bit-oriented protocol | [
"Technology"
] | 174 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
3,096,890 | https://en.wikipedia.org/wiki/Minor%20actinide | A minor actinide is an actinide, other than uranium or plutonium, found in spent nuclear fuel. The minor actinides include neptunium (element 93), americium (element 95), curium (element 96), berkelium (element 97), californium (element 98), einsteinium (element 99), and fermium (element 100). The most important isotopes of these elements in spent nuclear fuel are neptunium-237, americium-241, americium-243, curium-242 through -248, and californium-249 through -252.
Plutonium and the minor actinides will be responsible for the bulk of the radiotoxicity and heat generation of spent nuclear fuel in the long term (300 to 20,000 years in the future).
The plutonium from a power reactor tends to have a greater amount of plutonium-241 than the plutonium generated by the lower-burnup operations designed to create weapons-grade plutonium. Because reactor-grade plutonium contains so much 241Pu, which beta decays (with a half-life of about 14 years) to 241Am, the resulting ingrowth of 241Am makes the plutonium less suitable for making a nuclear weapon. The ingrowth of americium in plutonium is one of the methods for identifying the origin of an unknown sample of plutonium and the time since it was last separated chemically from the americium.
Americium is commonly used in industry as both an alpha particle source and as a low photon-energy gamma radiation source. For example, it is commonly used in smoke detectors. Americium can be formed by neutron capture of 239Pu and 240Pu, forming 241Pu which then beta decays to 241Am. In general, as the energy of the neutrons increases, the ratio of the fission cross section to the neutron capture cross section changes in favour of fission. Hence, if MOX is used in a thermal reactor such as a boiling water reactor (BWR) or pressurized water reactor (PWR) then more americium can be expected to be found in the spent fuel than in that from a fast neutron reactor.
Some of the minor actinides have been found in fallout from bomb tests. See Actinides in the environment for details.
References
Nuclear materials | Minor actinide | [
"Physics"
] | 472 | [
"Materials",
"Nuclear materials",
"Matter"
] |
3,096,911 | https://en.wikipedia.org/wiki/Feist%E2%80%93Benary%20synthesis | The Feist–Benary synthesis is an organic reaction between α-halo ketones and β-dicarbonyl compounds to produce substituted furan compounds. This condensation reaction is catalyzed by amines such as ammonia and pyridine. The first step in the ring synthesis is related to the Knoevenagel condensation. In the second step the enolate displaces an alkyl halogen in a nucleophilic aliphatic substitution.
Modifications
In place of α-haloketones, propargyl sulfonium salts can be used to alkylate the diketone.
Another modification is the enantioselective interrupted Feist–Benary reaction with a chiral auxiliary based on the cinchona alkaloid quinine, carried out in the presence of proton sponge to give the hydroxydihydrofuran. This type of alkaloid is also used in asymmetric synthesis in the AD-mix. The alkaloid is protonated throughout the reaction and transfers its chirality by interaction of the acidic ammonium hydrogen with the dicarbonyl group of ethyl bromopyruvate in a 5-membered transition state.
Historic references
References
Oxygen heterocycle forming reactions
Heterocycle forming reactions
Name reactions | Feist–Benary synthesis | [
"Chemistry"
] | 265 | [
"Name reactions",
"Ring forming reactions",
"Heterocycle forming reactions",
"Organic reactions"
] |
3,097,300 | https://en.wikipedia.org/wiki/Reissert%20reaction | The Reissert reaction is a series of chemical reactions that transforms quinoline to quinaldic acid. Quinolines will react with acid chlorides and potassium cyanide to give 1-acyl-2-cyano-1,2-dihydroquinolines, also known as Reissert compounds. Hydrolysis gives the desired quinaldic acid.
The Reissert reaction is also successful with isoquinolines and most pyridines.
Several reviews have been published.
References
Weinstock, J.; Boekelheide, V. Organic Syntheses, Coll. Vol. 4, p. 641 (1963); Vol. 38, p. 58 (1958). (Article)
Uff, B. C.; Kershaw, J. R.; Neumeyer, J. L. Organic Syntheses, Coll. Vol. 6, p. 115 (1988); Vol. 56, p. 19 (1977). (Article)
Mosettig, E. Org. React. 1954, 8, 220. (Review)
Further reading
Addition reactions
Name reactions | Reissert reaction | [
"Chemistry"
] | 242 | [
"Name reactions"
] |
3,097,635 | https://en.wikipedia.org/wiki/Amphotropism | Amphotropism or amphotropic' indicates that a pathogen or parasite like a virus or a bacterium has a wide host range and can infect more than one species or cell culture line. The range is often of a mammalian spread. Amphotropism can be most effectively described in comparison to ecotropic and pantropic pathogens.
Distinctions and Functionality
Amphotropic pathogens are able to affect a relatively wide range of species by having their envelope glycoproteins attack receptors that, due to evolutionary conservation, are structurally similar across species. By exploiting these similarities they are able to extend their range beyond typical ecotropic pathogens, which are only able to identify and attack a specific receptor. However, their range is not as wide as pantropic pathogens, which aren’t reliant on structural similarities to bind.
Examples of Amphotropic Pathogens
Amphotropic Murine Leukemia Virus
Coxiella burnetii
Chlamydia
See also
Tropism, a list of tropisms
Ecotropism, indicating a narrow host range
References
External links
Ecology terminology | Amphotropism | [
"Biology"
] | 228 | [
"Ecology terminology"
] |
3,098,397 | https://en.wikipedia.org/wiki/Molecular%20laser%20isotope%20separation | Molecular laser isotope separation (MLIS) is a method of isotope separation, where specially tuned lasers are used to separate isotopes of uranium using selective ionization of hyperfine transitions of uranium hexafluoride molecules. It is similar to AVLIS. Its main advantage over AVLIS is low energy consumption and use of uranium hexafluoride instead of vaporized uranium. MLIS was conceived in 1971 at the Los Alamos National Laboratory.
MLIS operates in cascade setup, like the gaseous diffusion process. Instead of vaporized uranium as in AVLIS the working medium of the MLIS is uranium hexafluoride which requires a much lower temperature to vaporize. The UF6 gas is mixed with a suitable carrier gas (a noble gas including some hydrogen) which allows the molecules to remain in the gaseous phase after being cooled by expansion through a supersonic de Laval nozzle. A scavenger gas (e.g. methane) is also included in the mixture to bind with the fluorine atoms after they are dissociated from the UF6 and inhibit their recombination with the enriched UF5 product.
In the first stage, the expanded and cooled stream of UF6 is irradiated with an infrared laser operating at the wavelength of 16 μm. The mix is then irradiated with another laser, either infrared or ultraviolet, whose photons are selectively absorbed by the excited 235UF6, causing its photolysis to 235UF5 and fluorine. The resultant enriched UF5 forms a solid which is then separated from the gas by filtration or a cyclone separator. The precipitated UF5 is relatively enriched with 235UF5 and after conversion back to UF6 it is fed to the next stage of the cascade to be further enriched.
The laser for the excitation is usually a carbon dioxide laser with output wavelength shifted from 10.6 μm to 16 μm; the photolysis laser may be an excimer laser operating at 308 nm; however, infrared lasers are mostly used in existing implementations.
The process is complex: many mixed UFx compounds are formed which contaminate the product and are difficult to remove. The United States, France, United Kingdom, Germany and South Africa have reported the termination of their MLIS programs; however, Japan still has a small-scale program in operation.
The Commonwealth Scientific and Industrial Research Organisation in Australia has developed the SILEX pulsed laser separation process. GE, Cameco and Hitachi are currently involved in developing it for commercial use.
See also
Atomic vapor laser isotope separation
Australian Atomic Energy Commission
Calutron
Nuclear fuel cycle
Nuclear power
References
External links
Laser isotope separation uranium enrichment
Reed J. Jenson, O’Dean P. Judd, and J. Allan Sullivan Separating Isotopes with Lasers Los Alamos Science vol.4, 1982.
Article in New York Times (August 20, 2011) regarding General Electric's plans to build a commercial laser enrichment facility in Wilmington, North Carolina, USA.
Silex information
Chemical processes
Isotope separation
Uranium | Molecular laser isotope separation | [
"Chemistry"
] | 630 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
3,098,704 | https://en.wikipedia.org/wiki/Chemical%20oxygen%20iodine%20laser | A chemical oxygen iodine laser (COIL) is a near–infrared chemical laser. As the beam is infrared, it cannot be seen with the naked eye. It is capable of output power scaling up to megawatts in continuous mode. Its output wavelength is 1315 nm, a transition wavelength of atomic iodine.
Principles of operation
The laser is fed with gaseous chlorine, molecular iodine, and an aqueous mixture of hydrogen peroxide and potassium hydroxide. The aqueous peroxide solution undergoes chemical reaction with chlorine, producing heat, potassium chloride, and oxygen in an excited state, singlet delta oxygen. Spontaneous transition of excited oxygen to the triplet sigma ground state is forbidden, giving the excited oxygen a spontaneous lifetime of about 45 minutes. This allows the singlet oxygen to transfer its energy to the iodine atoms present in the gas stream; the atomic transition 2P3/2 to 2P1/2 in atomic iodine is nearly resonant with the singlet oxygen, so the energy transfer during the collision of the particles is rapid. The excited iodine atoms in the 2P1/2 state then undergo stimulated emission and lase at 1.315 μm in the optical resonator region of the laser (note that 2P1/2 is the upper laser level and 2P3/2 the lower).
The laser operates at relatively low gas pressures, but the gas flow has to be nearing the speed of sound at the reaction time; even supersonic flow designs are described. The low pressure and fast flow make removal of heat from the lasing medium easy, in comparison with high-power solid-state lasers. The reaction products are potassium chloride, water, and oxygen. Traces of chlorine and iodine are removed from the exhaust gases by a halogen scrubber.
History and applications
COIL was developed by the US Air Force in 1977, for military purposes. However, its properties make it useful for industrial processing as well; the beam is focusable and can be transferred by an optical fiber, as its wavelength is not absorbed much by fused silica but is well absorbed by metals, making it suitable for laser cutting and drilling. Rapid cutting of stainless steel and hastelloy with a fiber-coupled COIL has been demonstrated. In 1996, TRW Incorporated managed to get a continuous beam of hundreds of kilowatts of power that lasted for several seconds.
RADICL, Research Assessment, Device Improvement Chemical Laser, is a 20 kW COIL laser tested by the United States Air Force in around 1998.
COIL is a component of the United States' military airborne laser and advanced tactical laser programs. On February 11, 2010, this weapon was successfully deployed to shoot down a missile off the central California coast in a test conducted with a laser aboard a Boeing 747 that took off from the Point Mugu Naval Air Warfare Center (for more details, see Boeing YAL-1).
Other iodine based lasers
All gas-phase iodine laser (AGIL) is a similar construction using all-gas reagents, more suitable for aerospace applications.
The ElectricOIL, or EOIL, offers the same iodine lasing species in an alternate gas-electric hybrid variant.
See also
Peresvet (laser weapon)
List of laser articles
References
External links
Popular Science: The Flying Laser Cannon
Patent for the 'High energy airborne chemical oxygen iodine laser (COIL)'
'Laser jumbo' testing moves ahead
Chemical lasers
American inventions | Chemical oxygen iodine laser | [
"Chemistry"
] | 706 | [
"Chemical reaction engineering",
"Chemical lasers"
] |
3,098,816 | https://en.wikipedia.org/wiki/Runtime%20verification | Runtime verification is a computing system analysis and execution approach based on extracting information from a running system and using it to detect and possibly react to observed behaviors satisfying or violating certain properties. Some very particular properties, such as datarace and deadlock freedom, are typically desired to be satisfied by all systems and may be best implemented algorithmically. Other properties can be more conveniently captured as formal specifications. Runtime verification specifications are typically expressed in trace predicate formalisms, such as finite-state machines, regular expressions, context-free patterns, linear temporal logics, etc., or extensions of these. This allows for a less ad-hoc approach than normal testing. However, any mechanism for monitoring an executing system is considered runtime verification, including verifying against test oracles and reference implementations . When formal requirements specifications are provided, monitors are synthesized from them and infused within the system by means of instrumentation. Runtime verification can be used for many purposes, such as security or safety policy monitoring, debugging, testing, verification, validation, profiling, fault protection, behavior modification (e.g., recovery), etc. Runtime verification avoids the complexity of traditional formal verification techniques, such as model checking and theorem proving, by analyzing only one or a few execution traces and by working directly with the actual system, thus scaling up relatively well and giving more confidence in the results of the analysis (because it avoids the tedious and error-prone step of formally modelling the system), at the expense of less coverage. Moreover, through its reflective capabilities runtime verification can be made an integral part of the target system, monitoring and guiding its execution during deployment.
History and context
Checking formally or informally specified properties against executing systems or programs is an old topic (notable examples are dynamic typing in software, or fail-safe devices or watchdog timers in hardware), whose precise roots are hard to identify. The terminology runtime verification was formally introduced as the name of a 2001 workshop aimed at addressing problems at the boundary between formal verification and testing. For large code bases, manually writing test cases turns out to be very time consuming. In addition, not all errors can be detected during development. Early contributions to automated verification were made at the NASA Ames Research Center by Klaus Havelund and Grigore Rosu to achieve high safety standards in spacecraft, rovers and avionics technology. They proposed a tool to verify specifications in temporal logic and to detect race conditions and deadlocks in Java programs by analyzing single execution paths.
Currently, runtime verification techniques are often presented with various alternative names, such as runtime monitoring, runtime checking, runtime reflection, runtime analysis, dynamic analysis, runtime/dynamic symbolic analysis, trace analysis, log file analysis, etc., all referring to instances of the same high-level concept applied either to different areas or by scholars from different communities. Runtime verification is intimately related to other well-established areas, such as testing (particularly model-based testing) when used before deployment and fault-tolerant systems when used during deployment.
Within the broad area of runtime verification, one can distinguish several categories, such as:
"specification-less" monitoring that targets a fixed set of mostly concurrency-related properties such as atomicity. The pioneering work in this area is by Savage et al. with the Eraser algorithm
monitoring with respect to temporal logic specifications; early contributions in this direction have been made by Lee, Kannan, and their collaborators, and by Havelund and Rosu.
Basic approaches
The broad field of runtime verification methods can be classified by three dimensions:
The system can be monitored during the execution itself (online) or after the execution e.g. in form of log analysis (offline).
The verifying code is integrated into the system (as done in Aspect-oriented Programming) or is provided as an external entity.
The monitor can report violation or validation of the desired specification.
Nevertheless, the basic process in runtime verification remains similar:
A monitor is created from some formal specification. This process usually can be done automatically if there are equivalent automata for the formulas of the formal language the property is specified in. To transform a regular expression, a finite-state machine can be used; a property in linear temporal logic can be transformed into a Büchi automaton (see also Linear temporal logic to Büchi automaton).
The system is instrumented to send events concerning its execution state to the monitor.
The system is executed and gets verified by the monitor.
The monitor verifies the received event trace and produces a verdict on whether the specification is satisfied. Additionally, the monitor sends feedback to the system to possibly correct false behaviour. When using offline monitoring the system of course cannot receive any feedback, as the verification is done at a later point in time. (A small illustrative sketch of these steps follows below.)
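To make the basic process concrete, here is a minimal sketch (not tied to any particular runtime verification tool; the event names and the property are made up): a property given as a regular expression over event names stands in for the synthesized monitor, and a recorded event trace from an instrumented run is checked offline against it.

```java
import java.util.List;
import java.util.regex.Pattern;

public class OfflineMonitor {
    public static void main(String[] args) {
        // Step 1: the specification -- every "open" must be followed by a "close"
        // before the next "open"; "write" events may occur in between.
        Pattern spec = Pattern.compile("(open (write )*close )*");

        // Steps 2-3: the instrumented system records its events into a trace (offline monitoring).
        List<String> trace = List.of("open", "write", "write", "close", "open", "write");

        // Step 4: the monitor checks the trace and produces a verdict.
        String flat = String.join(" ", trace) + " ";
        boolean ok = spec.matcher(flat).matches();
        System.out.println(ok ? "specification satisfied" : "violation detected"); // violation: last open never closed
    }
}
```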
Examples
The examples below discuss some simple properties that have been considered, possibly with small variations, by several runtime verification groups by the time of this writing (April 2011). To make them more interesting, each property below uses a different specification formalism and all of them are parametric. Parametric properties are properties about traces formed with parametric events, which are events that bind data to parameters. Here a parametric property has the form ΛX.P(X), where P(X) is a specification in some appropriate formalism referring to generic (uninstantiated) parametric events. The intuition for such parametric properties is that the property expressed by P(X) must hold for all parameter instances encountered (through parametric events) in the observed trace. None of the following examples are specific to any particular runtime verification system, though support for parameters is obviously needed. In the following examples Java syntax is assumed, thus "==" is logical equality, while "=" is assignment. Some methods (e.g., update() in the UnsafeEnumExample) are dummy methods, which are not part of the Java API, that are used for clarity.
HasNext
The Java Iterator interface requires that the hasNext() method be called and return true before the next() method is called. If this does not occur, it is very possible that a user will iterate "off the end of" a Collection. The figure to the right shows a finite-state machine that defines a possible monitor for checking and enforcing this property with runtime verification. From the unknown state, it is always an error to call the next() method because such an operation could be unsafe. If hasNext() is called and returns true, it is safe to call next(), so the monitor enters the more state. If, however, the hasNext() method returns false, there are no more elements, and the monitor enters the none state. In the more and none states, calling the hasNext() method provides no new information. It is safe to call the next() method from the more state, but it becomes unknown if more elements exist, so the monitor reenters the initial unknown state. Finally, calling the next() method from the none state results in entering the error state. What follows is a representation of this property using parametric past time linear temporal logic:

∀ Iterator i : next(i) ⇒ ⊙ hasNextTrue(i)    (where ⊙ is the past-time "previously" operator)
This formula says that any call to the next() method must be immediately preceded by a call to hasNext() method that returns true. The property here is parametric in the Iterator i. Conceptually, this means that there will be one copy of the monitor for each possible Iterator in a test program, although runtime verification systems need not implement their parametric monitors this way. The monitor for this property would be set to trigger a handler when the formula is violated (equivalently when the finite-state machine enters the error state), which will occur when either next() is called without first calling hasNext(), or when hasNext() is called before next(), but returned false.
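A possible sketch of such a monitor in plain Java, keeping one finite-state-machine instance per Iterator (illustrative only; real runtime verification tools generate, index, and optimize such monitors automatically):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class HasNextMonitor {
    enum State { UNKNOWN, MORE, NONE, ERROR }

    // One monitor state per Iterator instance (parametric monitoring, kept in a map).
    private final Map<Iterator<?>, State> states = new HashMap<>();

    // Event: hasNext() was called on iterator i and returned result.
    public void onHasNext(Iterator<?> i, boolean result) {
        states.put(i, result ? State.MORE : State.NONE);
    }

    // Event: next() was called on iterator i.
    public void onNext(Iterator<?> i) {
        State s = states.getOrDefault(i, State.UNKNOWN);
        if (s == State.MORE) {
            states.put(i, State.UNKNOWN);  // safe call; it is now unknown whether more elements remain
        } else {
            states.put(i, State.ERROR);    // next() without a preceding hasNext() that returned true
            System.err.println("HasNext property violated for " + i);
        }
    }
}
```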
UnsafeEnum
The Vector class in Java has two means for iterating over its elements. One may use the Iterator interface, as seen in the previous example, or one may use the Enumeration interface. Besides the addition of a remove method for the Iterator interface, the main difference is that Iterator is "fail fast" while Enumeration is not. What this means is that if one modifies the Vector (other than by using the Iterator remove method) when one is iterating over the Vector using an Iterator, a ConcurrentModificationException is thrown. However, when using an Enumeration this is not the case, as mentioned. This can result in non-deterministic results from a program because the Vector is left in an inconsistent state from the perspective of the Enumeration. For legacy programs that still use the Enumeration interface, one may wish to enforce that Enumerations are not used when their underlying Vector is modified. The following parametric regular pattern can be used to enforce this behavior:
∀ Vector v, Enumeration e: (e = v.elements()) (e.nextElement())* v.update() e.nextElement()
This pattern is parametric in both the Enumeration and the Vector. Intuitively (and, as above, runtime verification systems need not implement their parametric monitors this way), one may think of the parametric monitor for this property as creating and keeping track of a non-parametric monitor instance for each possible pair of Vector and Enumeration. Some events may concern several monitors at the same time, such as v.update(), so the runtime verification system must (again conceptually) dispatch them to all interested monitors. Here the property is specified so that it states the bad behaviors of the program. This property, then, must be monitored for the match of the pattern. The figure to the right shows Java code that matches this pattern, thus violating the property. The Vector, v, is updated after the Enumeration, e, is created, and e is then used.
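The figure referred to above is not reproduced in this text, but a small illustrative Java program that matches the pattern (and hence violates the property) could look like the following sketch; the update event is realized here by a concrete add() call:

```java
import java.util.Enumeration;
import java.util.Vector;

public class UnsafeEnumExample {
    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        v.add("a");
        v.add("b");

        Enumeration<String> e = v.elements(); // event: e = v.elements()
        System.out.println(e.nextElement());  // event: e.nextElement()

        v.add("c");                           // event: v.update() -- Vector modified while e is live

        // Matches the pattern: the Enumeration is used again after the update.
        // No exception is thrown, but the Enumeration now sees an inconsistent view of v.
        System.out.println(e.nextElement());  // event: e.nextElement()
    }
}
```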
SafeLock
The previous two examples show finite state properties, but properties used in runtime verification may be much more complex. The SafeLock property enforces the policy that the number of acquires and releases of a (reentrant) Lock class are matched within a given method call. This, of course, disallows release of Locks in methods other than the ones that acquire them, but this is very possibly a desirable goal for the tested system to achieve. Below is a specification of this property using a parametric context-free pattern:
∀ Thread t, Lock l: S→ε | S begin(t) S end(t) | S l.acquire(t) S l.release(t)
The pattern specifies balanced sequences of nested begin/end and acquire/release pairs for each Thread and Lock (ε is the empty sequence). Here begin and end refer to the begin and end of every method in the program (except the calls to acquire and release themselves). They are parametric in the Thread because it is necessary to associate the beginning and end of methods if and only if they belong to the same Thread. The acquire and release events are also parametric in the Thread for the same reason. They are, additionally, parametric in Lock because we do not wish to associate the releases of one Lock with the acquires of another. In the extreme, it is possible that there will be an instance of the property, i.e., a copy of the context-free parsing mechanism, for each possible combination of Thread with Lock; this happens, again, intuitively, because runtime verification systems may implement the same functionality differently. For example, if a system has Threads t1, t2, and t3 with Locks l1 and l2, then it is possible to have to maintain property instances for the pairs <t1,l1>, <t1,l2>, <t2,l1>, <t2,l2>, <t3,l1>, and <t3,l2>. This property should be monitored for failures to match the pattern because the pattern specified correct behavior. The figure to the right shows a trace that produces two violations of this property. The steps down in the figure represent the beginning of a method, while the steps up are the end. The grey arrows in the figure show the matching between given acquires and releases of the same Lock. For simplicity, the trace shows only one Thread and one Lock.
Research challenges and applications
Most of the runtime verification research addresses one or more of the topics listed below.
Reducing runtime overhead
Observing an executing system typically incurs some runtime overhead (hardware monitors may make an exception). It is important to reduce the overhead of runtime verification tools as much as possible, particularly when the generated monitors are deployed with the system. Runtime overhead reducing techniques include:
Improved instrumentation. Extracting events from the executing system and sending them to monitors can generate a large runtime overhead if done naively. Good system instrumentation is critical for any runtime verification tool, unless the tool explicitly targets existing execution logs. There are many instrumentation approaches in current use, each with its advantages and disadvantages, ranging from custom or manual instrumentation, to specialized libraries, to compilation into aspect-oriented languages, to augmenting the virtual machine, to building upon hardware support.
Combination with static analysis. A common combination of static and dynamic analyses, particularly encountered in compilers, is to monitor all the requirements that cannot be discharged statically. A dual and ultimately equivalent approach tends to become the norm in runtime verification, namely to use static analysis to reduce the amount of otherwise exhaustive monitoring. Static analysis can be performed both on the property to monitor and on the system to be monitored. Static analysis of the property to monitor can reveal that certain events are unnecessary to monitor, that the creation of certain monitors can be delayed, and that certain existing monitors will never trigger and thus can be garbage collected. Static analysis of the system to monitor can detect code that can never influence the monitors. For example, when monitoring the HasNext property above, one need not instrument portions of code where each call i.next() is immediately preceded on any path by a call i.hasNext() that returns true (visible on the control-flow graph).
Efficient monitor generation and management. When monitoring parametric properties like the ones in the examples above, the monitoring system needs to keep track of the status of the monitored property with respect to each parameter instance. The number of such instances is theoretically unbounded and tends to be enormous in practice. An important research challenge is how to efficiently dispatch observed events to precisely those instances that need them. A related challenge is how to keep the number of such instances small (so that dispatching is faster), or in other words, how to avoid creating unnecessary instances for as long as possible and, dually, how to remove already created instances as soon as they become unnecessary. Finally, parametric monitoring algorithms typically generalize similar algorithms for generating non-parametric monitors. Thus, the quality of the generated non-parametric monitors dictates the quality of the resulting parametric monitors. However, unlike in other verification approaches (e.g., model checking), the number of states or the size of the generated monitor is less important in runtime verification; in fact, some monitors can have infinitely many states, such as the one for the SafeLock property above, although at any point in time only a finite number of states may have occurred. What is important is how efficiently the monitor transits from a state to its next state when it receives an event from the executing system.
Specifying properties
One of the major practical impediments of all formal approaches is that their users are reluctant to, or don't know and don't want to learn how to read or write specifications. In some cases the specifications are implicit, such as those for deadlocks and data-races, but in most cases they need to be produced. An additional inconvenience, particularly in the context of runtime verification, is that many existing specification languages are not expressive enough to capture the intended properties.
Better formalisms. A significant amount of work in the runtime verification community has been put into designing specification formalisms that fit the desired application domains for runtime verification better than the conventional specification formalisms. Some of these consist of slight or no syntactic changes to the conventional formalisms, but only of changes to their semantics (e.g., finite trace versus infinite trace semantics, etc.) and to their implementation (optimized finite-state machines instead of Büchi automata, etc.). Others extend existing formalisms with features that are amenable for runtime verification but may not easily be for other verification approaches, such as adding parameters, as seen in the examples above. Finally, there are specification formalisms that have been designed specifically for runtime verification, attempting to achieve their best for this domain and caring little about other application domains. Designing universally better or domain-specifically better specification formalisms for runtime verification is and will continue to be one of its major research challenges.
Quantitative properties. Compared to other verification approaches, runtime verification is able to operate on concrete values of system state variables, which makes it possible to collect statistical information about the program execution and use this information to assess complex quantitative properties. More expressive property languages that will allow us to fully utilize this capability are needed.
Better interfaces. Reading and writing property specifications is not easy for non-experts. Even experts often stare for minutes at relatively small temporal logic formulae (particularly when they have nested "until" operators). An important research area is to develop powerful user interfaces for various specification formalisms that would allow users to more easily understand, write and maybe even visualize properties.
Mining specifications. No matter what tool support is available to help users produce specifications, they will almost always be more pleased to have to write no specifications at all, particularly when they are trivial. Fortunately, there are plenty of programs out there making supposedly correct use of the actions/events that one wants to have properties about. If that is the case, then it is conceivable that one would like to make use of those correct programs by automatically learning from them the desired properties. Even if the overall quality of the automatically mined specifications is expected to be lower than that of manually produced specifications, they can serve as a start point for the latter or as the basis for automatic runtime verification tools aimed specifically at finding bugs (where a poor specification turns into false positives or negatives, often acceptable during testing).
Execution models and predictive analysis
The capability of a runtime verifier to detect errors strictly depends on its capability to analyze execution traces. When the monitors are deployed with the system, instrumentation is typically minimal and the execution traces are as simple as possible to keep the runtime overhead low. When runtime verification is used for testing, one can afford more comprehensive instrumentations that augment events with important system information that can be used by the monitors to construct and therefore analyze more refined models of the executing system. For example, augmenting events with Vector clock information and with data and control flow information allows the monitors to construct a causal model of the running system in which the observed execution was only one possible instance. Any other permutation of events that is consistent with the model is a feasible execution of the system, which could happen under a different thread interleaving. Detecting property violations in such inferred executions (by monitoring them) makes the monitor predict errors that did not happen in the observed execution, but which can happen in another execution of the same system. An important research challenge is to extract models from execution traces that comprise as many other execution traces as possible.
Behavior modification
Unlike testing or exhaustive verification, runtime verification holds the promise to allow the system to recover from detected violations, through reconfiguration, micro-resets, or through finer intervention mechanisms sometimes referred to as tuning or steering. Implementation of these techniques within the rigorous framework of runtime verification gives rise to additional challenges.
Specification of actions. One needs to specify the modification to be performed in an abstract enough fashion that does not require the user to know irrelevant implementation details. In addition, when such a modification can take place needs to be specified in order to maintain the integrity of the system.
Reasoning about intervention effects. It is important to know that an intervention improves the situation, or at least does not make the situation worse.
Action interfaces. Similar to the instrumentation for monitoring, we need to enable the system to receive action invocations. Invocation mechanisms are by necessity going to be dependent on the implementation details of the system. However, at the specification level, we need to provide the user with a declarative way of providing feedback to the system by specifying what actions should be applied when under what conditions.
Related work
Aspect-oriented programming
Researchers in Runtime Verification recognized the potential for using Aspect-oriented Programming as a technique for defining program instrumentation in a modular way. Aspect-oriented programming (AOP) generally promotes the modularization of crosscutting concerns. Runtime Verification naturally is one such concern and can hence benefit from certain properties of AOP. Aspect-oriented monitor definitions are largely declarative, and hence tend to be simpler to reason about than instrumentation expressed through a program transformation written in an imperative programming language. Further, static analyses can reason about monitoring aspects more easily than about other forms of program instrumentation, as all instrumentation is contained within a single aspect. Many current runtime verification tools are hence built in the form of specification compilers, that take an expressive high-level specification as input and produce as output code written in some Aspect-oriented programming language (such as AspectJ).
Combination with formal verification
Runtime verification, if used in combination with provably correct recovery code, can provide an invaluable infrastructure for program verification, which can significantly lower the latter's complexity. For example, formally verifying the heap-sort algorithm is very challenging. One less challenging technique to verify it is to monitor its output to be sorted (a linear complexity monitor) and, if not sorted, then sort it using some easily verifiable procedure, say insertion sort. The resulting sorting program is now more easily verifiable: the only thing required from heap-sort is that it does not destroy the original elements regarded as a multiset, which is much easier to prove. Looking at it from the other direction, one can use formal verification to reduce the overhead of runtime verification, as already mentioned above for static analysis instead of formal verification. Indeed, one can start with a fully runtime verified, but probably slow program. Then one can use formal verification (or static analysis) to discharge monitors, the same way a compiler uses static analysis to discharge runtime checks of type correctness or memory safety.
Increasing coverage
Compared to the more traditional verification approaches, an immediate disadvantage of runtime verification is its reduced coverage. This is not problematic when the runtime monitors are deployed with the system (together with appropriate recovery code to be executed when the property is violated), but it may limit the effectiveness of runtime verification when used to find errors in systems. Techniques to increase the coverage of runtime verification for error detection purposes include:
Input generation. It is well known that generating a good set of inputs (program input variable values, system call values, thread schedules, etc.) can enormously increase the effectiveness of testing. That holds true for runtime verification used for error detection, too, but in addition to using the program code to drive the input generation process, in runtime verification one can also use the property specifications, when available, and can also use monitoring techniques to induce desired behaviors. This use of runtime verification makes it closely related to model-based testing, although the runtime verification specifications are typically general purpose, not necessarily crafted for testing reasons. Consider, for example, that one wants to test the general-purpose UnsafeEnum property above. Instead of just generating the above-mentioned monitor to passively observe the system execution, one can generate a smarter monitor that freezes the thread attempting to generate the second e.nextElement() event (right before it generates it), letting the other threads execute in a hope that one of them may generate a v.update() event, in which case an error has been found.
Dynamic symbolic execution. In symbolic execution programs are executed and monitored symbolically, that is, without concrete inputs. One symbolic execution of the system may cover a large set of concrete inputs. Off-the-shelf constraint solving or satisfiability checking techniques are often used to drive symbolic executions or to systematically explore their space. When the underlying satisfiability checkers cannot handle a choice point, then a concrete input can be generated to pass that point; this combination of concrete and symbolic execution is also referred to as concolic execution.
See also
Dynamic program analysis
Profiling (computer programming)
Runtime error detection
Runtime application self-protection (RASP)
References
Formal methods
Logic in computer science | Runtime verification | [
"Mathematics",
"Engineering"
] | 5,125 | [
"Software engineering",
"Mathematical logic",
"Logic in computer science",
"Formal methods"
] |
3,099,367 | https://en.wikipedia.org/wiki/Diversity%20index | A diversity index is a method of measuring how many different types (e.g. species) there are in a dataset (e.g. a community). Some more sophisticated indices also account for the phylogenetic relatedness among the types. Diversity indices are statistical representations of different aspects of biodiversity (e.g. richness, evenness, and dominance), which are useful simplifications for comparing different communities or sites.
Effective number of species or Hill numbers
When diversity indices are used in ecology, the types of interest are usually species, but they can also be other categories, such as genera, families, functional types, or haplotypes. The entities of interest are usually individual organisms (e.g. plants or animals), and the measure of abundance can be, for example, number of individuals, biomass or coverage. In demography, the entities of interest can be people, and the types of interest various demographic groups. In information science, the entities can be characters and the types of the different letters of the alphabet. The most commonly used diversity indices are simple transformations of the effective number of types (also known as 'true diversity'), but each diversity index can also be interpreted in its own right as a measure corresponding to some real phenomenon (but a different one for each diversity index).
Many indices only account for categorical diversity between subjects or entities. Such indices, however do not account for the total variation (diversity) that can be held between subjects or entities which occurs only when both categorical and qualitative diversity are calculated.
True diversity, or the effective number of types, refers to the number of equally abundant types needed for the average proportional abundance of the types to equal that observed in the dataset of interest (where all types may not be equally abundant). The true diversity in a dataset is calculated by first taking the weighted generalized mean M_(q−1) of the proportional abundances of the types in the dataset, and then taking the reciprocal of this. The equation is:

^qD = 1 / M_(q−1) = 1 / ( Σ_{i=1..R} p_i · p_i^(q−1) )^(1/(q−1))

The denominator M_(q−1) equals the average proportional abundance of the types in the dataset as calculated with the weighted generalized mean with exponent q − 1. In the equation, R is richness (the total number of types in the dataset), and the proportional abundance of the i-th type is p_i. The proportional abundances themselves are used as the nominal weights. The numbers ^qD are called Hill numbers of order q or effective number of species.

When q = 1, the above equation is undefined. However, the mathematical limit as q approaches 1 is well defined and the corresponding diversity is calculated with the following equation:

^1D = exp( − Σ_{i=1..R} p_i ln p_i )

which is the exponential of the Shannon entropy calculated with natural logarithms (see below). In other domains, this statistic is also known as the perplexity.

The general equation of diversity is often written in the form

^qD = ( Σ_{i=1..R} p_i^q )^(1/(1−q))

and the term inside the parentheses is called the basic sum. Some popular diversity indices correspond to the basic sum as calculated with different values of q.
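As a minimal illustration of these formulas (plain Java, illustrative abundances only, not from any ecology library), the following sketch computes ^qD from proportional abundances, using the exponential of the Shannon entropy for the q = 1 limit:

```java
public class HillNumbers {
    // Effective number of species ^qD from proportional abundances p (assumed to sum to 1).
    static double hillNumber(double[] p, double q) {
        if (Math.abs(q - 1.0) < 1e-9) {
            double h = 0.0;                     // Shannon entropy with natural logarithms
            for (double pi : p) if (pi > 0) h -= pi * Math.log(pi);
            return Math.exp(h);                 // ^1D = exp(H')
        }
        double basicSum = 0.0;                  // sum of pi^q
        for (double pi : p) if (pi > 0) basicSum += Math.pow(pi, q);
        return Math.pow(basicSum, 1.0 / (1.0 - q));
    }

    public static void main(String[] args) {
        double[] p = {0.5, 0.3, 0.1, 0.1};
        System.out.println(hillNumber(p, 0)); // richness R = 4
        System.out.println(hillNumber(p, 1)); // exp(Shannon entropy) ≈ 3.22
        System.out.println(hillNumber(p, 2)); // inverse Simpson index ≈ 2.78
    }
}
```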
Sensitivity of the diversity value to rare vs. abundant species
The value of q is often referred to as the order of the diversity. It defines the sensitivity of the true diversity to rare vs. abundant species by modifying how the weighted mean of the species' proportional abundances is calculated. With some values of the parameter q, the value of the generalized mean M_(q−1) assumes familiar kinds of weighted means as special cases. In particular,

q = 0 corresponds to the weighted harmonic mean,
q = 1 to the weighted geometric mean, and
q = 2 to the weighted arithmetic mean.

As q approaches infinity, the weighted generalized mean with exponent q − 1 approaches the maximum p_i value, which is the proportional abundance of the most abundant species in the dataset.

Generally, increasing the value of q increases the effective weight given to the most abundant species. This leads to obtaining a larger M_(q−1) value and a smaller true diversity (^qD) value with increasing q.

When q = 1, the weighted geometric mean of the p_i values is used, and each species is exactly weighted by its proportional abundance (in the weighted geometric mean, the weights are the exponents). When q > 1, the weight given to abundant species is exaggerated, and when q < 1, the weight given to rare species is. At q = 0, the species weights exactly cancel out the species proportional abundances, such that the weighted mean of the p_i values equals 1/R even when all species are not equally abundant. At q = 0, the effective number of species, ^0D, hence equals the actual number of species R. In the context of diversity, q is generally limited to non-negative values. This is because negative values of q would give rare species so much more weight than abundant ones that ^qD would exceed R.
Richness
Richness simply quantifies how many different types the dataset of interest contains. For example, species richness (usually noted R) is simply the number of species, e.g. at a particular site. Richness is a simple measure, so it has been a popular diversity index in ecology, where abundance data are often not available. If true diversity is calculated with q = 0, the effective number of types (^0D) equals the actual number of types, which is identical to richness (R).
Shannon index
The Shannon index has been a popular diversity index in the ecological literature, where it is also known as Shannon's diversity index, Shannon–Wiener index, and (erroneously) Shannon–Weaver index. The measure was originally proposed by Claude Shannon in 1948 to quantify the entropy (hence Shannon entropy, related to Shannon information content) in strings of text. The idea is that the more letters there are, and the closer their proportional abundances in the string of interest, the more difficult it is to correctly predict which letter will be the next one in the string. The Shannon entropy quantifies the uncertainty (entropy or degree of surprise) associated with this prediction. It is most often calculated as follows:

H' = − Σ_{i=1..R} p_i ln p_i

where p_i is the proportion of characters belonging to the i-th type of letter in the string of interest. In ecology, p_i is often the proportion of individuals belonging to the i-th species in the dataset of interest. Then the Shannon entropy quantifies the uncertainty in predicting the species identity of an individual that is taken at random from the dataset.

Although the equation is here written with natural logarithms, the base of the logarithm used when calculating the Shannon entropy can be chosen freely. Shannon himself discussed logarithm bases 2, 10 and e, and these have since become the most popular bases in applications that use the Shannon entropy. Each log base corresponds to a different measurement unit, which has been called binary digits (bits), decimal digits (decits), and natural digits (nats) for the bases 2, 10 and e, respectively. Comparing Shannon entropy values that were originally calculated with different log bases requires converting them to the same log base: change from the base a to base b is obtained with multiplication by log_b(a).
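A short sketch of this base-conversion rule (plain Java, illustrative values; not from any ecology package): the same distribution measured in nats and in bits differs only by the factor log_2(e).

```java
public class ShannonBases {
    // Shannon entropy of proportional abundances p, computed in the given log base.
    static double entropy(double[] p, double logBase) {
        double h = 0.0;
        for (double pi : p) {
            if (pi > 0) h -= pi * Math.log(pi) / Math.log(logBase); // log_base(pi) via change of base
        }
        return h;
    }

    public static void main(String[] args) {
        double[] p = {0.5, 0.3, 0.1, 0.1};
        double nats = entropy(p, Math.E); // ≈ 1.168 nats
        double bits = entropy(p, 2.0);    // ≈ 1.685 bits
        // Converting from base e to base 2 multiplies by log_2(e), as described above.
        System.out.println(bits - nats * (Math.log(Math.E) / Math.log(2.0))); // ≈ 0.0
    }
}
```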
The Shannon index (H') is related to the weighted geometric mean of the proportional abundances of the types. Specifically, it equals the logarithm of true diversity as calculated with q = 1:

H' = − Σ_{i=1..R} p_i ln p_i

This can also be written

H' = ln( 1 / (p_1^p_1 · p_2^p_2 ⋯ p_R^p_R) )

which equals

H' = ln(^1D)

Since the sum of the p_i values equals 1 by definition, the denominator p_1^p_1 · p_2^p_2 ⋯ p_R^p_R equals the weighted geometric mean of the p_i values, with the p_i values themselves being used as the weights (exponents in the equation). The term within the parentheses hence equals true diversity ^1D, and H' equals ln(^1D).

When all types in the dataset of interest are equally common, all p_i values equal 1/R, and the Shannon index hence takes the value ln(R). The more unequal the abundances of the types, the larger the weighted geometric mean of the p_i values, and the smaller the corresponding Shannon entropy. If practically all abundance is concentrated to one type, and the other types are very rare (even if there are many of them), Shannon entropy approaches zero. When there is only one type in the dataset, Shannon entropy exactly equals zero (there is no uncertainty in predicting the type of the next randomly chosen entity).
In machine learning the Shannon index is also known as information gain.
Rényi entropy
The Rényi entropy is a generalization of the Shannon entropy to other values of q than 1. It can be expressed:

^qH = ( 1 / (1 − q) ) · ln( Σ_{i=1..R} p_i^q )

which equals

^qH = ln( 1 / ( Σ_{i=1..R} p_i · p_i^(q−1) )^(1/(q−1)) ) = ln(^qD)

This means that taking the logarithm of true diversity based on any value of q gives the Rényi entropy corresponding to the same value of q.
Simpson index
The Simpson index was introduced in 1949 by Edward H. Simpson to measure the degree of concentration when individuals are classified into types. The same index was rediscovered by Orris C. Herfindahl in 1950. The square root of the index had already been introduced in 1945 by the economist Albert O. Hirschman. As a result, the same measure is usually known as the Simpson index in ecology, and as the Herfindahl index or the Herfindahl–Hirschman index (HHI) in economics.
The measure equals the probability that two entities taken at random from the dataset of interest represent the same type. It equals:

λ = Σ_{i=1..R} p_i^2

where R is richness (the total number of types in the dataset). This equation is also equal to the weighted arithmetic mean of the proportional abundances p_i of the types of interest, with the proportional abundances themselves being used as the weights. Proportional abundances are by definition constrained to values between zero and one, but it is a weighted arithmetic mean, hence λ ≥ 1/R, which is reached when all types are equally abundant.

By comparing the equation used to calculate λ with the equations used to calculate true diversity, it can be seen that 1/λ equals ^2D, i.e., true diversity as calculated with q = 2. The original Simpson's index hence equals the corresponding basic sum.

The interpretation of λ as the probability that two entities taken at random from the dataset of interest represent the same type assumes that the first entity is replaced to the dataset before taking the second entity. If the dataset is very large, sampling without replacement gives approximately the same result, but in small datasets, the difference can be substantial. If the dataset is small, and sampling without replacement is assumed, the probability of obtaining the same type with both random draws is:

λ = Σ_{i=1..R} n_i (n_i − 1) / ( N (N − 1) )

where n_i is the number of entities belonging to the i-th type and N is the total number of entities in the dataset. This form of the Simpson index is also known as the Hunter–Gaston index in microbiology.
Since the mean proportional abundance of the types increases with decreasing number of types and increasing abundance of the most abundant type, λ obtains small values in datasets of high diversity and large values in datasets of low diversity. This is counterintuitive behavior for a diversity index, so often, such transformations of λ that increase with increasing diversity have been used instead. The most popular of such indices have been the inverse Simpson index (1/λ) and the Gini–Simpson index (1 − λ). Both of these have also been called the Simpson index in the ecological literature, so care is needed to avoid accidentally comparing the different indices as if they were the same.
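As an illustration (plain Java, made-up counts), the following sketch computes λ with and without replacement, together with the inverse Simpson and Gini–Simpson transformations discussed below:

```java
public class SimpsonIndex {
    // Simpson's index with replacement: sum of squared proportional abundances.
    static double simpson(int[] counts) {
        double total = 0;
        for (int n : counts) total += n;
        double lambda = 0;
        for (int n : counts) lambda += (n / total) * (n / total);
        return lambda;
    }

    // Sampling without replacement (Hunter–Gaston form): sum of n_i(n_i - 1) over N(N - 1).
    static double simpsonWithoutReplacement(int[] counts) {
        double total = 0;
        for (int n : counts) total += n;
        double lambda = 0;
        for (int n : counts) lambda += n * (n - 1.0);
        return lambda / (total * (total - 1.0));
    }

    public static void main(String[] args) {
        int[] counts = {50, 30, 10, 10};                        // individuals per species
        double lambda = simpson(counts);
        System.out.println(lambda);                             // 0.36
        System.out.println(1.0 / lambda);                       // inverse Simpson ≈ 2.78
        System.out.println(1.0 - lambda);                       // Gini–Simpson = 0.64
        System.out.println(simpsonWithoutReplacement(counts));  // ≈ 0.3535
    }
}
```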
Inverse Simpson index
The inverse Simpson index equals:

1/λ = 1 / Σ_{i=1..R} p_i² = ²D
This simply equals true diversity of order 2, i.e. the effective number of types that is obtained when the weighted arithmetic mean is used to quantify average proportional abundance of types in the dataset of interest.
The index is also used as a measure of the effective number of parties.
Gini–Simpson index
The Gini–Simpson index is also called Gini impurity, or Gini's diversity index, in the field of machine learning. The original Simpson index λ equals the probability that two entities taken at random from the dataset of interest (with replacement) represent the same type. Its transformation 1 − λ, therefore, equals the probability that the two entities represent different types. This measure is also known in ecology as the probability of interspecific encounter (PIE) and the Gini–Simpson index. It can be expressed as a transformation of the true diversity of order 2:

1 − λ = 1 − Σ_{i=1..R} p_i² = 1 − 1/²D
The Gibbs–Martin index of sociology, psychology, and management studies, which is also known as the Blau index, is the same measure as the Gini–Simpson index.
The quantity 1 − λ is also known as the expected heterozygosity in population genetics.
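A minimal sketch of the Gini–Simpson index as it is typically used for decision-tree impurity (the function name is illustrative):

```python
def gini_simpson(counts):
    """Gini-Simpson index 1 - lambda: probability that two random draws are of different types.
    The same quantity is used as Gini impurity when growing decision trees."""
    N = sum(counts)
    return 1 - sum((n / N) ** 2 for n in counts)

print(gini_simpson([50, 50]))          # 0.5   (maximally impure two-class split)
print(gini_simpson([100, 0]))          # 0.0   (pure node / single type)
print(gini_simpson([25, 25, 25, 25]))  # 0.75 = 1 - 1/4
```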
Berger–Parker index
The Berger–Parker index, named after Wolfgang H. Berger and Frances Lawrence Parker, equals the maximum p_i value in the dataset, i.e., the proportional abundance of the most abundant type. This corresponds to the weighted generalized mean of the p_i values when q approaches infinity, and hence equals the inverse of the true diversity of order infinity (1/∞D).
See also
References
Further reading
See chapter 5 for an elaboration of coding procedures described informally above.
External links
Simpson's Diversity index
Diversity indices gives some examples of estimates of Simpson's index for real ecosystems.
Measurement of biodiversity
Index numbers
Summary statistics for categorical data | Diversity index | [
"Mathematics",
"Biology"
] | 2,580 | [
"Measurement of biodiversity",
"Mathematical objects",
"Biodiversity",
"Index numbers",
"Numbers"
] |
3,099,755 | https://en.wikipedia.org/wiki/Ostwald%E2%80%93Freundlich%20equation | The Ostwald–Freundlich equation governs boundaries between two phases; specifically, it relates the surface tension of the boundary to its curvature, the ambient temperature, and the vapor pressure or chemical potential in the two phases.
The Ostwald–Freundlich equation for a droplet or particle with radius r is:

ln(p / p_eq) = 2γV / (k_B T r),  equivalently  p(r) = p_eq · exp(2γV / (k_B T r))

where:
V = atomic volume
k_B = Boltzmann constant
γ = surface tension (J m−2)
p_eq = equilibrium partial pressure (or chemical potential or concentration)
p = partial pressure (or chemical potential or concentration)
T = absolute temperature
r = radius of the droplet or particle
One consequence of this relation is that small liquid droplets (i.e., particles with a high surface curvature) exhibit a higher effective vapor pressure, since the surface is larger in comparison to the volume.
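The size of this effect can be estimated with a short Python sketch; the property values below (surface tension and molecular volume of water near room temperature) are rough illustrative assumptions, not values taken from this article:

```python
import math

# Illustrative estimate of the Ostwald-Freundlich vapor-pressure enhancement for a water droplet.
k_B   = 1.380649e-23   # Boltzmann constant, J/K
gamma = 0.072          # surface tension of water, J/m^2 (assumed)
V     = 3.0e-29        # volume per molecule of liquid water, m^3 (assumed)
T     = 298.0          # temperature, K

def pressure_ratio(r):
    """p / p_eq for a spherical droplet of radius r (in metres)."""
    return math.exp(2 * gamma * V / (k_B * T * r))

for r in (1e-6, 1e-7, 1e-8, 1e-9):
    print(f"r = {r:.0e} m  ->  p/p_eq = {pressure_ratio(r):.3f}")
# The enhancement is negligible for micrometre droplets but becomes large below ~10 nm.
```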
Another notable example of this relation is Ostwald ripening, in which surface tension causes small precipitates to dissolve and larger ones to grow. Ostwald ripening is thought to occur in the formation of orthoclase megacrysts in granites as a consequence of subsolidus growth. See rock microstructure for more.
History
In 1871, Lord Kelvin (William Thomson) obtained the following relation governing a liquid-vapor interface:
where:
= vapor pressure at a curved interface of radius
= vapor pressure at flat interface () =
= surface tension
= density of vapor
= density of liquid
, = radii of curvature along the principal sections of the curved interface.
In his dissertation of 1885, Robert von Helmholtz (son of the German physicist Hermann von Helmholtz) derived the Ostwald–Freundlich equation and showed that Kelvin's equation could be transformed into the Ostwald–Freundlich equation. The German physical chemist Wilhelm Ostwald derived the equation apparently independently in 1900; however, his derivation contained a minor error which the German chemist Herbert Freundlich corrected in 1909.
Derivation from Kelvin's equation
According to Lord Kelvin's equation of 1871,
If the particle is assumed to be spherical, then ; hence,
Note: Kelvin defined the surface tension as the work that was performed per unit area by the interface rather than on the interface; hence his term containing γ has a minus sign. In what follows, the surface tension will be defined so that the term containing γ has a plus sign.
Since , then ; hence,
Assuming that the vapor obeys the ideal gas law, then
where:
= mass of a volume of vapor
= molecular weight of vapor
= number of moles of vapor in volume of vapor
= Avogadro constant
= ideal gas constant =
Since is the mass of one molecule of vapor or liquid, then
volume of one molecule .
Hence
where .
Thus
Since
then
Since , then . If , then . Hence
Therefore
which is the Ostwald–Freundlich equation.
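Since the intermediate formulas above did not survive formatting, the end result of this chain can be summarized compactly. The following LaTeX sketch assumes an ideal vapor, a spherical particle of radius r, and a molecular volume V = M/(ρ N_A), where ρ is the density of the liquid (or solid) phase:

```latex
\ln\frac{p}{p_{\mathrm{eq}}}
  = \frac{2\gamma V}{k_{\mathrm{B}} T\, r}
  = \frac{2\gamma M}{\rho\, R\, T\, r},
\qquad\text{using } V = \frac{M}{\rho N_{\mathrm{A}}}
\ \text{and}\ R = N_{\mathrm{A}} k_{\mathrm{B}} .
```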
See also
Köhler theory
Kelvin equation
References
Thermodynamic equations
Petrology
Surface science | Ostwald–Freundlich equation | [
"Physics",
"Chemistry",
"Materials_science"
] | 587 | [
"Thermodynamic equations",
"Equations of physics",
"Surface science",
"Condensed matter physics",
"Thermodynamics"
] |
3,099,871 | https://en.wikipedia.org/wiki/Intelsat%20708 | Intelsat 708 was a telecommunications satellite built by the American company Space Systems/Loral for Intelsat. It was destroyed on 15 February 1996 when the Long March 3B launch vehicle failed while being launched from the Xichang Satellite Launch Center in China. The launch vehicle veered off course immediately after liftoff and struck a nearby village, killing at least six people.
The accident investigation identified a failure in the guidance system of the Long March 3B. After the Intelsat 708 accident, the Long March rockets did not experience another mission failure until 2011. However, the participation of American companies in the Intelsat 708 and Apstar 2 investigations caused political controversy in the United States. A U.S. government investigation found that the information in the report had been illegally transferred to China. Satellite technology was subsequently reclassified as a munition and placed under ITAR restrictions, blocking its export to China. In 2002, Space Systems/Loral paid to settle charges of violating export controls.
Launch failure
In 1992 and 1993, Space Systems/Loral received licenses from the United States Department of State to launch Intelsat satellites on Chinese rockets. At that time, satellite components were still under International Traffic in Arms Regulations (ITAR); they would be transferred in stages to the U.S. Department of Commerce between 1992 and 1996. The Intelsat 708 satellite was to be launched into geostationary orbit aboard a Long March 3B launch vehicle.
On 15 February 1996, the Long March 3B launch vehicle failed during launch, veering off course immediately after liftoff and crashing into a village near the launch site (probably Mayelin Village). An enormous explosion destroyed most of the rocket and killed an unknown number of inhabitants.
The nature and extent of the damage remain a subject of dispute. The Chinese government, through its official Xinhua news agency, reported that six people were killed and 57 injured. Western media speculated that between a few dozen and 500 people might have been killed in the crash; "dozens, if not hundreds" of people were seen to gather outside the centre's main gate near the crash site the night before launch. When reporters were being taken away from the site, they found that most buildings had sustained serious damage or had been flattened completely. Some eyewitnesses were noted as having seen dozens of ambulances and many flatbed trucks, loaded with what could have been human remains, being taken to the local hospital.
Bruce Campbell of Astrotech and other American eyewitnesses in Xichang reported that the satellite was surprisingly intact after the crash, and expressed the opinion that the official death toll only reflects military personnel caught in the disaster and not the civilian population. In the years that followed, the village that used to border the launch center vanished, with little trace that it ever existed. However, Chen Lan, writing in The Space Review, later said the total population of the village was under 1000, and that most if not all of the population had been evacuated before launch, as had been common practice since the 1980s, making it "very unlikely" that there were hundreds of deaths.
Investigation
After the launch failure, the Chinese investigation found that the inertial measurement unit had failed. However, the satellite insurance companies insisted on an Independent Review Committee (IRC) as a condition of providing insurance for future Chinese satellite launches. Loral, Hughes, and other U.S. aerospace companies participated in the Review Committee, which issued a report in May 1996 that identified a different cause of the failure in the inertial measurement unit. The Chinese report was then changed to match the findings of the Review Committee. The Long March rocket family did not experience another mission failure until August 2011.
In 1997, the U.S. Defense Technology Security Administration found that China had obtained "significant benefit" from the Review Committee and could improve their "launch vehicles ... ballistic missiles and in particular their guidance systems". In 1998, the U.S. Congress reclassified satellite technology as a munition that was subject to ITAR, returning export control from the Commerce Department to the State Department. In 2002, Loral paid in fines and compliance expenses to settle allegations of violating export control regulations.
No export licenses to China have been issued since 1996, and an official at the Bureau of Industry and Security emphasized in 2016 that "no U.S.-origin content, regardless of significance, regardless of whether it's incorporated into a foreign-made item, can go to China".
Intelsat 708 contained sophisticated communications and encryption technology. Members of the Loral security team searched the toxic environment around the crash site to recover sensitive components, returning with complaints of bulging eyes and severe headaches requiring oxygen therapy. They were initially reported by the U.S. Department of Defense monitor to have succeeded in recovering "the [satellite's] encryption-decryption equipment". The most sensitive FAC-3R circuit boards were not recovered, but "were mounted near the hydrazine propellant tanks and most likely were destroyed in the explosion... Because the FAC-3R boards on Intelsat 708 were uniquely keyed, the National Security Agency (NSA) remains convinced that there is no risk to other satellite systems, now or in the future, resulting from having not recovering the FAC-3R boards from the PRC".
See also
Nedelin disaster – a launch catastrophe at the Baikonur test range in the Soviet Union.
Proton-M/DM-03 8K82KM/11S861-03 – a Proton launch vehicle that went out of control and flew horizontally before crashing.
References
(Congressional report discussing Intelsat 708 launch failure and possible technology transfer)
(Documents on Intelsat 708 and export controls, including State Department letter charging two companies with export law violations)
(Article on the crash of a rocket carrying a commercial payload on 15 February 1996)
(Chinese government report disputing conclusions of U.S. Congressional report)
External links
Raw footage of the disaster
Extra footage of the disaster (in YouTube)
Video of the launch, impact, and view of the resulting explosion (in YouTube)
Satellite launch failures
Space program fatalities
Spacecraft launched in 1996
Intelsat satellites
1996 in China
Satellites using the SSL 1300 bus
Space missions that ended in failure | Intelsat 708 | [
"Engineering"
] | 1,278 | [
"Space program fatalities",
"Space programs"
] |
3,099,896 | https://en.wikipedia.org/wiki/Gonality%20of%20an%20algebraic%20curve | In mathematics, the gonality of an algebraic curve C is defined as the lowest degree of a nonconstant rational map from C to the projective line. In more algebraic terms, if C is defined over the field K and K(C) denotes the function field of C, then the gonality is the minimum value taken by the degrees of field extensions
K(C)/K(f)
of the function field over its subfields generated by single functions f.
If K is algebraically closed, then the gonality is 1 precisely for curves of genus 0. The gonality is 2 for curves of genus 1 (elliptic curves) and for hyperelliptic curves (this includes all curves of genus 2). For genus g ≥ 3 it is no longer the case that the genus determines the gonality. The gonality of the generic curve of genus g is the floor function of
(g + 3)/2.
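A trivial Python sketch of this formula for small genus, consistent with the special cases listed above:

```python
def generic_gonality(g):
    """Gonality of a generic curve of genus g: floor((g + 3) / 2)."""
    return (g + 3) // 2

print([(g, generic_gonality(g)) for g in range(8)])
# [(0, 1), (1, 2), (2, 2), (3, 3), (4, 3), (5, 4), (6, 4), (7, 5)]
```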
Trigonal curves are those with gonality 3, and this case gave rise to the name in general. Trigonal curves include the Picard curves, of genus three and given by an equation
y³ = Q(x)
where Q is of degree 4.
The gonality conjecture, of M. Green and R. Lazarsfeld, predicts that the gonality of the algebraic curve C can be calculated by homological algebra means, from a minimal resolution of an invertible sheaf of high degree. In many cases the gonality is two more than the Clifford index. The Green–Lazarsfeld conjecture is an exact formula in terms of the graded Betti numbers for a degree d embedding in r dimensions, for d large with respect to the genus. Writing b(C), with respect to a given such embedding of C and the minimal free resolution for its homogeneous coordinate ring, for the minimum index i for which β_{i, i+1} is zero, then the conjectured formula for the gonality is
r + 1 − b(C).
According to the 1900 ICM talk of Federico Amodeo, the notion (but not the terminology) originated in Section V of Riemann's Theory of Abelian Functions. Amodeo used the term "gonalità" as early as 1893.
References
Geometric introduction to trigonal curves of genus five
Code for constructing examples of special trigonal curves on GitHub, written in Macaulay2
Algebraic curves
Homological algebra | Gonality of an algebraic curve | [
"Mathematics"
] | 507 | [
"Fields of abstract algebra",
"Mathematical structures",
"Category theory",
"Homological algebra"
] |
3,099,929 | https://en.wikipedia.org/wiki/Hydrogen%20fluoride%20laser | The hydrogen fluoride laser is an infrared chemical laser. It is capable of delivering continuous output power in the megawatt range.
Hydrogen fluoride lasers operate at the wavelength of 2.7–2.9 μm. This wavelength is absorbed by the atmosphere, effectively attenuating the beam and reducing its reach, unless used in a vacuum environment. However, when deuterium is used instead of hydrogen, the deuterium fluoride lases at the wavelength of about 3.8 μm. This makes the deuterium fluoride laser usable for terrestrial operations.
Deuterium fluoride laser
The deuterium fluoride laser structurally resembles a rocket engine. In the combustion chamber, ethylene is burned in nitrogen trifluoride. This reaction produces free excited fluorine radicals. Just after the nozzle, a mixture of helium and hydrogen or deuterium gas is injected into the exhaust stream; the hydrogen or deuterium reacts with the fluorine radicals, producing excited molecules of deuterium fluoride or hydrogen fluoride. The excited molecules then undergo stimulated emission in the optical resonator region of the laser.
Deuterium fluoride lasers have found military applications: the MIRACL laser, the Pulsed energy projectile anti-personnel weapon, and the Tactical High Energy Laser are of the deuterium fluoride type.
Fusion
An Argentine-American physicist and accused spy, Leonardo Mascheroni, has proposed the idea of using hydrogen fluoride lasers to produce nuclear fusion.
References
Chemical lasers | Hydrogen fluoride laser | [
"Chemistry"
] | 320 | [
"Chemical reaction engineering",
"Chemical lasers"
] |
3,100,090 | https://en.wikipedia.org/wiki/Chemical%20laser | A chemical laser is a laser that obtains its energy from a chemical reaction. Chemical lasers can reach continuous wave output with power reaching to megawatt levels. They are used in industry for cutting and drilling.
Common examples of chemical lasers are the chemical oxygen iodine laser (COIL), all gas-phase iodine laser (AGIL), and the hydrogen fluoride (HF) and deuterium fluoride (DF) lasers, all operating in the mid-infrared region. There is also a DF–CO2 laser (deuterium fluoride–carbon dioxide), which, like COIL, is a "transfer laser." The HF and DF lasers are unusual, in that there are several molecular energy transitions with sufficient energy to cross the threshold required for lasing. Since the molecules do not collide frequently enough to re-distribute the energy, several of these laser modes operate either simultaneously, or in extremely rapid succession, so that an HF or DF laser appears to operate simultaneously on several wavelengths unless a wavelength selection device is incorporated into the resonator.
Origin of the CW chemical HF/DF laser
The possibility of the creation of infrared lasers based on the vibrationally excited products of a chemical reaction was first proposed by John Polanyi in 1961. A pulsed chemical laser was demonstrated by Jerome V. V. Kasper and George C. Pimentel in 1965. First, chlorine (Cl2) was photodissociated into atoms, which then reacted with hydrogen, yielding hydrogen chloride (HCl) in an excited state suitable for a laser. Then hydrogen fluoride (HF) and deuterium fluoride (DF) were demonstrated. Pimentel went on to explore a DF-CO2 transfer laser. Although this work did not produce a purely chemical continuous wave laser, it paved the way by showing the viability of the chemical reaction as a pumping mechanism for a chemical laser.
The continuous wave (CW) chemical HF laser was first demonstrated in 1969, and patented in 1972, by D. J. Spencer, T. A. Jacobs, H. Mirels and R. W. F. Gross at The Aerospace Corporation in El Segundo, California. This device used the mixing of adjacent streams of H2 and F, within an optical cavity, to create vibrationally-excited HF that lased. The atomic fluorine was provided by dissociation of SF6 gas using a DC electrical discharge. Later work at US Army, US Air Force, and US Navy contractor organizations (e.g. TRW) used a chemical reaction to provide the atomic fluorine, a concept included in the patent disclosure of Spencer et al. The latter configuration obviated the need for electrical power and led to the development of high-power lasers for military applications.
The analysis of the HF laser performance is complicated due to the need to simultaneously consider the fluid dynamic mixing of adjacent supersonic streams, multiple non-equilibrium chemical reactions and the interaction of the gain medium with the optical cavity. The researchers at The Aerospace Corporation developed the first exact analytic (flame sheet) solution, the first numerical computer code solution and the first simplified model describing CW HF chemical laser performance.
Chemical lasers stimulated the use of wave-optics calculations for resonator analysis. This work was pioneered by E. A. Sziklas (Pratt & Whitney) and A. E. Siegman (Stanford University). Part I of their work dealt with Hermite-Gaussian Expansion and has received little use compared with Part II, which dealt with the Fast Fourier transform method, which is now a standard tool at United Technologies Corporation, Lockheed Martin, SAIC, Boeing, tOSC, MZA (Wave Train), and OPCI. Most of these companies competed for contracts to build HF and DF lasers for DARPA, the US Air Force, the US Army, or the US Navy throughout the 1970s and 1980s. General Electric and Pratt & Whitney dropped out of the competition in the early 1980s leaving the field to Rocketdyne (now part of Pratt & Whitney - although the laser organization remains today with Boeing) and TRW (now part of Northrop Grumman).
Comprehensive chemical laser models were developed at SAIC by R. C. Wade, at TRW by C.-C. Shih, by D. Bullock and M. E. Lainhart, and at Rocketdyne by D. A. Holmes and T. R. Waite. Of these, perhaps the most sophisticated was the CROQ code at TRW, outpacing the early work at Aerospace Corporation.
Performance
The early analytical models coupled with chemical rate studies led to the design of efficient experimental CW HF laser devices at United Aircraft, and The Aerospace Corporation. Power levels up to 10 kW were achieved. DF lasing was obtained by the substitution of D2 for H2. A group at United Aircraft Research Laboratories produced a re-circulating chemical laser, which did not rely on the continuous consumption of chemical reactants.
The TRW Systems Group in Redondo Beach, California, subsequently received US Air Force contracts to build higher power CW HF/DF lasers. Using a scaled-up version of an Aerospace Corporation design, TRW achieved 100 kW power levels. General Electric, Pratt & Whitney, & Rocketdyne built various chemical lasers on company funds in anticipation of receiving DoD contracts to build even larger lasers. Only Rocketdyne received contracts of sufficient value to continue competing with TRW. TRW produced the MIRACL device for the U.S. Navy that achieved megawatt power levels. The latter is believed to be the highest power continuous laser, of any type, developed to date (2007).
TRW also produced a cylindrical chemical laser (the Alpha laser) for DARPA Zenith Star, which had the theoretical advantage of being scalable to even larger powers. However, by 1990, the interest in chemical lasers had shifted toward shorter wavelengths, and the chemical oxygen iodine laser (COIL) gained the most interest, producing radiation at 1.315 μm. There is a further advantage that the COIL laser generally produces single wavelength radiation, which is very helpful for forming a very well focused beam. This type of COIL laser is used today in the ABL (Airborne Laser, the laser itself being built by Northrop Grumman) and in the ATL (Advanced Tactical Laser) produced by Boeing. Meanwhile, a lower power HF laser was used for the THEL (Tactical High Energy Laser) built in the late 1990s for the Israeli Ministry of Defense in cooperation with the U.S. Army SMDC. It is the first fielded high energy laser to demonstrate effectiveness in fairly realistic tests against rockets and artillery. The MIRACL laser has demonstrated effectiveness against certain targets flown in front of it at White Sands Missile Range, but it is not configured for actual service as a fielded weapon. ABL was successful in shooting down several full sized missiles from significant ranges, and ATL was successful in disabling moving land vehicles and other tactical targets.
Despite the performance advantages of chemical lasers, the Department of Defense stopped all development of chemical laser systems with the termination of the Airborne Laser Testbed in 2012. The desire for a "renewable" power source, i.e. not having to supply unusual chemicals like fluorine, deuterium, basic hydrogen-peroxide, or iodine, led the DoD to push for electrically pumped lasers such as diode pumped alkali lasers (DPALS). An "Inside the Army" weekly report mentions "Directed Energy Master Plan"
References
American inventions | Chemical laser | [
"Chemistry"
] | 1,572 | [
"Chemical reaction engineering",
"Chemical lasers"
] |
3,100,105 | https://en.wikipedia.org/wiki/K%C3%B6the%20conjecture | In mathematics, the Köthe conjecture is a problem in ring theory, open . It is formulated in various ways. Suppose that R is a ring. One way to state the conjecture is that if R has no nil ideal, other than {0}, then it has no nil one-sided ideal, other than {0}.
This question was posed in 1930 by Gottfried Köthe (1905–1989). The Köthe conjecture has been shown to be true for various classes of rings, such as polynomial identity rings and right Noetherian rings, but a general solution remains elusive.
Equivalent formulations
The conjecture has several different formulations:
(Köthe conjecture) In any ring, the sum of two nil left ideals is nil.
In any ring, the sum of two one-sided nil ideals is nil.
In any ring, every nil left or right ideal of the ring is contained in the upper nil radical of the ring.
For any ring R and for any nil ideal J of R, the matrix ideal Mn(J) is a nil ideal of Mn(R) for every n.
For any ring R and for any nil ideal J of R, the matrix ideal M2(J) is a nil ideal of M2(R).
For any ring R, the upper nilradical of Mn(R) is the set of matrices with entries from the upper nilradical of R for every positive integer n.
For any ring R and for any nil ideal J of R, the polynomials with indeterminate x and coefficients from J lie in the Jacobson radical of the polynomial ring R[x].
For any ring R, the Jacobson radical of R[x] consists of the polynomials with coefficients from the upper nilradical of R.
Related problems
A conjecture by Amitsur read: "If J is a nil ideal in R, then J[x] is a nil ideal of the polynomial ring R[x]." This conjecture, if true, would have proven the Köthe conjecture through the equivalent statements above; however, a counterexample was produced by Agata Smoktunowicz. While not a disproof of the Köthe conjecture, this fueled suspicions that the Köthe conjecture may be false.
Kegel proved that a ring which is the direct sum of two nilpotent subrings is itself nilpotent. The question arose whether or not "nilpotent" could be replaced with "locally nilpotent" or "nil". Partial progress was made when Kelarev produced an example of a ring which isn't nil, but is the direct sum of two locally nilpotent rings. This demonstrates that Kegel's question with "locally nilpotent" replacing "nilpotent" is answered in the negative.
The sum of a nilpotent subring and a nil subring is always nil.
References
External links
PlanetMath page
Survey paper (PDF)
Ring theory
Conjectures
Unsolved problems in mathematics | Köthe conjecture | [
"Mathematics"
] | 638 | [
"Unsolved problems in mathematics",
"Ring theory",
"Fields of abstract algebra",
"Conjectures",
"Mathematical problems"
] |
5,632,035 | https://en.wikipedia.org/wiki/Congius | In Ancient Roman measurement, congius (pl. congii, from Greek konkhion, diminutive of konkhē, konkhos, "shellful") was a liquid measure that was about 3.48 litres (0.92 U.S. gallons). It was equal to the larger chous of the Ancient Greeks. The congius contained six sextarii.
Cato tells us that he was wont to give each of his slaves a congius of wine at the Saturnalia and Compitalia. Pliny relates, among other examples of hard drinking, that a Novellius Torquatus of Mediolanum obtained a cognomen (Tricongius, a nine-bottle-man) by drinking three congii (approximately 14 modern 75 cl bottles or roughly 2.7 gallons in total) of wine at once.
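A quick arithmetic check of the figures quoted above (the litre value is the approximation already given; the 75 cl bottle and US gallon are modern comparison units):

```python
CONGIUS_L   = 3.48      # litres, approximate value given above
SEXTARIUS_L = CONGIUS_L / 6
BOTTLE_L    = 0.75      # a modern 75 cl wine bottle
US_GALLON_L = 3.785411784

three_congii = 3 * CONGIUS_L
print(three_congii)                    # ~10.44 L
print(three_congii / BOTTLE_L)         # ~13.9 bottles, i.e. roughly 14
print(three_congii / US_GALLON_L)      # ~2.76 US gallons, i.e. roughly 2.7
```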
The Roman system of weights and measures, including the congius, was introduced to Britain in the 1st century by Emperor Claudius. Following the Anglo-Saxon invasions of the 4th and 5th century, Roman units were, for the most part, replaced with North German units. Following the conversion of England to Christianity in the 7th century, Latin became the language of state. From this time on the word "congius" is simply the Latin word for gallon. Thus we find the word congius mentioned in a charter of Edmund I in 946.
In Apothecary Measures, the Latin Congius (abbreviation c.) is used for the Queen Anne gallon of 231 cubic inches, also known as the US gallon.
Congius of Vespasian
William Smith in his book A dictionary of Greek and Roman antiquities says:
There is a congius in existence called the congius of Vespasian or the Farnese congius, bearing an inscription, which states that it was made in the year 75 A.D., according to the standard measure in the capitol, and that it contained, by weight, ten pounds. (Imp. Caes. vi. T. Caes. Aug. F. iiii. Cos. Mensurae exactae in Capitolio, P. x.; see also Festus, Publica Pondera.) By means of this congius the weight of the Roman pound has been ascertained. This congius holds, according to an experiment made by Dr. Hase, in 1824, 52037.692 grains of distilled water.
In 1866, an article entitled On a Congius appeared in the Journal of the British Archaeological Association casting doubt on the authenticity of the Farnese congius. A 1926 article in the journal Ancient Weights and Measures notes that "there is no true patina upon it" and that apparent red oxide is drops of shellac.
The 2002 book Aqueduct hunting in the seventeenth century: Raffaello Fabretti's De aquis et aquaeductibus veteris Romae by Harry B. Evans reports that the original congius of Farnese has been lost and that the extant copies are considered spurious.
On the other hand, according to the 1883 edition of A complete handbook to the National museum in Naples item number 74599 bears the following description:
74599. Measure for liquids,-- the congius spoken of by Pliny. A long-necked vase without handle, bearing the inscription IMP. CAESARE VESPAS. VI. T. CAES. AUG. F. IIII COS. MENSURAE EXACTAE IN CAPITOLIO P. X. -- "measure of the weight of ten pounds gauged at the Capitol in the sixth consulate of the Emperor Caesar Vespasian and the fourth of his son Titus Augustus Caesar" (Borgia.)
See also
Ancient Roman units of measurement
Notes
References
Units of volume
Society of ancient Rome
Ancient Roman units of measurement | Congius | [
"Mathematics"
] | 782 | [
"Units of volume",
"Quantity",
"Units of measurement"
] |
5,632,274 | https://en.wikipedia.org/wiki/Power%20automorphism | In mathematics, in the realm of group theory, a power automorphism of a group is an automorphism that takes each subgroup of the group to within itself. The power automorphism of an infinite group may not restrict to an automorphism on each subgroup. For instance, the automorphism on rational numbers that sends each number to its double is a power automorphism even though it does not restrict to an automorphism on each subgroup.
Alternatively, power automorphisms are characterized as automorphisms that send each element of the group to some power of that element. This explains the choice of the term power. The power automorphisms of a group form a subgroup of the whole automorphism group; following Schmidt's notation, this subgroup may be denoted Pot(G), where G is the group.
A universal power automorphism is a power automorphism where the power to which each element is raised is the same. For instance, each element may go to its cube. Here are some facts about the powering index:
The powering index must be relatively prime to the order of each element. In particular, it must be relatively prime to the order of the group, if the group is finite.
If the group is abelian, any powering index works.
If the powering index 2 or -1 works, then the group is abelian.
The group of power automorphisms commutes with the group of inner automorphisms when viewed as subgroups of the automorphism group. Thus, in particular, power automorphisms that are also inner must arise as conjugations by elements in the second group of the upper central series.
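For abelian groups, the facts about the powering index are easy to verify computationally. The following is a minimal Python sketch for the cyclic group Z/nZ (written additively), where the power map x ↦ kx is an automorphism exactly when gcd(k, n) = 1; the function name is illustrative:

```python
from math import gcd

def is_universal_power_automorphism(n, k):
    """In Z/nZ (written additively), x -> k*x is an automorphism exactly when
    gcd(k, n) = 1; any such map sends every subgroup into itself."""
    image = {(k * x) % n for x in range(n)}
    return image == set(range(n))

n = 12
for k in range(1, n):
    assert is_universal_power_automorphism(n, k) == (gcd(k, n) == 1)
print("powering indices coprime to", n, ":", [k for k in range(1, n) if gcd(k, n) == 1])
```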
References
Subgroup lattices of groups by Roland Schmidt (PDF file)
Group theory
Group automorphisms | Power automorphism | [
"Mathematics"
] | 342 | [
"Functions and mappings",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Mathematical relations",
"Group automorphisms"
] |
5,632,475 | https://en.wikipedia.org/wiki/IA%20automorphism | In mathematics, in the realm of group theory, an IA automorphism of a group is an automorphism that acts as identity on the abelianization. The abelianization of a group is its quotient by its commutator subgroup. An IA automorphism is thus an automorphism that sends each coset of the commutator subgroup to itself.
The IA automorphisms of a group form a normal subgroup of the automorphism group. Every inner automorphism is an IA automorphism.
See also
Torelli group
References
Group theory
Group automorphisms | IA automorphism | [
"Mathematics"
] | 115 | [
"Functions and mappings",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Mathematical relations",
"Group automorphisms"
] |
5,632,598 | https://en.wikipedia.org/wiki/Quotientable%20automorphism | In mathematics, in the realm of group theory, a quotientable automorphism of a group is an automorphism that takes every normal subgroup to within itself. As a result, it gives a corresponding automorphism for every quotient group.
All family automorphisms are quotientable; in particular, all class automorphisms and power automorphisms are. All inner automorphisms are also quotientable, and more generally, any automorphism defined by an algebraic formula is quotientable.
Group automorphisms | Quotientable automorphism | [
"Mathematics"
] | 114 | [
"Mathematical objects",
"Functions and mappings",
"Mathematical relations",
"Group automorphisms"
] |
5,632,720 | https://en.wikipedia.org/wiki/Class%20automorphism | In mathematics, in the realm of group theory, a class automorphism is an automorphism of a group that sends each element to within its conjugacy class. The class automorphisms form a subgroup of the automorphism group. Some facts:
Every inner automorphism is a class automorphism.
Every class automorphism is a family automorphism and a quotientable automorphism.
Under a quotient map, class automorphisms go to class automorphisms.
Every class automorphism is an IA automorphism, that is, it acts as identity on the abelianization.
Every class automorphism is a center-fixing automorphism, that is, it fixes all points in the center.
Normal subgroups are characterized as subgroups invariant under class automorphisms.
For infinite groups, an example of a class automorphism that is not inner is the following: take the finitary symmetric group on countably many elements and consider conjugation by an infinitary permutation. This conjugation defines an outer automorphism on the group of finitary permutations. However, for any specific finitary permutation, we can find a finitary permutation whose conjugation has the same effect on it as this infinitary permutation. This is essentially because the infinitary permutation takes permutations of finite support to permutations of finite support.
For finite groups, the classical example is a group of order 32 obtained as the semidirect product of the cyclic ring on 8 elements, by its group of units acting via multiplication. Finding a class automorphism in the stability group that is not inner boils down to finding a cocycle for the action that is locally a coboundary but is not a global coboundary.
References
Group theory
Group automorphisms | Class automorphism | [
"Mathematics"
] | 379 | [
"Functions and mappings",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Mathematical relations",
"Group automorphisms"
] |
5,632,946 | https://en.wikipedia.org/wiki/Shift%20work%20sleep%20disorder | Shift work sleep disorder (SWSD) is a circadian rhythm sleep disorder characterized by insomnia, excessive sleepiness, or both affecting people whose work hours overlap with the typical sleep period. Insomnia can be the difficulty to fall asleep or to wake up before the individual has slept enough. About 20% of the working population participates in shift work. SWSD commonly goes undiagnosed, and it is estimated that 10–40% of shift workers have SWSD. The excessive sleepiness appears when the individual has to be productive, awake and alert. Both symptoms are predominant in SWSD. There are numerous shift work schedules, and they may be permanent, intermittent, or rotating; consequently, the manifestations of SWSD are quite variable. Most people with different schedules than the ordinary one (from 8 AM to 6 PM) might have these symptoms but the difference is that SWSD is continual, long-term, and starts to interfere with the individual's life.
Health effects
There have been many studies suggesting health risks associated with shift work. Many studies have associated sleep disorders with decreased bone mineral density (BMD) and risk for fracture. Researchers have found that those who work long-term in night positions, like nurses, are at greater risk for wrist and hip fractures (RR=1.37). Reduced fertility and complications during pregnancy are also more common in shift workers. Obesity, diabetes, insulin resistance, elevated body fat levels and dyslipidemias were shown to be much higher in those who work night shifts. SWSD can increase the risk of mental disorders: depression, anxiety, and alcohol use disorder are all more frequent in shift workers. Because the circadian system regulates the levels of chemical substances in the body, several consequences are possible when it is impaired. Acute sleep loss has been shown to increase the levels of t-tau in blood plasma, which may explain the neurocognitive effects of sleep loss.
Sleep quality
Sleep loss and decreased quality of sleep are further effects of shift work. To promote a healthy lifestyle, the American Academy of Sleep Medicine recommends that an adult have 7 or more hours of sleep per day. Each year, an estimated 100,000 deaths in the U.S. are attributed to medical errors, and sleep deprivation and sleep disorders are factors that contribute significantly to these errors. In the same article, the authors affirm that there is a high prevalence of sleepiness and symptoms of sleep disorders related to the circadian system in medical center nurses. In a study done with around 1100 nurses, almost half of them (49%) reported sleeping less than 7 hours per day, a significant increase compared to national figures, in which 28% of people claimed to sleep less than 7 hours per night. Lack of sleep can impact cognitive performance. For example, it might become difficult to stay focused and concentrate, and reaction times might also be slowed down. SWSD might interfere with making decisions quickly, driving, or flying safely. The sleep loss seen in shift workers greatly impairs cognitive performance: being awake for 24 hours straight results in cognitive performance equivalent to a blood-alcohol content of 0.10%, which is over the legal limit in most states. All of these factors can affect work efficiency and cause accidents. Michael Lee et al. demonstrated that those working night shifts had a significantly higher risk of hazardous driving events when compared to those on a typical day shift schedule. Workplace accidents have been found to be 60% more frequent in shift workers. These effects can also harm the individual's social life and sense of well-being. Poor sleep quality has also been associated with decreased quality of life, based on an SF-36 assessment.
Sleep and alertness
Although SWD affects many shift workers, its manifestation is still unclear within the general shift-working population. A field study investigating the nature of SWD in an experimental (with SWD) and control (non-SWD) group of Finnish shift workers revealed decreased total sleep time (TST) and increased sleep deficit before morning shifts. Furthermore, the SWD group also exhibited decreased objective sleep efficiency, decrease in sleep compensation over the free days, increased sleep latency, and finally poorer sleep quality was recorded in the SWD group compared to the non-SWD group. Moreover, shift workers with SWD scored significantly higher on the Karolinska Sleepiness Scale (KSS) when assessed at the beginning and the end of morning shifts and at the end of night shifts, while having more attentional lapses at the beginning of night shifts.
Many studies have shown evidence of how partial and total sleep deprivation affects work productivity, absenteeism, fatal workplace accidents, and more. In a study by Akkerstedt et al., those who had a hard time sleeping in the past two weeks were at a greater risk for having a fatal workplace accident (RR=1.89, 95% CI 1.22–2.94). Other sleep disorders, like OSA which are risk factors for SWSD, have also been associated with low productivity, absenteeism, and accidents. At a cognitive level, sleep deprivation has been shown to cause decreased attentiveness, increased micro-sleeps, delayed psychomotor response, performance deterioration, neglect of activities, decline in working memory, and more.
Immune functioning
Partial and total sleep deprivation has been linked to an increase of pro-inflammatory markers, such as IL-6, and a decrease in anti-inflammatory markers, such as IL-10, that plays a role in tumor suppression. Chronic shift work has been associated with decreased immune function in nurses. In a study by Naigi, et al., over the course of a shift, nurses exhibited decreasing levels of Natural Killer cells, an innate immune response that plays a role in infectious disease and tumor suppression. Other researchers have found that less sleep at night increased the risk of developing the common cold. A supporting study by Moher et al. showed that shift workers were more likely to develop infectious diseases after exposure compared to daytime workers. A poorly functioning immune system may leave workers vulnerable for developing occupational illnesses. Sleep loss is also associated with an increase in TNF, a marker of systemic immune functioning.
Cardiovascular disease
Decreased sleep quality and duration have also been associated with other chronic illnesses, such as cardiovascular disease. Many studies have shown that prolonged sleeplessness and sleep disorders, such as OSA, increases systemic levels of CRP, a marker of cardiovascular disease. Many studies have shown that lack of sleep causes blood pressure to increase from the prolonged stimulation of the nervous system. The increase of inflammatory markers, like IL-6, up-regulate the production of CRP.
SWSD in firefighters
SWSD can affect many occupations, but firefighters and Emergency Medical Technicians are at a greater risk because of their extended (24hr) shift and frequent sleep interruptions due to emergencies. Many firefighters have sleep disorders as a result of their extended shift and frequently disrupted sleep. In a study on firefighters by Barger, et al., over a third of study participants screened positive for a sleep disorder, but most had not received a previous medical diagnosis for any sleep disorders. Those with sleep disorders were also at a higher risk for being in a motor vehicle crash (OR=2.0 95% CI 1.29–3.12, p=0.0021), near crash (OR=2.49 95% CI 2.13–2.91, p < 0.0001), and nodding off while driving (OR=2.41 95% CI 2.06–2.82, p < 0.0001).
Symptoms
Excessive sleepiness
Difficulty sleeping
Difficulty concentrating
Headaches
Lack of energy
Cause and prevalence
Insomnia and wake-time sleepiness are related to misalignment between the timing of a non-standard wake–sleep schedule and the endogenous circadian propensity for sleep and wake. In addition to circadian misalignment, attempted sleep at unusual times can be interrupted by noise, social obligations, and other factors. There is an inevitable degree of sleep deprivation associated with sudden transitions in sleep schedule.
The prevalence of SWSD is unclear because it is not often formally diagnosed and its definitions vary in scientific literature. However, SWSD is estimated to affect 2–10% of general population and about 27% of night and rotating workers. The use of the third edition of the International Classification of Sleep Disorders (ICSD-3) criteria has decreased the prevalence estimates of SWSD compared to the old ICSD-2 criteria after 2014.
There are various risk factors, including age. Although SWSD can appear at any age, the highest prevalence is in the 50 years old and above age bracket and even more so in cases of irregular schedules. Gender is also a factor. It may be that female night workers sleep less than their male counterparts. A possible explanation is the social obligations that can increase their vulnerability to SWSD. Female night workers also seem to be more sleepy at work.
Some people are more affected by shift work and sleeplessness than others, and some will be impaired on some tasks while others will always perform well on the same tasks. Some people have a morning preference but others not. Genetic predisposition is an important predictor of which people are vulnerable to SWSD.
Medical field
Cognitive impact
Shift work sleep disorder affects many individuals, especially those within the medical field. Research published in the Journal of Sleep Research examined differences in cognitive function between sleep-deprived and well-rested nurses using autobiographical memory tasks. The participants underwent the autobiographical memory test, as well as anxiety and depression inventories. The researchers found that the sleep-deprived group scored significantly higher on the depression inventory and remembered more negative than positive memories. The sleep-deprived group also scored significantly lower than the well-rested group in autobiographical memory and specific memories. This study is similar to one available through the National Center for Biotechnology Information, which found that the hypothesized cognitive impact of sleep deprivation on nurses was strongly supported, with 69% of shift workers affected. The impairment in cognitive performance, such as general intellect, reaction time, and memory, was statistically significant among the staff nurses due to poor sleep quality and decreased alertness while awake.
Patient care
Shift work sleep disorder affects patient care within all aspects of the medical field. Research published in European Review for Medical and Pharmacological Sciences analyzed the correlation between the clinical risk management and the occurrence of medication errors and the effects of the shift work on inpatient nurses. The researchers reviewed 19 out of 217 research articles and focused on the impact of workload, shifts and sleep deprivation on the probability of making medication errors. They found that the main reason behind medication errors are stress, fatigue, increased workload, night shifts, nurse staffing ratio and workflow interruptions.
Mechanism
Brain arousal is stimulated by the circadian system during the day and sleep is usually stimulated at night. The rhythms are maintained in the suprachiasmatic nucleus (SCN), located in the anterior hypothalamus in the brain, and synchronized with the day/night cycle. Gene-transcription feedback loops in individual SCN cells form the molecular basis of biological timekeeping. Circadian phase shifts are dependent on the schedule of light exposure, the intensity, and previous exposure to light. Variations in exposure can advance or delay these rhythms. For example, the rhythms can be delayed due to light exposure at night.
Photoreceptors located in the retina of the eye send information about environmental light through the retinohypothalamic tract to the SCN. The SCN regulates the pineal gland, which secretes the hormone melatonin. Typically, the secretion of melatonin begins two hours before bedtime and ends two hours prior to waking up. A decline in neuronal firing in the SCN is caused by the binding of melatonin to the MT1 and MT2 melatonin receptors. It is believed that the reduction in firing in the SCN stimulates sleep. While day-active individuals produce melatonin at night, night shift workers' production of melatonin is suppressed at night due to light exposure.
Circadian misalignment
Circadian misalignment plays a major role in shift work sleep disorder. Circadian misalignment occurs when there is no complete adaptation to a night shift schedule. The hormones cortisol and melatonin are an important part of the circadian rhythm. In circadian misalignment, cortisol and melatonin lack entrainment to a night oriented schedule and stay on a daytime schedule. Melatonin continues to peak at night during a shift workers awake time and decreases during a shift workers sleep time. Cortisol levels are lower during a shift workers awake time and remain higher during shift workers sleep time.
Diagnosis
The primary symptoms of shift work sleep disorder are insomnia and excessive sleepiness associated with working (and sleeping) at non-standard times. Shift work sleep disorder is also associated with falling asleep at work. Total daily sleep time is usually shortened and sleep quality is less in those who work night shifts compared to those who work day shifts. Sleepiness is manifested as a desire to nap, unintended dozing, impaired mental acuity, irritability, reduced performance, and accident proneness. Shift work is often combined with extended hours of duty, so fatigue can be a compounding factor. The symptoms coincide with the duration of shift work and usually remit with the adoption of a conventional sleep-wake schedule. The boundary between a "normal response" to the rigors of shift work and a diagnosable disorder is not sharp.
There are diagnostic criteria for SWSD in the International Classification of Diseases (ICD-10), the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), and the International Classification of Sleep Disorders (ICSD) – Second and Third Editions. The diagnosis requires the following conditions:
There is insomnia and/or excessive sleepiness with a reduction of total sleep time, combined with a work period that overlaps the habitual sleep time.
These symptoms have been present for at least 3 months and are associated with the shift work schedule.
Sleep log and/or actigraphy monitoring (with sleep diaries) demonstrate for more than 14 days (work and free days included) circadian and sleep-time misalignment.
Sleep disturbance is associated with impairment of social, occupational, and/or other waking functioning.
These symptoms are not better explained by another sleep disorder, medical or neurologic disorder, mental disorder, medication use, poor sleep hygiene, or substance use disorder.
Assessments
There are different tools to assess shift work disorder. Patients can keep a diary. Some questionnaires, such as the Morningness–Eveningness Questionnaire, can be useful. Actigraphy and polysomnography could indicate some interesting patterns. Further studies are needed to see whether phase markers such as the body temperature rhythm or the melatonin rhythm are effective for assessing shift work disorder. Decreased sleep quality may be assessed using the Pittsburgh Sleep Quality Index (PSQI).
Treatment
Prescribed sleep/wake scheduling
Experts agree that there is no such thing as an "ideal" night work schedule, but some schedules may be better than others. For example, rotating shifts every two weeks in a forward (delaying) direction was found to be easier than rotation in a backward (advancing) direction. Gradual delays ("nudging" the circadian system about an hour per day) has been shown in a laboratory setting to maintain synchrony between sleep and the endogenous circadian rhythms, but this schedule is impractical for most real world settings. Some experts have advocated short runs (1 to 2 days) of night work with time for recovery; however, in the traditional heavy industries, longer (5 to 7 day) runs remain the rule. In the end, scheduling decisions usually involve maximizing leisure time, fairness in labor relations, etc. rather than chronobiological considerations. Shift workers can benefit from adhering to sleep hygiene practices related to sleep/wake scheduling. Symptoms typically only fully resolve once a normal sleep schedule is resumed.
Many night workers take naps during their breaks, and in some industries, planned napping at work (with facilities provided) is beginning to be accepted. A nap before starting a night shift is a logical prophylactic measure. However, naps that are too long (over 30 minutes) may generate sleep inertia, a groggy feeling after awakening that can impair performance. Therefore, brief naps (10 to 30 minutes) are preferred to longer naps (over 30 minutes). Also, long naps may interfere with the main sleep period.
In the transportation industry, safety is a major concern, and mandated hours of service rules attempt to enforce rest times.
Bright light treatment
The light-dark cycle is the most important environmental time cue for entraining circadian rhythms of most species, including humans, and bright artificial light exposure has been developed as a method to improve circadian adaptation in night workers. The timing of bright light exposure is critical for its phase shifting effects. To maximize a delay of the body clock, bright light exposure should occur in the evening or first part of the night, and bright light should be avoided in the morning. Wearing dark goggles (avoiding bright light) or blue-blocking goggles during the morning commute home from work can improve circadian adaptation. For workers who want to use bright light therapy, appropriate fixtures of the type used to treat winter depression are readily available but patients need to be educated regarding their appropriate use, especially the issue of timing. Bright light treatment is not recommended for patients with light sensitivity or ocular disease.
Melatonin treatment
Melatonin is a hormone secreted by the pineal gland in darkness, normally at night. Its production is suppressed by light exposure, principally blue light around 460 to 480 nm. Light restriction, or dark therapy, in the hours before bedtime allows its production. Dark therapy does not require total darkness. Amber or orange colored goggles eliminate blue light to the eyes while allowing vision.
Melatonin is also available as an oral supplement. In the US and Canada, the hormone melatonin is not classified as a drug; it is sold as a dietary supplement. In other countries, it requires a prescription or is unavailable. Although it is not licensed by the FDA as a treatment for any disorder, there have been no serious side effects or complications reported to date.
Melatonin has been shown to accelerate the adaptation of the circadian system to a nighttime work schedule. Melatonin may benefit daytime sleep in night workers by an additional direct sleep promoting mechanism. Melatonin treatment may increase sleep length during both daytime and nighttime sleep in night shift workers.
Medications that promote alertness
Caffeine is the most widely used alerting drug in the world and has been shown to improve alertness in simulated night work. Caffeine and naps before a night shift reduces sleepiness during the shift. Night shift medical field workers report the highest activity, along with the least amount of sleep. These individuals require medication/power naps to function at their best. Modafinil and armodafinil are non-amphetamine alerting drugs originally developed for the treatment of narcolepsy that have been approved by the FDA (the US Food and Drug Administration) for excessive sleepiness associated with SWSD.
Medications that promote daytime sleep
Obtaining enough sleep during the day is a major problem for many night workers. Hypnotics given in the morning can lengthen daytime sleep; however, some studies have shown that nighttime sleepiness may be unaffected. Zopiclone has been shown to be ineffective in increasing sleep in shift workers.
See also
Shift work
Jet lag
Human factors
Human reliability
References
External links
Working conditions
Circadian rhythm
Sleep disorders | Shift work sleep disorder | [
"Biology"
] | 4,059 | [
"Behavior",
"Sleep",
"Sleep disorders",
"Circadian rhythm"
] |
5,633,026 | https://en.wikipedia.org/wiki/Fixture%20%28tool%29 | A fixture is a work-holding or support device used in the manufacturing industry. Fixtures are used to securely locate (position in a specific location or orientation) and support the work, ensuring that all parts produced using the fixture will maintain conformity and interchangeability. Using a fixture improves the economy of production by allowing smooth operation and quick transition from part to part, reducing the requirement for skilled labor by simplifying how workpieces are mounted, and increasing conformity across a production run.
Compared with a jig
A fixture differs from a jig in that when a fixture is used, the tool must move relative to the workpiece; a jig moves the piece while the tool remains stationary.
Purpose
A fixture's primary purpose is to create a secure mounting point for a workpiece, allowing for support during operation and increased accuracy, precision, reliability, and interchangeability in the finished parts. It also serves to reduce working time by allowing quick set-up, and by smoothing the transition from part to part. It frequently reduces the complexity of a process, allowing for unskilled workers to perform it and effectively transferring the skill of the tool maker to the unskilled worker. Fixtures also allow for a higher degree of operator safety by reducing the concentration and effort required to hold a piece steady.
Economically speaking the most valuable function of a fixture is to reduce labor costs. Without a fixture, operating a machine or process may require two or more operators; using a fixture can eliminate one of the operators by securing the workpiece.
Design
Fixtures should be designed with economics in mind; the purpose of these devices is often to reduce costs, and so they should be designed in such a way that the cost reduction outweighs the cost of implementing the fixture. It is usually better, from an economic standpoint, for a fixture to result in a small cost reduction for a process in constant use, than for a large cost reduction for a process used only occasionally.
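As a rough illustration of this trade-off, the following sketch compares how quickly a fixture pays for itself on a heavily used process versus an occasional one; all figures are hypothetical and only the structure of the comparison matters.

```python
# Hypothetical break-even comparison for a fixture investment.
# All numbers are illustrative, not taken from the article.

def payback_days(fixture_cost, labor_rate, time_saved_per_part, parts_per_day):
    """Days of production needed before labor savings repay the fixture cost."""
    saving_per_day = labor_rate * time_saved_per_part * parts_per_day
    return fixture_cost / saving_per_day

# Small per-part saving on a process in constant use...
constant_use = payback_days(fixture_cost=2000, labor_rate=30,   # $/h
                            time_saved_per_part=0.05,           # h/part
                            parts_per_day=200)

# ...versus a large per-part saving on a process used only occasionally.
occasional = payback_days(fixture_cost=2000, labor_rate=30,
                          time_saved_per_part=0.5,
                          parts_per_day=5)

print(f"constant use : {constant_use:.0f} production days to break even")
print(f"occasional   : {occasional:.0f} production days to break even")
```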
Most fixtures have a solid component, affixed to the floor or to the body of the machine and considered immovable relative to the motion of the machining bit, and one or more movable components known as clamps. These clamps (which may be operated by many different mechanical means) allow workpieces to be easily placed in the machine or removed, and yet stay secure during operation. Many are also adjustable, allowing for workpieces of different sizes to be used for different operations. Fixtures must be designed such that the pressure or motion of the machining operation (usually known as the feed) is directed primarily against the solid component of the fixture. This reduces the likelihood that the fixture will fail, interrupting the operation and potentially causing damage to infrastructure, components, or operators.
Fixtures may also be designed for very general or simple uses. These multi-use fixtures tend to be very simple themselves, often relying on the precision and ingenuity of the operator, as well as surfaces and components already present in the workshop, to provide the same benefits of a specially-designed fixture. Examples include workshop vises, adjustable clamps, and improvised devices such as weights and furniture.
Each component of a fixture is designed for one of two purposes: location or support.
Location
Locating components ensure the geometrical stability of the workpiece. They make sure that the workpiece rests in the correct position and orientation for the operation by addressing and impeding all the degrees of freedom the workpiece possesses.
For locating workpieces, fixtures employ pins (or buttons), clamps, and surfaces. These components ensure that the workpiece is positioned correctly, and remains in the same position throughout the operation. Surfaces provide support for the piece, pins allow for precise location at low surface area expense, and clamps allow for the workpiece to be removed or its position adjusted. Locating pieces tend to be designed and built to very tight specifications.
Support
In designing the locating parts of a fixture, only the direction of forces applied by the operation are considered, and not their magnitude. Locating parts technically support the workpiece, but do not take into account the strength of forces applied by the process and so are usually inadequate to actually secure the workpiece during operation. For this purpose, support components are used.
To secure workpieces and prevent motion during operation, support components primarily use two techniques: positive stops and friction. A positive stop is any immovable component (such as a solid surface or pin) that, by its placement, physically impedes the motion of the workpiece. Support components are more likely to be adjustable than locating components, and normally do not press tightly on the workpiece or provide absolute location.
Support components usually bear the brunt of the forces delivered during the operation. To reduce the chances of failure, support components are usually not also designed as clamps.
For example, two heavy metal parts are to be joined with screws and arc welding. Using a fixture secures the two separate parts in the designated position so that the craftsman can complete the job easily and without risk of injury.
Types of fixtures
Fixtures are usually classified according to the machine for which they were designed. The most common two are milling fixtures and drill fixtures.
Milling fixtures
Milling operations tend to involve large, straight cuts that produce many chips and involve varying force. Locating and supporting areas must usually be large and very sturdy in order to accommodate milling operations; strong clamps are also a requirement. Due to the vibration of the machine, positive stops are preferred over friction for securing the workpiece. For high-volume automated processes, milling fixtures usually involve hydraulic or pneumatic clamps.
Drilling fixtures
Drilling fixtures cover a wider range of different designs and procedures than milling fixtures. Though workholding for drills is more often provided by jigs, fixtures are also used for drilling operations.
Two common elements of drilling fixtures are the hole and bushing. Holes are often designed into drilling fixtures, to allow space for the drill bit itself to continue through the workpiece without damaging the fixture or drill, or to guide the drill bit to the appropriate point on the workpiece. Bushings are simple bearing sleeves inserted into these holes to protect them and guide the drill bit.
Because drills tend to apply force in only one direction, support components for drilling fixtures may be simpler. If the drill is aligned pointing down, the same support components may compensate for the forces of both the drill and gravity at once. However, though monodirectional, the force applied by drills tends to be concentrated on a very small area. Drilling fixtures must be designed carefully to prevent the workpiece from bending under the force of the drill.
Welding fixtures
Welding fixtures are used to hold subcomponents of a welded assembly in place for fabrication together into one complete unit. These fixtures are often actuated using manual (hand) clamps or pneumatic clamps if paired with robotic automation. A robust robotic arc welding fixture is a part-holding tool used to constrain components for welding in an automated system. Welding fixtures locate parts using these clamps to secure important aspects of the subcomponent, such as holes, slots, or datum surfaces.
Careful considerations must be made when designing welding fixtures. Proper clearance must be allowed for welding torch access. This can be especially difficult to accommodate if the torch is a large spot-welding gun. The welding fixture must be designed to allow all subcomponent parts to nest together properly to obtain the necessary amount of gap for fusion. Weld orientation is a paramount concern, as sloping or vertical welds can lead to weld drip, which will result in cratering and undercutting where the bead should blend into the base metals, resulting in a weak weld and a risk of cracking at the edge of the bead.
Similar build strategies are used for welding fixtures that are employed with milling fixtures and drilling fixtures. The weld torch is most often moved to the workpiece. Welding jigs, in comparison, are commonly used with pedestal welders and linear weld torches, moving the workpiece to the torch. Modular fixturing strategies can be deployed in production scenarios where a setup is needed for a run of the same part in a shorter period of time. Fixture plates and common workholding solutions are designed to accommodate these scenarios.
See also
Clamp
Degrees of freedom (engineering)
Kinematic coupling
Jig
Gas Metal Arc Welding
Notes
References
Industrial equipment
Metalworking tools
Tools
Woodworking jigs
Holders | Fixture (tool) | [
"Engineering"
] | 1,705 | [
"nan"
] |
5,633,169 | https://en.wikipedia.org/wiki/Protocrystalline | A protocrystalline phase is a distinct phase occurring during crystal growth, which evolves into a microcrystalline form. The term is typically associated with silicon films in optical applications such as solar cells.
Applications
Silicon solar cells
Amorphous silicon (a-Si) is a popular solar cell material owing to its low cost and ease of production. Owing to its disordered structure (Urbach tail), its absorption extends to the energies below the band gap, resulting in a wide-range spectral response; however, it has a relatively low solar cell efficiency. Protocrystalline Si (pc-Si:H) also has a relatively low absorption near the band gap, owing to its more ordered crystalline structure. Thus, protocrystalline and amorphous silicon can be combined in a tandem solar cell, where the top thin layer of a-Si:H absorbs short-wavelength light whereas the underlying protocrystalline silicon layer absorbs the longer wavelengths
See also
Amorphous silicon
Crystallite
Multijunction
Polycarbonate (PC)
Polyethylene terephthalate (PET)
References
External links
Crystallography
Thin-film cells | Protocrystalline | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 242 | [
"Materials science stubs",
"Thin-film cells",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics",
"Planes (geometry)",
"Thin films"
] |
5,633,256 | https://en.wikipedia.org/wiki/Stability%20group | In mathematics, in the realm of group theory, the stability group of subnormal series is the group of automorphisms that act as identity on each quotient group.
Group theory | Stability group | [
"Mathematics"
] | 40 | [
"Group theory",
"Fields of abstract algebra"
] |
5,634,235 | https://en.wikipedia.org/wiki/Small%20cleaved%20cells | Small cleaved cells are a distinctive type of cell that appears in certain types of lymphoma.
When used to uniquely identify a type of lymphoma, they are usually categorized as follicular or diffuse.
The "small cleaved cells" are usually centrocytes that express B-cell markers such as CD20. The disease is strongly correlated with the genetic translocation t(14;18), which results in juxtaposition of the bcl-2 proto-oncogene with the heavy chain JH locus, and thus in overexpression of bcl-2. Bcl-2 is a well known anti-apoptotic gene, and thus its overexpression results in the "failure to die" motif of cancer seen in follicular lymphoma.
Follicular lymphoma must be carefully monitored, as it often progresses into a more aggressive "Diffuse Large B-Cell Lymphoma."
External links
Histopathology | Small cleaved cells | [
"Chemistry"
] | 215 | [
"Histopathology",
"Microscopy"
] |
5,634,341 | https://en.wikipedia.org/wiki/Bioconversion%20of%20biomass%20to%20mixed%20alcohol%20fuels | The bioconversion of biomass to mixed alcohol fuels can be accomplished using the MixAlco process. Through bioconversion of biomass to a mixed alcohol fuel, more energy from the biomass will end up as liquid fuels than in converting biomass to ethanol by yeast fermentation.
The process involves a biological/chemical method for converting any biodegradable material (e.g., urban wastes, such as municipal solid waste, biodegradable waste, and sewage sludge, agricultural residues such as corn stover, sugarcane bagasse, cotton gin trash, manure) into useful chemicals, such as carboxylic acids (e.g., acetic, propionic, butyric acid), ketones (e.g., acetone, methyl ethyl ketone, diethyl ketone) and biofuels, such as a mixture of primary alcohols (e.g., ethanol, propanol, n-butanol) and/or a mixture of secondary alcohols (e.g., isopropanol, 2-butanol, 3-pentanol). Because of the many products that can be economically produced, this process is a true biorefinery.
The process uses a mixed culture of naturally occurring microorganisms found in natural habitats such as the rumen of cattle, termite guts, and marine and terrestrial swamps to anaerobically digest biomass into a mixture of carboxylic acids produced during the acidogenic and acetogenic stages of anaerobic digestion, however with the inhibition of the methanogenic final stage. The more popular methods for production of ethanol and cellulosic ethanol use enzymes that must be isolated first to be added to the biomass and thus convert the starch or cellulose into simple sugars, followed then by yeast fermentation into ethanol. This process does not need the addition of such enzymes as these microorganisms make their own.
As the microorganisms anaerobically digest the biomass and convert it into a mixture of carboxylic acids, the pH must be controlled. This is done by the addition of a buffering agent (e.g., ammonium bicarbonate, calcium carbonate), thus yielding a mixture of carboxylate salts. Methanogenesis, being the natural final stage of anaerobic digestion, is inhibited by the presence of the ammonium ions or by the addition of an inhibitor (e.g., iodoform). The resulting fermentation broth contains the produced carboxylate salts that must be dewatered. This is achieved efficiently by vapor-compression evaporation. Further chemical refining of the dewatered fermentation broth may then take place depending on the final chemical or biofuel product desired.
The condensed distilled water from the vapor-compression evaporation system is recycled back to the fermentation. On the other hand, if raw sewage or other waste water with high BOD in need of treatment is used as the water for the fermentation, the condensed distilled water from the evaporation can be recycled back to the city or to the original source of the high-BOD waste water. Thus, this process can also serve as a water treatment facility, while producing valuable chemicals or biofuels.
Because the system uses a mixed culture of microorganisms, besides not needing any enzyme addition, the fermentation requires no sterility or aseptic conditions, making this front step in the process more economical than in more popular methods for the production of cellulosic ethanol. These savings in the front end of the process, where volumes are large, allows flexibility for further chemical transformations after dewatering, where volumes are small.
Carboxylic acids
Carboxylic acids can be regenerated from the carboxylate salts using a process known as "acid springing". This process makes use of a high-molecular-weight tertiary amine (e.g., trioctylamine), which is switched with the cation (e.g., ammonium or calcium). The resulting amine carboxylate can then be thermally decomposed into the amine itself, which is recycled, and the corresponding carboxylic acid. In this way, theoretically, no chemicals are consumed or wastes produced during this step.
Ketones
There are two methods for making ketones. The first one consists on thermally converting calcium carboxylate salts into the corresponding ketones. This was a common method for making acetone from calcium acetate during World War I. The other method for making ketones consists on converting the vaporized carboxylic acids on a catalytic bed of zirconium oxide.
Alcohols
Primary alcohols
The undigested residue from the fermentation may be used in gasification to make hydrogen (H2). This H2 can then be used to hydrogenolyze the esters over a catalyst (e.g., copper chromite), which are produced by esterifying either the ammonium carboxylate salts (e.g., ammonium acetate, propionate, butyrate) or the carboxylic acids (e.g., acetic, propionic, butyric acid) with a high-molecular-weight alcohol (e.g., hexanol, heptanol). From the hydrogenolysis, the final products are the high-molecular-weight alcohol, which is recycled back to the esterification, and the corresponding primary alcohols (e.g., ethanol, propanol, butanol).
Secondary alcohols
The secondary alcohols (e.g., isopropanol, 2-butanol, 3-pentanol) are obtained by hydrogenating over a catalyst (e.g., Raney nickel) the corresponding ketones (e.g., acetone, methyl ethyl ketone, diethyl ketone).
Drop-in biofuels
The primary or secondary alcohols obtained as described above may undergo conversion to drop-in biofuels, fuels which are compatible with current fossil fuel infrastructure such as biogasoline, green diesel and bio-jet fuel. Such is done by subjecting the alcohols to dehydration followed by oligomerization using zeolite catalysts in a manner similar to the methanex process, which used to produce gasoline from methanol in New Zealand.
Acetic acid versus ethanol
Cellulosic-ethanol manufacturing plants are bound to be net exporters of electricity because a large portion of the lignocellulosic biomass, namely lignin, remains undigested and it must be burned, thus producing electricity for the plant and excess electricity for the grid. As the market grows and this technology becomes more widespread, coupling the liquid fuel and the electricity markets will become more and more difficult.
Acetic acid, unlike ethanol, is biologically produced from simple sugars without the production of carbon dioxide:
C6H12O6 → 2 CH3CH2OH + 2 CO2
C6H12O6 → 3 CH3COOH
Because of this, on a mass basis, the yields will be higher than in ethanol fermentation. If then, the undigested residue (mostly lignin) is used to produce hydrogen by gasification, it is ensured that more energy from the biomass will end up as liquid fuels rather than excess heat/electricity.
3 CH3COOH + 6 H2 → 3 CH3CH2OH + 3 H2O
C6H12O6 (from cellulose) + 6 H2 (from lignin) → 3 CH3CH2OH + 3 H2O
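A quick back-of-the-envelope check of the mass-yield claim above, using only the stoichiometry shown (it ignores biomass formation and other losses):

```python
# Mass yields of ethanol vs. acetic acid fermentation of glucose,
# from the stoichiometry given above. Ignores cell growth and losses.
M_glucose = 180.16   # g/mol, C6H12O6
M_ethanol = 46.07    # g/mol, CH3CH2OH
M_acetic  = 60.05    # g/mol, CH3COOH

ethanol_yield = 2 * M_ethanol / M_glucose   # C6H12O6 -> 2 EtOH + 2 CO2
acetic_yield  = 3 * M_acetic  / M_glucose   # C6H12O6 -> 3 AcOH

print(f"ethanol:     {ethanol_yield:.0%} of substrate mass")   # ~51%
print(f"acetic acid: {acetic_yield:.0%} of substrate mass")    # ~100%
```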
A more comprehensive description of the economics of each of the fuels is given on the pages alcohol fuel and ethanol fuel, more information about the economics of various systems can be found on the central page biofuel.
Stage of development
The system has been in development since 1991, moving from the laboratory scale (10 g/day) to the pilot scale (200 lb/day) in 2001. A small demonstration-scale plant (5 ton/day) has been constructed and is under operation and a 220 ton/day demonstration plant is expected in 2012.
See also
Anaerobic digestion
Bioreactor
Mechanical biological treatment
References
Anaerobic digestion
Biodegradable waste management
Alcohol fuels
Waste treatment technology
Biomass | Bioconversion of biomass to mixed alcohol fuels | [
"Chemistry",
"Engineering"
] | 1,762 | [
"Water treatment",
"Biodegradable waste management",
"Biodegradation",
"Anaerobic digestion",
"Environmental engineering",
"Water technology",
"Waste treatment technology"
] |
5,634,358 | https://en.wikipedia.org/wiki/Ekman%20transport | Ekman transport is part of Ekman motion theory, first investigated in 1902 by Vagn Walfrid Ekman. Winds are the main source of energy for ocean circulation, and Ekman transport is a component of wind-driven ocean current. Ekman transport occurs when ocean surface waters are influenced by the friction force acting on them via the wind. As the wind blows it casts a friction force on the ocean surface that drags the upper 10-100m of the water column with it. However, due to the influence of the Coriolis effect, the ocean water moves at a 90° angle from the direction of the surface wind. The direction of transport is dependent on the hemisphere: in the northern hemisphere, transport occurs at 90° clockwise from wind direction, while in the southern hemisphere it occurs at 90° anticlockwise. This phenomenon was first noted by Fridtjof Nansen, who recorded that ice transport appeared to occur at an angle to the wind direction during his Arctic expedition of the 1890s. Ekman transport has significant impacts on the biogeochemical properties of the world's oceans. This is because it leads to upwelling (Ekman suction) and downwelling (Ekman pumping) in order to obey mass conservation laws. Mass conservation, in reference to Ekman transfer, requires that any water displaced within an area must be replenished. This can be done by either Ekman suction or Ekman pumping depending on wind patterns.
Theory
Ekman theory explains the theoretical state of circulation if water currents were driven only by the transfer of momentum from the wind. In the physical world, this is difficult to observe because of the influences of many simultaneous current driving forces (for example, pressure and density gradients). Though the following theory technically applies to the idealized situation involving only wind forces, Ekman motion describes the wind-driven portion of circulation seen in the surface layer.
Surface currents flow at a 45° angle to the wind due to a balance between the Coriolis force and the drags generated by the wind and the water. If the ocean is divided vertically into thin layers, the magnitude of the velocity (the speed) decreases from a maximum at the surface until it dissipates. The direction also shifts slightly across each subsequent layer (right in the northern hemisphere and left in the southern hemisphere). This is called the Ekman spiral. The layer of water from the surface to the point of dissipation of this spiral is known as the Ekman layer. If all flow over the Ekman layer is integrated, the net transportation is at 90° to the right (left) of the surface wind in the northern (southern) hemisphere.
Mechanisms
There are three major wind patterns that lead to Ekman suction or pumping. The first are wind patterns that are parallel to the coastline. Due to the Coriolis effect, surface water moves at a 90° angle to the wind current. If the wind moves in a direction causing the water to be pulled away from the coast then Ekman suction will occur. On the other hand, if the wind is moving in such a way that surface waters move towards the shoreline then Ekman pumping will take place.
The second mechanism of wind currents resulting in Ekman transfer is the Trade Winds both north and south of the equator pulling surface waters towards the poles. There is a great deal of upwelling Ekman suction at the equator because water is being pulled northward north of the equator and southward south of the equator. This leads to a divergence in the water, resulting in Ekman suction, and therefore, upwelling.
The third wind pattern influencing Ekman transfer is large-scale wind patterns in the open ocean. Open ocean wind circulation can lead to gyre-like structures of piled up sea surface water resulting in horizontal gradients of sea surface height. This pile up of water causes the water to have a downward flow and suction, due to gravity and mass balance. Ekman pumping downward in the central ocean is a consequence of this convergence of water.
Ekman suction
Ekman suction is the component of Ekman transport that results in areas of upwelling due to the divergence of water. Returning to the concept of mass conservation, any water displaced by Ekman transport must be replenished. As the water diverges it creates space and acts as a suction in order to fill in the space by pulling up, or upwelling, deep sea water to the euphotic zone.
Ekman suction has major consequences for the biogeochemical processes in the area because it leads to upwelling. Upwelling carries nutrient rich, and cold deep-sea water to the euphotic zone, promoting phytoplankton blooms and kickstarting an extremely high-productive environment. Areas of upwelling lead to the promotion of fisheries, in fact nearly half of the world's fish catch comes from areas of upwelling.
Ekman suction occurs both along coastlines and in the open ocean, but also occurs along the equator. Along the Pacific coastline of California, Central America, and Peru, as well as along the Atlantic coastline of Africa there are areas of upwelling due to Ekman suction, as the currents move equatorwards. Due to the Coriolis effect the surface water moves 90° to the left (in the South Hemisphere, as it travels toward the equator) of the wind current, therefore causing the water to diverge from the coast boundary, leading to Ekman suction. Additionally, there are areas of upwelling as a consequence of Ekman suction where the Polar Easterlies winds meet the Westerlies in the subpolar regions north of the subtropics, as well as where the Northeast Trade Winds meet the Southeast Trade Winds along the Equator. Similarly, due to the Coriolis effect the surface water moves 90° to the left (in the South Hemisphere) of the wind currents, and the surface water diverges along these boundaries, resulting in upwelling in order to conserve mass.
Ekman pumping
Ekman pumping is the component of Ekman transport that results in areas of downwelling due to the convergence of water. As discussed above, the concept of mass conservation requires that a pile up of surface water must be pushed downward. This pile up of warm, nutrient-poor surface water gets pumped vertically down the water column, resulting in areas of downwelling.
Ekman pumping has dramatic impacts on the surrounding environments. Downwelling, due to Ekman pumping, leads to nutrient poor waters, therefore reducing the biological productivity of the area. Additionally, it transports heat and dissolved oxygen vertically down the water column as warm oxygen rich surface water is being pumped towards the deep ocean water.
Ekman pumping can be found along the coasts as well as in the open ocean. Along the Pacific Coast in the Southern Hemisphere northerly winds move parallel to the coastline. Due to the Coriolis effect the surface water gets pulled 90° to the left of the wind current, therefore causing the water to converge along the coast boundary, leading to Ekman pumping. In the open ocean Ekman pumping occurs with gyres. Specifically, in the subtropics, between 20°N and 50°N, there is Ekman pumping as the tradewinds shift to westerlies causing a pile up of surface water.
Mathematical derivation
Some assumptions of the fluid dynamics involved in the process must be made in order to simplify the process to a point where it is solvable. The assumptions made by Ekman were:
no boundaries;
infinitely deep water;
eddy viscosity, Az, is constant (this is only true for laminar flow. In the turbulent atmospheric and oceanic boundary layer it is a strong function of depth);
the wind forcing is steady and has been blowing for a long time;
barotropic conditions with no geostrophic flow;
the Coriolis parameter, f, is kept constant.
The simplified equations for the Coriolis force in the x and y directions follow from these assumptions:

$-fv = \frac{1}{\rho}\frac{\partial \tau^{x}}{\partial z}, \qquad fu = \frac{1}{\rho}\frac{\partial \tau^{y}}{\partial z},$

where $\tau$ is the wind stress, $\rho$ is the density, $u$ is the east–west velocity, and $v$ is the north–south velocity.

Integrating each equation over the entire Ekman layer:

$\tau^{x} = -f M_{y}, \qquad \tau^{y} = f M_{x},$

where

$M_{x} = \int \rho u \, dz, \qquad M_{y} = \int \rho v \, dz.$

Here $M_{x}$ and $M_{y}$ represent the zonal and meridional mass transport terms with units of mass per unit time per unit length. Contrary to common intuition, north–south winds cause mass transport in the east–west direction.
In order to understand the vertical velocity structure of the water column, the stress terms can be rewritten in terms of the vertical eddy viscosity:

$\tau^{x} = \rho A_{z}\frac{\partial u}{\partial z}, \qquad \tau^{y} = \rho A_{z}\frac{\partial v}{\partial z},$

where $A_{z}$ is the vertical eddy viscosity coefficient.

This gives a set of differential equations of the form

$-fv = A_{z}\frac{\partial^{2} u}{\partial z^{2}}, \qquad fu = A_{z}\frac{\partial^{2} v}{\partial z^{2}}.$

In order to solve this system of two differential equations, two boundary conditions can be applied:

$(u, v) \to 0$ as $z \to -\infty$;

friction is equal to wind stress at the free surface ($z = 0$).

Things can be further simplified by considering wind blowing in the y-direction only. This means the results will be relative to a north–south wind (although these solutions could be produced relative to wind in any other direction):

$u = \pm V_{0}\cos\left(\frac{\pi}{4} + \frac{\pi}{D_{E}}z\right)e^{\pi z / D_{E}}, \qquad v = V_{0}\sin\left(\frac{\pi}{4} + \frac{\pi}{D_{E}}z\right)e^{\pi z / D_{E}},$

where

$u$ and $v$ represent the Ekman flow in the x and y directions;

in the equation for $u$, the plus sign applies to the northern hemisphere and the minus sign to the southern hemisphere;

$V_{0} = \frac{\sqrt{2}\,\pi\,\tau^{y}_{\eta}}{D_{E}\,\rho\,|f|}$, with $\tau^{y}_{\eta}$ the wind stress on the sea surface;

$D_{E} = \pi\sqrt{2A_{z}/|f|}$ is the Ekman depth (depth of the Ekman layer).
By solving this at z=0, the surface current is found to be (as expected) 45 degrees to the right (left) of the wind in the Northern (Southern) Hemisphere. This also gives the expected shape of the Ekman spiral, both in magnitude and direction. Integrating these equations over the Ekman layer shows that the net Ekman transport term is 90 degrees to the right (left) of the wind in the Northern (Southern) Hemisphere.
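A small numerical sketch of the integrated result: the depth-integrated Ekman mass transport has magnitude τ/|f| and is directed 90° to the right (left) of the wind in the Northern (Southern) Hemisphere. The example wind stress and constants below are illustrative.

```python
import math

OMEGA = 7.2921e-5      # Earth's rotation rate, rad/s
RHO_SEAWATER = 1025.0  # kg/m^3, only needed for the volume transport

def coriolis_parameter(lat_deg):
    """Coriolis parameter f = 2*Omega*sin(latitude)."""
    return 2 * OMEGA * math.sin(math.radians(lat_deg))

def ekman_mass_transport(tau, lat_deg):
    """Magnitude of the depth-integrated Ekman mass transport, kg s^-1 per metre,
    for a wind stress tau (N/m^2); directed 90 deg right (NH) or left (SH) of the wind."""
    return tau / abs(coriolis_parameter(lat_deg))

# Example: a 0.1 N/m^2 wind stress at 45 degrees North
M = ekman_mass_transport(0.1, 45.0)
print(f"mass transport  : {M:.0f} kg/s per metre")
print(f"volume transport: {M / RHO_SEAWATER:.2f} m^2/s")
```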
Applications
Ekman transport leads to coastal upwelling, which provides the nutrient supply for some of the largest fishing markets on the planet and can impact the stability of the Antarctic Ice Sheet by pulling warm deep water onto the continental shelf. Wind in these regimes blows parallel to the coast (such as along the coast of Peru, where the wind blows out of the southeast, and also in California, where it blows out of the northwest). From Ekman transport, surface water has a net movement of 90° to right of wind direction in the northern hemisphere (left in the southern hemisphere). Because the surface water flows away from the coast, the water must be replaced with water from below. In shallow coastal waters, the Ekman spiral is normally not fully formed and the wind events that cause upwelling episodes are typically rather short. This leads to many variations in the extent of upwelling, but the ideas are still generally applicable.
Ekman transport is similarly at work in equatorial upwelling, where, in both hemispheres, a trade wind component towards the west causes a net transport of water towards the pole, and a trade wind component towards the east causes a net transport of water away from the pole.
On smaller scales, cyclonic winds induce Ekman transport which causes net divergence and upwelling, or Ekman suction, while anti-cyclonic winds cause net convergence and downwelling, or Ekman pumping
Ekman transport is also a factor in the circulation of the ocean gyres and garbage patches. Ekman transport causes water to flow toward the center of the gyre in all locations, creating a sloped sea-surface, and initiating geostrophic flow (Colling p 65). Harald Sverdrup applied Ekman transport while including pressure gradient forces to develop a theory for this (see Sverdrup balance).
Exceptions
The Ekman theory describing wind-induced current on a rotating planet explains why surface currents are generally deflected to the right of the wind direction in the Northern Hemisphere and to the left in the Southern Hemisphere. There are also solutions with the opposite deflection at periods shorter than the local inertial period, which were not mentioned by Ekman and are seldom observed. A major example of this effect occurs in the Bay of Bengal, where surface flow is offset to the left of the wind direction despite being in the Northern Hemisphere. Ekman's theory can be refined to include this case.
See also
Notes
References
Colling, A., Ocean Circulation, Open University Course Team. Second Edition. 2001.
Emerson, Steven R.; Hedges, John I. (2017). Chemical Oceanography and the Marine Carbon Cycle. New York, United States of America: Cambridge University Press. .
Knauss, J.A., Introduction to Physical Oceanography, Waveland Press. Second Edition. 2005.
Lindstrom, Eric J. "Ocean Motion : Definition : Wind Driven Surface Currents - Upwelling and Downwelling". oceanmotion.org.
Mann, K.H. and Lazier J.R., Dynamics of Marine Ecosystems, Blackwell Publishing. Third Edition. 2006.
Miller, Charles B.; Wheeler, Patricia A. Biological Oceanography (Second ed.). Wiley-Blackwell. .
Pond, S. and Pickard, G. L., Introductory Dynamical Oceanography, Pergamon Press. Second edition. 1983.
Sarmiento, Jorge L.; Gruber, Nicolas (2006). Ocean biogeochemical dynamics. Princeton University Press. .
Sverdrup, K.A., Duxbury, A.C., Duxbury, A.B., An Introduction to The World's Oceans, McGraw-Hill. Eighth Edition. 2005.
External links
What is Ekman transport ?
Aquatic ecology
Oceanography
Fluid dynamics
Science of underwater diving
Transport phenomena | Ekman transport | [
"Physics",
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 2,799 | [
"Transport phenomena",
"Physical phenomena",
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Chemical engineering",
"Ecosystems",
"Piping",
"Aquatic ecology",
"Fluid dynamics"
] |
5,634,407 | https://en.wikipedia.org/wiki/Shape%20correction%20function | The shape correction function is a ratio of the surface area of a growing organism and that of an isomorph as function of the volume. The shape of the isomorph is taken to be equal to that of the organism for a given reference volume, so for that particular volume the surface areas are also equal and the shape correction function has value one.
For a volume V and reference volume Vd, the shape correction function M(V) equals:
V0-morphs: M(V) = (Vd/V)^(2/3)
V1-morphs: M(V) = (V/Vd)^(1/3)
Isomorphs: M(V) = 1
Static mixtures between a V0- and a V1-morph can be found as: M(V) = (1 − θ) (Vd/V)^(2/3) + θ (V/Vd)^(1/3) for a mixing fraction 0 ≤ θ ≤ 1
The shape correction function is used in Dynamic Energy Budget theory to correct equations for isomorphs to organisms that change shape during growth. The conversion is necessary for accurately modelling food (substrate) acquisition and mobilization of reserve for use by metabolism.
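A minimal numerical sketch of the correction functions above, assuming the expressions as written (the function and argument names are chosen for illustration):

```python
def shape_correction(V, V_d, morph="iso", theta=0.5):
    """Ratio of the actual surface area to that of an isomorph of the same volume,
    normalised to 1 at the reference volume V_d."""
    r = V / V_d
    if morph == "iso":      # isomorph: surface area scales as V^(2/3)
        return 1.0
    if morph == "V0":       # V0-morph: surface area stays constant
        return r ** (-2.0 / 3.0)
    if morph == "V1":       # V1-morph: surface area scales with V
        return r ** (1.0 / 3.0)
    if morph == "mixture":  # static V0/V1 mixture, weight theta on the V1 part
        return (1 - theta) * r ** (-2.0 / 3.0) + theta * r ** (1.0 / 3.0)
    raise ValueError(f"unknown morph type: {morph}")

for m in ("iso", "V0", "V1", "mixture"):
    print(m, round(shape_correction(V=8.0, V_d=1.0, morph=m), 3))
```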
References
Developmental biology
Metabolism | Shape correction function | [
"Chemistry",
"Biology"
] | 177 | [
"Behavior",
"Developmental biology",
"Reproduction",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
5,634,526 | https://en.wikipedia.org/wiki/Panning%20%28audio%29 | Panning is the distribution of an audio signal (either monaural or stereophonic pairs) into a new stereo or multi-channel sound field determined by a pan control setting. A typical physical recording console has a pan control for each incoming source channel. A pan control or pan pot (short for "panning potentiometer") is an analog control with a position indicator that can range continuously from the 7 o'clock when fully left to the 5 o'clock position fully right. Audio mixing software replaces pan pots with on-screen virtual knobs or sliders which function like their physical counterparts.
Overview
A pan pot has an internal architecture that determines how much of a source signal is sent to the left and right buses. "Pan pots split audio signals into left and right channels, each equipped with its own discrete gain (volume) control." This signal distribution is often called a taper or law.
When centered (at 12 o'clock), the law can be designed to send −3, −4.5 or −6 decibels (dB) equally to each bus. "Signal passes through both the channels at an equal volume while the pan pot points directly north." If the two output buses are later recombined into a monaural signal, then a pan law of −6 dB is desirable. If the two output buses are to remain stereo then a law of −3 dB is desirable. A law of −4.5 dB at center is a compromise between the two. A pan control fully rotated to one side results in the source being sent at full strength (0 dB) to one bus (either the left or right channel) and zero strength (−∞ dB) to the other. Regardless of the pan setting, the overall sound power level remains (or appears to remain) constant. Because of the phantom center phenomenon, sound panned to the center position is perceived as coming from between the left and right speakers, but not in the center unless listened to with headphones, because of head-related transfer function HRTF.
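As a sketch of how such a taper can be realized, the following compares a constant-power (sine/cosine) law, which attenuates each bus by about −3 dB at center, with a linear law, which gives −6 dB at center. The function names and the [−1, 1] pan scale are chosen here for illustration.

```python
import math

def constant_power_pan(pan):
    """pan in [-1, 1]: -1 = full left, 0 = center, +1 = full right.
    Returns (left_gain, right_gain); each bus sits at about -3 dB when centered."""
    angle = (pan + 1) * math.pi / 4          # maps [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

def linear_pan(pan):
    """Linear taper: each bus sits at -6 dB when centered, so a mono
    fold-down of the two buses returns to 0 dB."""
    return (1 - pan) / 2, (1 + pan) / 2

def db(gain):
    return 20 * math.log10(gain) if gain > 0 else float("-inf")

for name, law in (("constant power", constant_power_pan), ("linear", linear_pan)):
    left, right = law(0.0)                   # pan pot at 12 o'clock
    print(f"{name:15s} center: L = {db(left):.1f} dB, R = {db(right):.1f} dB")
```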
Panning in audio borrows its name from panning action in moving image technology. An audio pan pot can be used in a mix to create the impression that a source is moving from one side of the soundstage to the other, although ideally there would be timing (including phase and Doppler effects), filtering and reverberation differences present for a more complete picture of apparent movement within a defined space. Simple analog pan controls only change relative level; they don't add reverb to replace direct signal, phase changes, modify the spectrum, or change delay timing. "Tracks thus seem to move in the direction that [one] point[s] the pan pots on a mixer, even though [one] actually attenuate[s] those tracks on the opposite side of the horizontal plane."
Panning can also be used in an audio mixer to reduce or reverse the stereo width of a stereo signal. For instance, the left and right channels of a stereo source can be panned straight up, which is sent equally to both the left output and the right output of the mixer, creating a dual mono signal.
An early panning process was used in the development of Fantasound, an early pioneering stereophonic sound reproduction system for Fantasia (1940).
Stereo-switching
Before pan pots were available, "a three-way switch was used to assign the track to the left output, right output, or both (the center)". Stereo-switching was ubiquitous on the Billboard charts throughout the middle and late 1960s; clear examples include the Beatles' "Strawberry Fields Forever", Jimi Hendrix's "Purple Haze", and Stevie Wonder's "Living for the City". In the Beatles' "A Day in the Life", Lennon's vocals are switched to the extreme right on the first two strophes, switched to the center and then to the extreme left on the third strophe, and switched left on the final strophe, while McCartney's vocals are switched to the extreme right during the bridge.
See also
Pan law
Balance
References
Further reading
Rumsey, Francis and McCormick, Tim (2002). Sound and Recording: An Introduction. Focal Press.
Stereophonic sound
Audio mixing
ja:パン (撮影技法) | Panning (audio) | [
"Engineering"
] | 870 | [
"Audio engineering",
"Stereophonic sound"
] |
5,634,651 | https://en.wikipedia.org/wiki/Vapor-compression%20evaporation | Vapor-compression evaporation is the evaporation method by which a blower, compressor or jet ejector is used to compress, and thus, increase the pressure of the vapor produced. Since the pressure increase of the vapor also generates an increase in the condensation temperature, the same vapor can serve as the heating medium for its "mother" liquid or solution being concentrated, from which the vapor was generated to begin with. If no compression was provided, the vapor would be at the same temperature as the boiling liquid/solution, and no heat transfer could take place.
It is also sometimes called vapor compression distillation (VCD). If compression is performed by a mechanically driven compressor or blower, this evaporation process is usually referred to as MVR (mechanical vapor recompression). In case of compression performed by high pressure motive steam ejectors, the process is usually called thermocompression, steam compression or ejectocompression.
MVR process
Energy input
In this case the energy input to the system lies in the pumping energy of the compressor. The theoretical energy consumption will be equal to
E = Q × (H2 − H1), where
E is the total theoretical pumping energy
Q is the mass of vapors passing through the compressor
H1, H2 are the total heat content of unit mass of vapors, respectively upstream and downstream the compressor.
In SI units, these are respectively measured in kJ, kg and kJ/kg.
The actual energy input will be greater than the theoretical value and will depend on the efficiency of the system, which is usually between 30% and 60%. For example, suppose the theoretical energy input is 300 kJ and the efficiency is 30%. The actual energy input would be 300 x 100/30 = 1,000 kJ.
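A minimal sketch of this energy bookkeeping; the enthalpy figures below are illustrative placeholders (in practice they come from steam tables), chosen so that the example reproduces the 300 kJ / 1,000 kJ case above.

```python
def mvr_energy(q_vapor_kg, h_in_kj_per_kg, h_out_kj_per_kg, efficiency):
    """Theoretical and actual compression energy for an MVR evaporator, in kJ."""
    theoretical = q_vapor_kg * (h_out_kj_per_kg - h_in_kj_per_kg)   # E = Q (H2 - H1)
    actual = theoretical / efficiency                               # efficiency in (0, 1]
    return theoretical, actual

# Illustrative values: 1 kg of vapor gaining 300 kJ/kg at 30 % efficiency.
theoretical, actual = mvr_energy(q_vapor_kg=1.0, h_in_kj_per_kg=2700.0,
                                 h_out_kj_per_kg=3000.0, efficiency=0.30)
print(f"theoretical: {theoretical:.0f} kJ, actual: {actual:.0f} kJ")
```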
In a large unit, the compression power is between 35 and 45 kW per metric ton of compressed vapors.
Equipment for MVR evaporators
The compressor is necessarily the core of the unit. Compressors used for this application are usually of the centrifugal type, or positive displacement units such as the Roots blowers, similar to the (much smaller) Roots type supercharger. Very large units (evaporation capacity 100 metric tons per hour or more) sometimes use Axial-flow compressors. The compression work will deliver the steam superheated if compared to the theoretical pressure/temperature equilibrium. For this reason, the vast majority of MVR units feature a desuperheater between the compressor and the main heat exchanger.
Thermocompression
Energy input
The energy input is here given by the energy of a quantity of steam (motive steam), at a pressure higher than those of both the inlet and the outlet vapors.
The quantity of compressed vapors is therefore higher than the inlet quantity:
Qd = Qs + Qm
where Qd is the steam quantity at ejector delivery, Qs at ejector suction and Qm is the motive steam quantity. For this reason, a thermocompression evaporator often features a vapor condenser, due to the possible excess of steam necessary for the compression if compared with the steam required to evaporate the solution.
The quantity Qm of motive steam per unit suction quantity is a function of both the motive ratio of motive steam pressure vs. suction pressure and the compression ratio of delivery pressure vs. suction pressure. In principle, the higher the compression ratio and the lower the motive ratio the higher will be the specific motive steam consumption, i. e. the less efficient the energy balance.
Thermocompression equipment
The heart of any thermocompression evaporator is clearly the steam ejector, exhaustively described in the relevant page. The size of the other pieces of equipment, such as the main heat exchanger, the vapor head, etc. (see evaporator for details), is governed by the evaporation process.
Comparison
These two compression-type evaporators have different fields of application, although they do sometimes overlap.
An MVR unit will be preferable for a large unit, thanks to the reduced energy consumption. The largest single body MVR evaporator built (1968, by Whiting Co., later Swenson Evaporator Co., Harvey, Ill. in Cirò Marina, Italy) was a salt crystallizer, evaporating approximately 400 metric tons per hour of water, featuring an axial-flow compressor (Brown Boveri, later ABB). This unit was transformed around 1990 to become the first effect of a multiple effect evaporator. MVR evaporators with 10 tons or more evaporating capacity are common.
The compression ratio in a MVR unit does not usually exceed 1.8. At a compression ratio of 1.8, if the evaporation is performed at atmospheric pressure (0.101 MPa), the condensation pressure after compression will be 0.101 x 1.8 = 0.1818 [MPa]. At this pressure, the condensation temperature of the water vapor at the heat exchanger will be about 390 K. Taking into account the boiling point elevation of the salt water we wish to evaporate (8 K for a saturated salt solution), this leaves a temperature difference of only about 9 K at the heat exchanger. A small ∆T leads to slow heat transfer, meaning that we will need a very large heating surface to transfer the required heat. Axial-flow and Roots compressors may reach slightly higher compression ratios.
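The arithmetic in the preceding paragraph can be sketched with a simple saturation-pressure correlation; the Antoine constants below are one common set for water in roughly the 100 to 374 °C range, and the result is a single-digit-kelvin driving force, consistent with the figures above.

```python
import math

# Antoine equation for water, P in mmHg and T in deg C; roughly valid 100-374 deg C.
A, B, C = 8.14019, 1810.94, 244.485

def saturation_temp_c(p_pa):
    """Invert the Antoine equation to get the saturation temperature in deg C."""
    p_mmhg = p_pa / 133.322
    return B / (A - math.log10(p_mmhg)) - C

p_evap = 101_325.0            # evaporation at atmospheric pressure, Pa
ratio = 1.8                   # MVR compression ratio
bpe = 8.0                     # boiling point elevation of saturated brine, K

t_condensing = saturation_temp_c(p_evap * ratio)      # roughly 117 deg C (~390 K)
t_boiling_brine = saturation_temp_c(p_evap) + bpe      # roughly 108 deg C
print(f"available driving force: about {t_condensing - t_boiling_brine:.1f} K")
```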
Thermocompression evaporators may reach higher compression ratios - at a cost. A compression ratio of 2 is possible (and sometimes more) but unless the motive steam is at a reasonably high pressure (say, 16 bar g - 250 psig - or more), the motive steam consumption will be in the range of 2 kg per kg of suction vapors. A higher compression ratio means a smaller heat exchanger, and a reduced investment cost. Moreover, a compressor is an expensive machine, while an ejector is much simpler and cheap.
As a conclusion, MVR machines are used in large, energy-efficient units, while thermocompression units tend to limit their use to small units, where energy consumption is not a big issue.
Efficiency
The efficiency and feasibility of this process depends on the efficiency of the compressing device (e.g., blower, compressor or steam ejector) and the heat transfer coefficient attained in the heat exchanger contacting the condensing vapor and the boiling "mother" solution/liquid. Theoretically, if the resulting condensate is subcooled, this process could allow full recovery of the latent heat of vaporization that would otherwise be lost if the vapor, rather than the condensate, was the final product; therefore, this method of evaporation is very energy efficient. The evaporation process may be solely driven by the mechanical work provided by the compressing device.
Some uses
Clean water production (Water for injection)
A vapor-compression evaporator, like most evaporators, can make reasonably clean water from any water source. In a salt crystallizer, for example, a typical analysis of the resulting condensate shows a residual salt content not higher than 50 ppm or, in terms of electrical conductance, not higher than 10 μS/cm. This results in drinkable water, if the other sanitary requirements are fulfilled. While this cannot compete in the marketplace with reverse osmosis or demineralization, vapor compression chiefly differs from these thanks to its ability to make clean water from saturated or even crystallizing brines with total dissolved solids (TDS) up to 650 g/L. The other two technologies can make clean water from sources no higher in TDS than approximately 35 g/L.
For economic reasons evaporators are seldom operated on low-TDS water sources. Those applications are filled by reverse osmosis. The already brackish water which enters a typical evaporator is concentrated further. The increased dissolved solids act to increase the boiling point well beyond that of pure water. Seawater with a TDS of approximately 30 g/L exhibits a boiling point elevation of less than 1 K but saturated sodium chloride solution at 360 g/L has a boiling point elevation of about 7 K. This boiling point elevation is a challenge for vapor-compression evaporation in that it increases the pressure ratio that the steam compressor must attain to effect vaporization. Since boiling point elevation determines the pressure ratio in the compressor, it is the main overall factor in operating costs.
Steam-assisted gravity drainage
The technology used today to extract bitumen from the Athabasca oil sands is the water-intensive steam-assisted gravity drainage (SAGD) method. In the late 1990s former nuclear engineer Bill Heins of General Electric Company's RCC Thermal Products conceived an evaporator technology called falling film or mechanical vapor compression evaporation. In 1999 and 2002 Petro-Canada's MacKay River facility was the first to install GE SAGD zero-liquid discharge (ZLD) systems using a combination of the new evaporative technology and a crystallizer system in which all the water was recycled and only solids were discharged off site. This new evaporative technology began to replace older water treatment techniques employed by SAGD facilities which involved the use of warm lime softening to remove silica and magnesium and weak acid cation ion exchange used to remove calcium. The vapor-compression evaporation process replaced the once-through steam generators (OTSG) traditionally used for steam production. OTSG generally ran on natural gas which in 2008 had become increasingly valuable. The water quality of evaporators is four times better, which is needed for the drum boilers. The evaporators, when coupled with standard drum boilers, produce steam which is more "reliable, less costly to operate, and less water-intensive." By 2008 about 85 per cent of SAGD facilities in the Alberta oil sands had adopted evaporative technology. "SAGD, unlike other thermal processes such as cyclic steam stimulation (CSS), requires 100 per cent quality steam."
See also
Cristiani compressed steam system
Slingshot (water vapor distillation system)
Vapor-compression refrigeration
Vapor-compression desalination
References
Evaporators
Chemical processes
Unit operations
Water treatment
Water technology | Vapor-compression evaporation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,131 | [
"Unit operations",
"Water treatment",
"Chemical equipment",
"Water pollution",
"Chemical processes",
"Environmental engineering",
"Distillation",
"Evaporators",
"nan",
"Water technology",
"Chemical process engineering"
] |
5,635,076 | https://en.wikipedia.org/wiki/Syntrophy | In biology, syntrophy, syntrophism, or cross-feeding (from Greek syn meaning together, trophe meaning nourishment) is the cooperative interaction between at least two microbial species to degrade a single substrate. This type of biological interaction typically involves the transfer of one or more metabolic intermediates between two or more metabolically diverse microbial species living in close proximity to each other. Thus, syntrophy can be considered an obligatory interdependency and a mutualistic metabolism between different microbial species, wherein the growth of one partner depends on the nutrients, growth factors, or substrates provided by the other(s).
Microbial syntrophy
Syntrophy is often used synonymously with mutualistic symbiosis, especially between at least two different bacterial species. Syntrophy differs from symbiosis in that the syntrophic relationship is based primarily on closely linked metabolic interactions that maintain a thermodynamically favorable lifestyle in a given environment. Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments and anaerobic systems. In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tract of ruminants, and anaerobic digesters, syntrophy is employed to overcome the energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium.
Mechanism of microbial syntrophy
The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Complex organic compounds such as ethanol, propionate, butyrate, and lactate cannot be directly used as substrates for methanogenesis by methanogens. On the other hand, fermentation of these organic compounds cannot occur in fermenting microorganisms unless the hydrogen concentration is reduced to a low level by the methanogens. The key mechanism that ensures the success of syntrophy is interspecies electron transfer. The interspecies electron transfer can be carried out via three ways: interspecies hydrogen transfer, interspecies formate transfer and interspecies direct electron transfer. Reverse electron transport is prominent in syntrophic metabolism.
The metabolic reactions, and the energies involved, in syntrophic degradation with H2 consumption are illustrated by the following example.
A classical syntrophic relationship can be illustrated by the activity of 'Methanobacillus omelianskii'. It was isolated several times from anaerobic sediments and sewage sludge and was regarded as a pure culture of an anaerobe converting ethanol to acetate and methane. In fact, however, the culture turned out to consist of a methanogenic archaeon ("organism M.o.H.") and a Gram-negative bacterium ("organism S"), which together oxidize ethanol to acetate and methane via interspecies hydrogen transfer. Individual cells of organism S are obligate anaerobes that use ethanol as an electron donor, whereas M.o.H. is a methanogen that oxidizes hydrogen gas to produce methane.
Organism S: 2 Ethanol + 2 H2O → 2 Acetate− + 2 H+ + 4 H2 (ΔG°' = +9.6 kJ per mol ethanol)
Strain M.o.H.: 4 H2 + CO2 → Methane + 2 H2O (ΔG°' = -131 kJ per reaction)
Co-culture:2 Ethanol + CO2 → 2 Acetate− + 2 H+ + Methane (ΔG°' = -113 kJ per reaction)
The oxidation of ethanol by organism S is made possible by the methanogen M.o.H., which consumes the hydrogen produced by organism S and thereby keeps the hydrogen partial pressure low enough that the overall Gibbs free energy change becomes negative. This situation favors the growth of organism S and also provides energy for the methanogen. Further down the line, acetate accumulation is likewise prevented by a similar syntrophic relationship. Syntrophic degradation of substrates such as butyrate and benzoate can also occur without hydrogen consumption.
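A short sketch of why hydrogen removal matters thermodynamically: holding every other species at standard conditions and varying only the hydrogen partial pressure, ΔG for the ethanol oxidation step is approximately ΔG°' + 2RT ln(p_H2) per mole of ethanol. This is a simplification of the full reaction quotient, shown only to illustrate the sign change.

```python
import math

R = 8.314e-3          # kJ mol^-1 K^-1
T = 298.15            # K
dG0_ethanol = 9.6     # kJ per mol ethanol oxidised to acetate + 2 H2 (standard, pH 7)

def dG_ethanol_oxidation(p_h2_atm):
    """Approximate ΔG per mol ethanol when only p_H2 deviates from 1 atm."""
    return dG0_ethanol + 2 * R * T * math.log(p_h2_atm)

for p in (1.0, 0.1, 1e-3, 1e-5):
    print(f"p_H2 = {p:7.0e} atm  ->  dG = {dG_ethanol_oxidation(p):7.1f} kJ/mol")

# The step becomes exergonic once the hydrogen-consuming partner keeps p_H2 below
# roughly exp(-dG0 / (2 R T)); far lower hydrogen pressures occur in methanogenic habitats.
threshold = math.exp(-dG0_ethanol / (2 * R * T))
print(f"threshold p_H2 of roughly {threshold:.2f} atm")
```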
An example of propionate and butyrate degradation with interspecies formate transfer carried out by the mutual system of Syntrophomonas wolfei and Methanobacterium formicicum:
Propionate− + 2 H2O + 2 CO2 → Acetate− + 3 Formate− + 3 H+ (ΔG°' = +65.3 kJ/mol)
Butyrate− + 2 H2O + 2 CO2 → 2 Acetate− + 2 Formate− + 3 H+ (ΔG°' = +38.5 kJ/mol)
Direct interspecies electron transfer (DIET), which involves electron transfer without any electron carrier such as H2 or formate, was reported in the co-culture system of Geobacter metallireducens and Methanosaeta or Methanosarcina.
Examples
In ruminants
The defining feature of ruminants, such as cows and goats, is a stomach called a rumen. The rumen contains billions of microbes, many of which are syntrophic. Some anaerobic fermenting microbes in the rumen (and other gastrointestinal tracts) are capable of degrading organic matter to short chain fatty acids, and hydrogen. The accumulating hydrogen inhibits the microbe's ability to continue degrading organic matter, but the presence of syntrophic hydrogen-consuming microbes allows continued growth by metabolizing the waste products. In addition, fermentative bacteria gain maximum energy yield when protons are used as electron acceptor with concurrent H2 production. Hydrogen-consuming organisms include methanogens, sulfate-reducers, acetogens, and others.
Some fermentation products, such as fatty acids longer than two carbon atoms, alcohols longer than one carbon atom, and branched chain and aromatic fatty acids, cannot directly be used in methanogenesis. In acetogenesis processes, these products are oxidized to acetate and H2 by obligated proton reducing bacteria in syntrophic relationship with methanogenic archaea as low H2 partial pressure is essential for acetogenic reactions to be thermodynamically favorable (ΔG < 0).
Biodegradation of pollutants
Syntrophic microbial food webs play an integral role in bioremediation especially in environments contaminated with crude oil and petrol. Environmental contamination with oil is of high ecological importance and can be effectively mediated through syntrophic degradation by complete mineralization of alkane, aliphatic and hydrocarbon chains. The hydrocarbons of the oil are broken down after activation by fumarate, a chemical compound that is regenerated by other microorganisms. Without regeneration, the microbes degrading the oil would eventually run out of fumarate and the process would cease. This breakdown is crucial in the processes of bioremediation and global carbon cycling.
Syntrophic microbial communities are key players in the breakdown of aromatic compounds, which are common pollutants. The degradation of aromatic benzoate to methane produces intermediate compounds such as formate, acetate, and H2. The buildup of these products makes benzoate degradation thermodynamically unfavorable. These intermediates can be metabolized syntrophically by methanogens and makes the degradation process thermodynamically favorable
Degradation of amino acids
Studies have shown that bacterial degradation of amino acids can be significantly enhanced through the process of syntrophy. Microbes growing poorly on the amino acid substrates alanine, aspartate, serine, leucine, valine, and glycine can have their rate of growth dramatically increased by syntrophic H2 scavengers. These scavengers, like Methanospirillum and Acetobacterium, metabolize the H2 waste produced during amino acid breakdown, preventing a toxic build-up. Another way to improve amino acid breakdown is through interspecies electron transfer mediated by formate. Species like Desulfovibrio employ this method. Amino acid-fermenting anaerobes such as Clostridium species, Peptostreptococcus asaccharolyticus, and Acidaminococcus fermentans are known to break down amino acids like glutamate with the help of hydrogen-scavenging methanogenic partners, without going through the usual Stickland fermentation pathway.
Anaerobic digestion
Effective syntrophic cooperation between propionate oxidizing bacteria, acetate oxidizing bacteria and H2/acetate consuming methanogens is necessary to successfully carryout anaerobic digestion to produce biomethane
Examples of syntrophic organisms
Syntrophomonas wolfei
Syntrophobacter fumaroxidans
Pelotomaculum thermopropionicum
Syntrophus aciditrophicus
Syntrophus buswellii
Syntrophus gentianae
References
Biological interactions
Food chains | Syntrophy | [
"Biology"
] | 1,885 | [
"Biological interactions",
"Ethology",
"Behavior",
"nan"
] |
5,635,437 | https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%2015 | Bone morphogenetic protein 15 (BMP-15) is a protein that in humans is encoded by the BMP15 gene. It is involved in folliculogenesis, the process in which primordial follicles develop into pre-ovulatory follicles.
Structure & Interactions
Structure
The BMP-15 gene is located on the X chromosome, and Northern blot analysis shows that BMP-15 mRNA is expressed locally within the ovaries, in oocytes, only after they have started to undergo the primary stages of development. BMP-15 is translated as a preproprotein composed of a signal peptide, a proregion and a smaller mature region. Intracellular processing then leads to the removal of the proregion, leaving the biologically active mature region to perform the functions. This protein is a member of the transforming growth factor beta (TGF-β) superfamily and is a paracrine signalling molecule. Most active BMPs have a common structure containing 7 cysteines, 6 of which form three intramolecular disulphide bonds, with the seventh involved in the formation of dimers with other monomers. BMP-15 is an exception to this, as the molecule does not contain the seventh cysteine. Instead, in BMP-15 the fourth cysteine is replaced by a serine.
Interactions
BMP-15 and GDF9 interact with each other and work synergistically, having similar interactions with the target cell. BMP-15 can act as a heterodimer with GDF9 or on its own as a homodimer. In most of the BMP family, heterodimers and homodimers form because the seventh cysteine is involved in the formation of a covalent bond, leading to dimerization. However, in BMP-15, homodimers form through a non-covalent association between two BMP-15 subunits.
Function
Functions of BMP-15 include
Promotion of growth and maturation of ovarian follicles, starting from the primary gonadotrophin-independent phases of folliculogenesis.
Regulation of the sensitivity of granulosa cells to follicle-stimulating hormone (FSH) action, contributing to the determination of the number of eggs that are ovulated.
Prevention of granulosa cell apoptosis.
Folliculogenesis
Folliculogenesis is an important process for the development and maintenance of fertility. Primordial follicles are stored in the ovary and throughout life are activated to go through morphological changes to become preovulatory follicles ready for ovulation, when the oocyte is released into the fallopian tube of the female reproductive tract.
BMP-15's main functions are crucial for the beginning of folliculogenesis. The primordial follicle is made up of the oocyte and a single layer of flattened granulosa cells. BMP-15 is released from the oocyte into the surrounding granulosa tissue, where it binds to two membrane-bound receptors on granulosa cells. This promotes granulosa cell proliferation via mitosis. BMP-15 promotes the change of primordial follicles to primary and secondary follicles, which are surrounded by several granulosa cell layers, but does not promote transition into preovulatory follicles. BMP-15 prevents differentiation into preovulatory follicles by inhibiting FSH action in granulosa cells. FSH is released by the anterior pituitary as part of the hypothalamic-pituitary-gonadal axis and promotes the differentiation of early follicles into later preovulatory ones. BMP-15 prevents this transition by inhibiting the production of FSH receptor mRNA in granulosa cells. Therefore, FSH cannot bind to the granulosa cells; this inhibits FSH-dependent progesterone production and luteinization, and subsequently the granulosa cells do not differentiate.
As BMP-15 acts directly on granulosa cells, it has an important influence on granulosa function, including steroidogenesis, inhibition of luteinization and differentiation of cumulus cells; without these functions, folliculogenesis does not proceed and infertility results.
Differences between species
Mammalian species other than humans are often used in research to learn more about human biology.
Sheep
Two breeds of sheep, Inverdale and Hanna, are naturally heterozygous carriers of point mutations in the BMP-15 gene. These point mutations result in higher ovulation rates and larger litter sizes than in sheep strains with a wildtype BMP-15 genotype. This super-fertility was later mimicked by immunizing wildtype ewes against BMP-15 using various immunization techniques. Sheep carrying homozygous alleles for the Inverdale and Hanna BMP-15 mutations are infertile, as they have streak ovaries and the primary stage of folliculogenesis is inhibited. These studies suggest that BMP-15 plays a vital role in the normal regulation of folliculogenesis and ovulation in sheep.
Mice
In mice, the BMP-15 homologue is not as physiologically important. Upon targeted deletion of a bmp15 exon, the mice presented with only subfertility in homozygotes and no clear aberrant phenotype in heterozygotes. The homozygous mutant mice did not suffer from reduced folliculogenesis or impacted follicle progression, unlike in the sheep homologue knockout experiments. The subfertility seen in the homozygous mutant phenotype was attributed to defective ovulation and reduced viability of embryos. Here it can be stated that BMP-15 is not as vital for normal female mouse fertility as it is for sheep.
Humans
Humans display a similar phenotype to the Inverdale/Hanna sheep in regards to female fertility. In women, a mutation in BMP-15 is linked to hypergonadotropic ovarian failure due to ovarian dysgenesis. In this case, the researchers were able to identify that the father of the two sisters displaying this mutation had no documented phenotype associated with the mutation, so BMP-15 appears to only affect females. In slight contrast to the reports on sheep, the women in this study were heterozygous for the BMP-15 mutation but exhibited streak ovaries, a phenotype very similar to the one seen in homozygous mutant ewes. The sisters presented with primary amenorrhea, showing that BMP-15 is also vital to normal human female fertility, concordant with the sheep model.
Current theory
The main theory for this stark difference between mammalian species relates to the number of follicles normally ovulated in each cycle by each species. Humans and sheep are mono-ovulatory, potentially explaining the difference in litter size observed in mutant individuals. As mice are poly-ovulatory, the role of BMP-15 in female mouse fertility may not be as obvious.
Clinical relevance
Mutations within the gene for BMP-15 have been associated with reproductive complications in females, due to the X-linked nature of the protein. Due to its role in folliculogenesis, mutations can lead to sub-fertility through decreased or absent folliculogenesis. In combination with GDF-9, mutant BMP-15 is also associated with ovulation defects, premature ovarian failure and other reproductive pathologies.
BMP-15 defects have been implicated in female sterility, Polycystic Ovary Syndrome (PCOS), primary ovarian insufficiency (POI) and endometriosis. Women with PCOS have been noted to have higher levels of BMP-15, while missense mutations of the protein have been identified in females with POI.
Research has also found inherited mutant BMP-15 to be involved in the pathogenesis of hypergonadotropic ovarian failure. This condition develops due to BMP-15's role in folliculogenesis and the errors that occur when a mutant gene is inherited. The protein is linked to familial ovarian dysgenesis, which results in hypergonadotropic ovarian failure.
The importance of BMP-15 in ovulation and folliculogenesis has been highlighted by research into Turner syndrome, a chromosomal abnormality where females are missing a complete or partial X chromosome. Depending on the chromosomal mutation, BMP-15 gene dosage varies and impacts ovarian development in Turner syndrome patients. The gene is thus involved in determining the extent of the ovarian defects present in Turner syndrome.
BMP-15 is also present in animals and involved in reproduction, such as in mice and sheep. Reduced levels of BMP-15 in sheep have shown to increase ovulation, leading to larger litter sizes.
References
External links
Developmental genes and proteins
Bone morphogenetic protein
TGFβ domain | Bone morphogenetic protein 15 | [
"Biology"
] | 1,892 | [
"Induced stem cells",
"Developmental genes and proteins"
] |
5,635,862 | https://en.wikipedia.org/wiki/Maintenance%20of%20an%20organism | Maintenance of an organism is the collection of processes an organism requires to stay alive, excluding production processes. The Dynamic Energy Budget theory delineates two classes:
Somatic maintenance mainly comprises the turnover of structural mass (mainly proteins) and the maintenance of concentration gradients of metabolites across membranes (e.g., counteracting leakage). This is related to maintenance respiration.
Maturity maintenance concerns the maintenance of defence systems (such as the immune system), and the preparation of the body for reproduction.
The theory assumes that maturity maintenance costs can be reduced more easily during starvation than somatic maintenance costs. Under extreme starvation conditions, somatic maintenance costs are paid from structural mass, which causes shrinking. Some organisms manage to switch to a torpor state under starvation conditions, and so reduce their maintenance costs.
Developmental biology | Maintenance of an organism | [
"Biology"
] | 163 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
5,635,929 | https://en.wikipedia.org/wiki/Tonnetz | In musical tuning and harmony, the Tonnetz (German for 'tone net') is a conceptual lattice diagram representing tonal space first described by Leonhard Euler in 1739. Various visual representations of the Tonnetz can be used to show traditional harmonic relationships in European classical music.
History through 1900
The Tonnetz originally appeared in Leonhard Euler's 1739 Tentamen novae theoriae musicae. Euler's Tonnetz, pictured at left, shows the triadic relationships of the perfect fifth and the major third: at the top of the image is the note F, and to the left underneath is C (a perfect fifth above F), and to the right is A (a major third above F). Gottfried Weber later discusses the relationships between keys, presenting them in a network analogous to Euler's Tonnetz, but showing keys rather than notes. The Tonnetz itself was rediscovered in 1858 by Ernst Naumann, and was disseminated in an 1866 treatise of Arthur von Oettingen. Oettingen and the influential musicologist Hugo Riemann (not to be confused with the mathematician Bernhard Riemann) explored the capacity of the space to chart harmonic modulation between chords and motion between keys. Similar understandings of the Tonnetz appeared in the work of many late-19th century German music theorists.
Oettingen and Riemann both conceived of the relationships in the chart being defined through just intonation, which uses pure intervals. One can extend one of the horizontal rows of the Tonnetz indefinitely, to form a never-ending sequence of perfect fifths: F-C-G-D-A-E-B-F♯-C♯-G♯-D♯-A♯-E♯-B♯-F𝄪-C𝄪-G𝄪- (etc.) Starting with F, after 12 perfect fifths, one reaches E♯. Perfect fifths in just intonation are slightly larger than the compromised fifths used in equal temperament tuning systems more common in the present. This means that when one stacks 12 fifths starting from F, the E♯ one arrives at will not be seven octaves above the F one started with. Oettingen and Riemann's Tonnetz thus extended infinitely in every direction without actually repeating any pitches. In the twentieth century, composer-theorists such as Ben Johnston and James Tenney continued to develop theories and applications involving just-intoned Tonnetze.
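A quick way to see why the just-intonation Tonnetz never closes on itself is to compare twelve stacked fifths with seven octaves; the short sketch below (illustrative only, not part of the article) computes the mismatch, known as the Pythagorean comma:

```python
import math
from fractions import Fraction

# Twelve just perfect fifths (ratio 3/2), e.g. F up to E#, versus seven octaves.
twelve_fifths = Fraction(3, 2) ** 12   # 531441/4096
seven_octaves = Fraction(2, 1) ** 7

comma = twelve_fifths / seven_octaves  # 531441/524288, about 1.0136
cents = 1200 * math.log2(comma)        # about 23.5 cents

print(comma, round(cents, 2))          # E# ends up ~23.5 cents sharp of the octave-equivalent F
```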
The appeal of the Tonnetz to 19th-century German theorists was that it allows spatial representations of tonal distance and tonal relationships. For example, looking at the dark blue A minor triad in the graphic at the beginning of the article, its parallel major triad (A-C♯-E) is the triangle right below, sharing the vertices A and E. The relative major of A minor, C major (C-E-G) is the upper-right adjacent triangle, sharing the C and the E vertices. The dominant triad of A minor, E major (E-G♯-B) is diagonally across the E vertex, and shares no other vertices. One important point is that every shared vertex between a pair of triangles is a shared pitch between chords - the more shared vertices, the more shared pitches the chord will have. This provides a visualization of the principle of parsimonious voice-leading, in which motions between chords are considered smoother when fewer pitches change. This principle is especially important in analyzing the music of late-19th century composers like Wagner, who frequently avoided traditional tonal relationships.
Twentieth-century reinterpretation
Neo-Riemannian music theorists David Lewin and Brian Hyer revived the Tonnetz to further explore properties of pitch structures. Modern music theorists generally construct the Tonnetz in equal temperament and without distinction between octave transpositions of a pitch (i.e., using pitch classes). Under equal temperament, the never-ending series of ascending fifths mentioned earlier becomes a cycle. Neo-Riemannian theorists typically assume enharmonic equivalence (in other words, A♭ = G♯), and so the two-dimensional plane of the 19th-century Tonnetz cycles in on itself in two different directions, and is mathematically isomorphic to a torus.
Neo-Riemannian theorists have also used the Tonnetz to visualize non-tonal triadic relationships. For example, the diagonal going up and to the left from C in the diagram at the beginning of the article forms a division of the octave in three major thirds: C-A♭-E-C (the E is actually an F♭, and the final C a D♭♭). Richard Cohn argues that while a sequence of triads built on these three pitches (C major, A♭ major, and E major) cannot be adequately described using traditional concepts of functional harmony, this cycle has smooth voice leading and other important group properties which can be easily observed on the Tonnetz.
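Under equal temperament and octave equivalence, each Tonnetz position reduces to one of twelve pitch classes; the sketch below (an illustration, not drawn from the article) shows how the lattice wraps around onto a torus and reproduces the three-major-third cycle discussed by Cohn:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class(fifths: int, thirds: int) -> int:
    # One step along the fifths axis adds 7 semitones, one step along the
    # major-thirds axis adds 4; reducing mod 12 folds the plane onto a torus.
    return (7 * fifths + 4 * thirds) % 12

# Three major-third steps return to the starting pitch class:
print([NOTE_NAMES[pitch_class(0, k)] for k in range(4)])   # ['C', 'E', 'G#', 'C']
# Twelve fifth steps also close the cycle (circle of fifths):
print(NOTE_NAMES[pitch_class(12, 0)])                      # 'C'
```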
Similarities to other graphical systems
The harmonic table note layout is a note layout that is topologically equivalent to the Tonnetz, and is used on several music instruments that allow playing major and minor chords with a single finger.
The Tonnetz can be overlaid on the Wicki–Hayden note layout, where the major second lies halfway along the major third.
The Tonnetz is the dual graph of Schoenberg's chart of the regions, and of course vice versa. Research into music cognition has demonstrated that the human brain uses a "chart of the regions" to process tonal relationships.
See also
Fokker periodicity block
Neo-Riemannian theory
Musical set-theory
Riemannian theory
Transformational theory
Tuning theory
Treatise on Harmony
References
Further reading
Johnston, Ben (2006). "Rational Structure in Music", "Maximum Clarity" and Other Writings on Music, edited by Bob Gilmore. Urbana: University of Illinois Press. .
Wannamaker, Robert, The Music of James Tenney, Volume 1: Contexts and Paradigms (University of Illinois Press, 2021), 155-65.
External links
Music harmony and donuts by Paul Dysart
Charting Enharmonicism on the Just-Intonation Tonnetz by Robert T. Kelley
Midi-Instrument based on Tonnetz (Melodic Table) by The Shape of Music
Midi-Instrument based on Tonnetz (Harmonic Table) by C-Thru-Music
TonnetzViz (interactive visualization) by Ondřej Cífka; a modified version by Anton Salikhmetov
Diagrams
Lattice theory
Pitch space
Topology
Music theory | Tonnetz | [
"Physics",
"Mathematics"
] | 1,345 | [
"Lattice theory",
"Fields of abstract algebra",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Order theory"
] |
5,636,177 | https://en.wikipedia.org/wiki/Dynamic%20reserve | Dynamic reserve, in the context of the dynamic energy budget theory, refers to the set of metabolites (mostly polymers and lipids) that an organism can use for metabolic purposes. These chemical compounds can have active metabolic functions, however; they are not just "set apart for later use." Reserve differs from structure in the first place by its dynamics. Reserve has an implied turnover, because it is synthesized from food (or other substrates in the environment) and used by metabolic processes occurring in cells. The turnover of structure depends on the maintenance of an organism; maintenance is not required for reserve. A freshly laid egg consists almost exclusively of reserve, and hardly respires. The chemical compounds in the reserve all have the same turnover, while those in the structure can have different turnovers, depending on the compound.
Functionality
Reserves are synthesized from environmental substrates (food) for use by the metabolism for the purpose of somatic maintenance (including protein turnover, maintenance of concentration gradients across membranes, activity and other types of work), growth (increase of structural mass), maturity maintenance (installation of regulation systems, preparation for reproduction, maintenance of defense systems, such as the immune system), maturation (increase of the state of maturity) and reproduction. This organizational position of reserve creates a rather constant internal chemical environment, with only an indirect coupling with the extra-organismal environment. Reserves as well as structure are taken to be generalised compounds, i.e. mixtures of a large number of compounds, which do not change in composition. The latter requirement is called the strong homeostasis assumption. Polymers (carbohydrates, proteins, ribosomal RNA) and lipids form the main bulk of reserves and of structure.
Some reasons for including reserve are to give an explanation for:
the metabolic memory; changes in food (substrate) availability affect production (growth or reproduction) with some delay. Growth continues for some time during starvation; embryo development is fueled by reserves
the composition of biomass depends on growth rate. With two components (reserves and structure) particular changes in composition can be captured. More complex changes require several reserves, as is required for autotrophs.
the body size scaling of life history parameters. The specific respiration rate decreases with (maximum) body size between species because large bodied species have relatively more reserve. Many other life history parameters directly or indirectly relate to respiration.
the observed respiration patterns, which reflect the use of energy. Freshly laid eggs hardly respire, but their respiratory rates increase during development while egg weight decreases. After hatching, however, the respiration rate further increases, while the weight now also increases
all mass fluxes are linear combinations of assimilation, dissipation and growth. If reserves are omitted, there is not enough flexibility to capture product formation and explain indirect calorimetry.
References
Metabolism | Dynamic reserve | [
"Chemistry",
"Biology"
] | 581 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
5,636,308 | https://en.wikipedia.org/wiki/Horizontal%20situation%20indicator | The horizontal situation indicator (commonly called the HSI) is an aircraft flight instrument normally mounted below the artificial horizon in place of a conventional heading indicator. It combines a heading indicator with a VHF omnidirectional range-instrument landing system (VOR-ILS) display.
Advantage
The HSI can reduce pilot workload by lessening the number of elements in the pilot's instrument scan to the six basic flight instruments. Among other advantages, the HSI offers freedom from the confusion of reverse sensing on an instrument landing system localizer back course approach. As long as the needle is set to the localizer front course, the instrument will indicate whether to fly left or right, in either direction of travel.
Display
On the HSI, the aircraft is represented by a schematic figure in the centre of the instrument – the VOR-ILS display is shown in relation to this figure. The heading indicator is usually slaved to a remote compass and the HSI is frequently interconnected with an autopilot capable of following the heading select bug and of executing an ILS approach by following the localizer and glide slope.
On a conventional VOR indicator, left–right and to–from must be interpreted in the context of the selected course. When an HSI is tuned to a VOR station, left and right always mean left and right and TO/FROM is indicated by a simple triangular arrowhead pointing to the VOR. If the arrowhead points to the same side as the course selector arrow, it means TO, and if it points behind to the side opposite the course selector, it means FROM. The HSI illustrated here is a type designed for smaller airplanes and is the size of a standard 3 ¼-inch instrument. Airline and jet aircraft HSIs are larger and may include more display elements.
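As a rough illustration of the TO/FROM logic just described, the sketch below is a hypothetical simplification (the function names and the 90° threshold are assumptions, not taken from any avionics specification): the flag depends only on whether the selected course points generally toward or away from the station.

```python
def wrap180(angle: float) -> float:
    """Wrap an angular difference in degrees into the range (-180, 180]."""
    return (angle + 180.0) % 360.0 - 180.0

def to_from_flag(selected_course: float, bearing_to_station: float) -> str:
    # If flying the selected course would carry the aircraft toward the VOR,
    # the arrowhead points the same way as the course arrow: "TO".
    return "TO" if abs(wrap180(bearing_to_station - selected_course)) < 90.0 else "FROM"

print(to_from_flag(360.0, 10.0))    # 'TO'   - station lies roughly ahead along the course
print(to_from_flag(360.0, 170.0))   # 'FROM' - station lies behind
```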
The most modern HSI displays are electronic and often integrated with electronic flight instrument systems into so-called "glass cockpit" systems.
Remote indicating compass
The HSI is part of a remote indicating compass system, which was developed to compensate for the errors and limitations of the older type of heading indicators. The two panel-mounted components of a typical system include the HSI and a slaving control and compensator unit, which pilots can set to auto-correct the gyro error using readings from a remotely mounted magnetic slaving transmitter when the system is set to "slave gyro" mode. In a "free gyro" mode, pilots have to manually adjust their HSI.
See also
Acronyms and abbreviations in avionics
Flight instruments
Radio magnetic indicator
Notes
Avionics
Navigational flight instruments
Radio navigation | Horizontal situation indicator | [
"Technology"
] | 539 | [
"Avionics",
"Aircraft instruments",
"Navigational flight instruments"
] |
5,636,358 | https://en.wikipedia.org/wiki/Sequencing%20batch%20reactor | Sequencing batch reactors (SBR) or sequential batch reactors are a type of activated sludge process for the treatment of wastewater. SBRs treat wastewater such as sewage or output from anaerobic digesters or mechanical biological treatment facilities in batches. Oxygen is bubbled through the mixture of wastewater and activated sludge to reduce the organic matter (measured as biochemical oxygen demand (BOD) and chemical oxygen demand (COD)). The treated effluent may be suitable for discharge to surface waters or possibly for use on land.
Overview
While there are several configurations of SBRs, the basic process is similar. The installation consists of one or more tanks that can be operated as plug flow or completely mixed reactors. The tanks have a “flow through” system, with raw wastewater (influent) coming in at one end and treated water (effluent) flowing out the other. In systems with multiple tanks, while one tank is in settle/decant mode the other is aerating and filling. In some systems, tanks contain a section known as the bio-selector, which consists of a series of walls or baffles which direct the flow either from side to side of the tank or under and over consecutive baffles. This helps to mix the incoming influent and the returned activated sludge (RAS), beginning the biological digestion process before the liquor enters the main part of the tank.
Treatment stages
There are five stages in the treatment process:
Fill
React
Settle
Decant
Idle
First, the inlet valve is opened and the tank is filled, while mixing is provided by mechanical means, but no air is added yet. This stage is also called the anoxic stage. During the second stage, aeration of the mixed liquor is performed by the use of fixed or floating mechanical pumps or by transferring air into fine bubble diffusers fixed to the floor of the tank. No aeration or mixing is provided in the third stage and the settling of suspended solids starts. During the fourth stage the outlet valve opens and the "clean" supernatant liquor exits the tank.
Removal of constituents
Aeration times vary according to the plant size and the composition/quantity of the incoming liquor, but are typically 60 to 90 minutes. The addition of oxygen to the liquor encourages the multiplication of aerobic bacteria and they consume the nutrients. This process encourages the conversion of nitrogen from its reduced ammonia form to oxidized nitrite and nitrate forms, a process known as nitrification.
To remove phosphorus compounds from the liquor, aluminium sulfate (alum) is often added during this period. It reacts to form non-soluble compounds, which settle into the sludge in the next stage.
The settling stage is usually the same length in time as the aeration. During this stage the sludge formed by the bacteria is allowed to settle to the bottom of the tank. The aerobic bacteria continue to multiply until the dissolved oxygen is all but used up. Conditions in the tank, especially near the bottom are now more suitable for the anaerobic bacteria to flourish. Many of these, and some of the bacteria which would prefer an oxygen environment, now start to use oxidized nitrogen instead of oxygen gas (as an alternate terminal electron acceptor) and convert the nitrogen to a gaseous state, as nitrogen oxides or, ideally, molecular nitrogen (dinitrogen, N2) gas. This is known as denitrification.
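The nitrogen conversions described above can be summarized as follows (a simplified scheme that omits cell synthesis; the stoichiometry is the commonly quoted textbook form, stated here as an assumption rather than drawn from this article):

```latex
% Aerobic react stage (nitrification):
\[
\mathrm{NH_4^+} + 1.5\,\mathrm{O_2} \rightarrow \mathrm{NO_2^-} + \mathrm{H_2O} + 2\,\mathrm{H^+},
\qquad
\mathrm{NO_2^-} + 0.5\,\mathrm{O_2} \rightarrow \mathrm{NO_3^-}
\]
% Anoxic settle stage (denitrification, via nitrogen oxide intermediates):
\[
\mathrm{NO_3^-} \rightarrow \mathrm{NO_2^-} \rightarrow \mathrm{NO} \rightarrow \mathrm{N_2O} \rightarrow \mathrm{N_2}
\]
```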
An anoxic SBR can be used for anaerobic processes, such as the removal of ammonia via Anammox, or the study of slow-growing microorganisms. In this case, the reactors are purged of oxygen by flushing with inert gas and there is no aeration.
As the bacteria multiply and die, the sludge within the tank increases over time and a waste activated sludge (WAS) pump removes some of the sludge during the settling stage to a digester for further treatment. The quantity or “age” of sludge within the tank is closely monitored, as this can have a marked effect on the treatment process.
The sludge is allowed to settle until clear water is on the top 20 to 30 percent of the tank contents.
The decanting stage most commonly involves the slow lowering of a scoop or “trough” into the basin. This has a piped connection to a lagoon where the final effluent is stored for disposal to a wetland, tree plantation, ocean outfall, or to be further treated for use on parks, golf courses etc.
Conversion
In some situations in which a traditional treatment plant cannot fulfill required treatment (due to higher loading rates, stringent treatment requirements, etc.), the owner might opt to convert their traditional system into a multi-SBR plant. Conversion to SBR will create a longer sludge age, minimizing sludge handling requirements downstream of the SBR.
The reverse can also be done, in which SBR systems would be converted into extended aeration (EA) systems. SBR treatment systems that cannot cope with a sudden, sustained increase of influent may easily be converted into EA plants. Extended aeration plants are more flexible in flow rate, eliminating restrictions presented by pumps located throughout the SBR systems. Clarifiers can be retrofitted in the equalization tanks of the SBR.
See also
Aerobic granulation
Diffuser (sewage)
List of waste-water treatment technologies
Upflow anaerobic sludge blanket digestion
References
Environmental engineering
Sewerage | Sequencing batch reactor | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,123 | [
"Chemical engineering",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering"
] |
5,636,533 | https://en.wikipedia.org/wiki/Balloon-carried%20light%20effect | A balloon-carried light effect is a special effect carried by a balloon, which can be fixed with a rope to the ground or free-flying. They are commonly misidentified as "unidentified flying objects" by members of the public.
Uses
Balloon-carried light effects can be used without safety concerns at events with a lot of people, because unlike fireworks they do not require the use of flammable substances. The brightness is much lower when non-hazardous light sources such as lightsticks or battery-powered lamps are used.
Balloon-carried light effects cannot replace fireworks, but supplement them, because of their long lighting time (if lightsticks or battery powered lamps are used) and because of their inherent safety.
To realise a balloon-carried light effect, one or more balloons are used, capable of carrying a lightstick or a small battery powered lamp. Further it is possible to insert one or two small lightsticks into a transparent balloon.
Larger balloon-carried light effects can also use tethered balloons, which may additionally be supplied with power through an electric cable; these 'artificial moons' may be used for floodlighting.
Open-air concert use
Balloon-carried light effects are sometimes used at open-air concerts and similar events. For film and event use, balloons are offered as suspended air-filled or tethered floating helium-filled types. The latter, for outdoor use, typically employs a 2-meter-diameter balloon with four 1000-watt tungsten-halogen lights inside, rising to about 10 m and withstanding only calm winds.
Other uses include fixed or pole-mounted balloons, spread like an umbrella with the upper half reflectively coated and the lower half semi-opaque, to light construction sites on highways or accident sites in case of emergency.
References
Balloons | Balloon-carried light effect | [
"Chemistry"
] | 357 | [
"Balloons",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
5,636,766 | https://en.wikipedia.org/wiki/Fast-ion%20conductor | In materials science, fast ion conductors are solid conductors with highly mobile ions. These materials are important in the area of solid state ionics, and are also known as solid electrolytes and superionic conductors. These materials are useful in batteries and various sensors. Fast ion conductors are used primarily in solid oxide fuel cells. As solid electrolytes they allow the movement of ions without the need for a liquid or soft membrane separating the electrodes. The phenomenon relies on the hopping of ions through an otherwise rigid crystal structure.
Mechanism
Fast ion conductors are intermediate in nature between crystalline solids which possess a regular structure with immobile ions, and liquid electrolytes which have no regular structure and fully mobile ions. Solid electrolytes find use in all solid-state supercapacitors, batteries, and fuel cells, and in various kinds of chemical sensors.
Classification
In solid electrolytes (glasses or crystals), the ionic conductivity σi can be any value, but it should be much larger than the electronic one. Usually, solids where σi is on the order of 0.0001 to 0.1 Ω−1 cm−1 (300 K) are called superionic conductors.
Proton conductors
Proton conductors are a special class of solid electrolytes, where hydrogen ions act as charge carriers. One notable example is superionic water.
Superionic conductors
Superionic conductors where σi is more than 0.1 Ω−1 cm−1 (300 K) and the activation energy for ion transport Ei is small (about 0.1 eV) are called advanced superionic conductors. The most famous example of an advanced superionic conductor solid electrolyte is RbAg4I5, where σi > 0.25 Ω−1 cm−1 and σe ~10−9 Ω−1 cm−1 at 300 K. The Hall (drift) ionic mobility in RbAg4I5 is about 2 cm2/(V•s) at room temperature. The σe – σi systematic diagram distinguishing the different types of solid-state ionic conductors is given in the figure.
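The temperature dependence of ionic conductivity in such solids is often described by an Arrhenius-type law; the sketch below is a minimal illustration with assumed parameter values (the prefactor and comparison energies are not data from the article), showing how a ~0.1 eV hopping barrier keeps conductivity high at room temperature:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def ionic_conductivity(sigma0: float, e_a: float, temperature: float) -> float:
    """One common Arrhenius form: sigma = sigma0 * exp(-Ea / (kB * T))."""
    return sigma0 * math.exp(-e_a / (K_B * temperature))

# With the same (assumed) prefactor, a small barrier of ~0.1 eV leaves the
# conductivity many orders of magnitude higher at 300 K than 0.5 or 1.0 eV barriers.
for e_a in (0.1, 0.5, 1.0):
    print(f"Ea = {e_a} eV -> sigma ~ {ionic_conductivity(1.0, e_a, 300.0):.2e} (relative units)")
```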
No clear examples of fast ion conductors in the hypothetical advanced superionic conductor class (areas 7 and 8 in the classification plot) have been described yet. However, in the crystal structures of several superionic conductors, e.g. in minerals of the pearceite-polybasite group, large structural fragments with an activation energy of ion transport Ei < kBT (300 K) were discovered in 2006.
Examples
Zirconia-based materials
A common solid electrolyte is yttria-stabilized zirconia, YSZ. This material is prepared by doping Y2O3 into ZrO2. Oxide ions typically migrate only slowly in solid Y2O3 and in ZrO2, but in YSZ, the conductivity of oxide increases dramatically. These materials are used to allow oxygen to move through the solid in certain kinds of fuel cells. Zirconium dioxide can also be doped with calcium oxide to give an oxide conductor that is used in oxygen sensors in automobile controls. Upon doping only a few percent, the diffusion constant of oxide increases by a factor of ~1000.
Other conductive ceramics function as ion conductors. One example is NASICON (Na3Zr2Si2PO12), a sodium super-ionic conductor.
beta-Alumina
Another example of a popular fast ion conductor is beta-alumina solid electrolyte. Unlike the usual forms of alumina, this modification has a layered structure with open galleries separated by pillars. Sodium ions (Na+) migrate through this material readily since the oxide framework provides an ionophilic, non-reducible medium. This material is considered as the sodium ion conductor for the sodium–sulfur battery.
Fluoride ion conductors
Lanthanum trifluoride (LaF3) is conductive for F− ions, used in some ion selective electrodes. Beta-lead fluoride exhibits a continuous growth of conductivity on heating. This property was first discovered by Michael Faraday.
Iodides
A textbook example of a fast ion conductor is silver iodide (AgI). Upon heating the solid to 146 °C, this material adopts the alpha-polymorph. In this form, the iodide ions form a rigid cubic framework, and the Ag+ centers are molten. The electrical conductivity of the solid increases by 4000x. Similar behavior is observed for copper(I) iodide (CuI), rubidium silver iodide (RbAg4I5), and Ag2HgI4.
Other Inorganic materials
Silver sulfide, conductive for Ag+ ions, used in some ion selective electrodes
Lead(II) chloride, conductive for Cl− ions at higher temperatures
Some perovskite ceramics – strontium titanate, strontium stannate – conductive for O2− ions
Zr(HPO4)2.nH2O – conductive for H+ ions
UO2HPO4.4H2O (hydrogen uranyl phosphate tetrahydrate) – conductive for H+ ions
Cerium(IV) oxide – conductive for O2− ions
Organic materials
Many gels, such as polyacrylamides and agar, are fast ion conductors
A salt dissolved in a polymer – e.g. lithium perchlorate in polyethylene oxide
Polyelectrolytes and Ionomers – e.g. Nafion, a H+ conductor
History
An important case of fast ionic conduction is that in a surface space-charge layer of ionic crystals. Such conduction was first predicted by Kurt Lehovec.
As a space-charge layer has nanometer thickness, the effect is directly related to nanoionics (nanoionics-I). Lehovec's effect is used as a basis for developing nanomaterials for portable lithium batteries and fuel cells.
See also
Mixed conductor
References
Electric and magnetic fields in matter
Electrochemical concepts | Fast-ion conductor | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,247 | [
"Materials science",
"Electric and magnetic fields in matter",
"Electrochemical concepts",
"Electrochemistry",
"Condensed matter physics"
] |
5,636,887 | https://en.wikipedia.org/wiki/Fenethylline | Fenethylline (BAN, USAN) or fenetylline (INN) is a codrug of amphetamine and theophylline and so a mutual prodrug of both. It is also spelled phenethylline; other names for it are amphetaminoethyltheophylline and amfetyline. The drug was marketed for use as a psychostimulant under the brand names Captagon, Biocapton, and Fitton. The name "Captagon" is often used generically to describe illicitly produced fenethylline.
Fenethylline is now illegal in most countries; it is produced primarily for illicit use, which takes place mainly in the Middle East, often as a stimulant for gunmen. The illicit global market for the drug was estimated in 2023 to be worth approximately US$57 billion. Smuggling of Captagon became Syria's principal export, exceeding the total of all other exports under the Assad regime during the period from 2011 to 2024 of the Syrian Civil War in which it ruled Syria; it was considered to be the world's largest producer of the drug, accounting for about 80% of the global supply. A huge quantity of "Captagon", ready for clandestine export, was captured by anti-Assad forces that took control of Damascus in December 2024.
History
Fenethylline was first synthesized by the German pharmaceutical firm Degussa AG in 1961 and used for around 25 years as a milder alternative to amphetamine and related compounds. Although there are no FDA-approved indications for fenethylline, it was used in the treatment of "hyperkinetic children", in what would now be called attention deficit hyperactivity disorder, and, less commonly, for narcolepsy and depression. One of the main advantages of fenethylline was that it does not increase blood pressure to the same extent as an equivalent dose of amphetamine and so could be used in patients with cardiovascular conditions.
Fenethylline was considered to have fewer side effects and less potential for abuse than amphetamine. However, because its chemical composition is similar to amphetamine's, fenethylline was listed in 1981 as a schedule I controlled substance in the United States, and it became illegal in most countries in 1986 after being listed by the World Health Organization for international scheduling under the Convention on Psychotropic Substances, even though the actual incidence of fenethylline abuse was quite low.
Pharmacology
The fenethylline molecule results when theophylline is covalently linked with amphetamine by an alkyl chain.
Fenethylline is metabolized by the body to form two drugs, amphetamine (24.5% of oral dose) and theophylline (13.7% of oral dose), both of which are active stimulants. The physiological effects of fenethylline therefore seem to result from a combination of these two compounds, although the mechanism is not entirely clear and appears to involve a synergistic effect between the amphetamine and theophylline produced following metabolism. The pharmacological actions of fenethylline before cleavage also remain poorly established, though it appears to act directly at several serotonin receptors.
Abuse and illegal trade
Abuse of fenethylline using the former brand name Captagon is common in the Middle East, and counterfeit versions of the drug continue to be available despite its illegality. Fenethylline is much less common outside of the Middle East, to the point that police may not recognize the drug. Fenethylline production and export were a significant industry sponsored by Bashar Al-Assad's government, with revenue from its exports contributing more than 90% of its foreign currency. After the fall of Al-Assad's government in Syria, Captagon trade fell by around 90%. The Assad regime's annual fenethylline revenues were estimated to have been worth US$57 billion in 2022, about three times the total trade of the entire Mexican illicit drug market.
Many of these counterfeit "Captagon" tablets contain other amphetamine derivatives that are easier to produce, but are pressed and stamped to look like Captagon pills. Some counterfeit Captagon pills analysed do contain fenethylline, indicating that illicit production of the drug continues to take place. These illicit pills often contain "a mix of amphetamines, caffeine and various fillers", which are sometimes referred to as "captagon" (with a lowercase "c").
Fenethylline is a popular drug in Western Asia, and American media outlet CNN reported in 2015 that it is allegedly used by militant groups in Syria. Later research demonstrated that it was the former Syrian government of Bashar al-Assad that had been financing production and sponsoring networks of drug dealers in coordination with Syrian intelligence aligned with the former Assad regime. It is manufactured locally by a cheap and simple process. In July 2019 in Lebanon, captagon was sold for $1.50 to $2.00 a pill. In 2021 in Syria, low-quality pills were sold locally for less than $1, while high-quality pills are increasingly smuggled abroad and may cost upwards of $14 each in Saudi Arabia.
According to some leaks, militant groups export the drug in exchange for weapons and cash. According to Abdelelah Mohammed Al-Sharif, secretary general of the National Committee for Narcotics Control and assistant director of Anti-Drug and Preventative Affairs, forty percent of users between the ages of twelve and twenty-two in Saudi Arabia are addicted to fenethylline. In 2017, fenethylline was the most popular recreational drug in the Arabian Peninsula.
In October 2015, a member of the Saudi royal family, Prince Abdel Mohsen Bin Walid Bin Abdulaziz, and four others were detained in Beirut on charges of drug trafficking after airport security discovered two tons of fenethylline pills and some cocaine on a private jet scheduled to depart for Riyadh, the Saudi capital. The following month, Agence France-Presse reported that Turkish authorities had seized two tonnes of fenethylline—about eleven million pills—during raids in the Hatay region on the Syrian border. The pills had been produced in Syria and were being shipped to countries in the Arab states of the Persian Gulf.
In December 2015, the Lebanese Army announced that it had discovered two large-scale drug production workshops in the north of the country and seized large quantities of fenethylline pills. Two days earlier, three tons of fenethylline and hashish were seized at Beirut Airport, concealed in school desks being exported to Egypt.
Traces of the drug were found on a mobile phone used by Mohamed Lahouaiej Bouhlel, a French-Tunisian who killed eighty-four civilians in Nice on Bastille Day 2016.
In May 2017, French customs at Charles de Gaulle Airport seized 750,000 fenethylline pills being transported from Lebanon to Saudi Arabia. In 2017, two other consignments of pills were found at Charles de Gaulle Airport: in January, heading for the Czech Republic, and in February, hidden in steel moulds. Further investigation showed that the seized products mainly contained a mixture of amphetamine and theophylline.
In January 2018, Saudi Arabia seized 1.3 million fenethylline pills at the Al-Haditha crossing near the border with Jordan. In December 2018, Greece intercepted a Syrian ship sailing for Libya, carrying six tonnes of processed cannabis and three million fenethylline pills. In July 2019, a shipment of 33 million fenethylline pills, weighing 5.25 tonnes, was seized in Greece coming from Syria. In July 2019, 800,000 fenethylline pills were found on a boat in the United Arab Emirates. In August 2019, Saudi customs at Al-Haditha seized over 2.5 million fenethylline pills found inside a truck and a private vehicle.
In February 2020, the UAE found 35 million fenethylline pills in a shipment of electric cables from Syria to Jebel Ali. In April 2020, Saudi Arabia seized 44.7 million fenethylline pills smuggled from Syria, and citing drug smuggling concerns, imposed an import ban on fruits and vegetables from Lebanon, causing the price of Lebanese lettuce to plummet. On 1 July 2020, an anti-drug operation coordinated in Italy by the Italian Guardia di Finanza and Customs and Monopolies Agency seized fourteen tonnes of amphetamines, labeled as Captagon, smuggled from Syria and initially thought by the Italian authorities to have been produced by ISIS, which were found in three shipping containers filled with around 84 million pills, in the southern port of Salerno.
In November 2020, Egypt seized two shipments of fenethylline pills at Damietta port coming from Syria. The first had over 3.2 million tablets, while the second contained 11 million. In December 2020, Italian authorities seized about 14 tonnes of fenethylline arriving from Latakia, Syria, and heading towards Libya, consisting of about 85 million pills, worth around $1 billion.
In January 2021, Egyptian authorities seized eight tons of fenethylline and another eight tons of hashish at Port Said, from a shipment that arrived from Lebanon. In February 2021, Lebanese customs seized at Beirut port a shipment of 5 million fenethylline pills hidden in a tile-making machine, intended for Greece and Saudi Arabia. In April 2021, Saudi authorities discovered 5.3 million fenethylline pills hidden in fruits imported from Lebanon.
Production in Syria
The drug has played a role in the Syrian civil war. The production and sale of fenethylline generates large revenues which are likely used to fund the purchase of weapons, and fenethylline is used as a stimulant by combatants. Poverty and international sanctions that limit legal exports are contributing factors.
In May 2021, The Guardian described the effects of fenethylline production in Syria on the economy as "a dirty business that is creating a near-narco-state". Drug money flowing into Syria is destabilizing legitimate businesses, positioning it as the global centre of fenethylline production, with increased industrialization, adaptation, and technical sophistication. In June 2021, Saudi authorities at Jeddah port seized 14 million fenethylline tablets hidden inside a shipment of iron plates coming from Lebanon. In the same month, Saudi authorities seized a shipment of 4.5 million fenethylline pills, smuggled inside several orange cartons, at Jeddah port. In July 2021, Saudi customs discovered 2.1 million fenethylline pills at Al-Haditha hidden in a tomato paste shipment. In December 2024, just after the government fell, Syrian former-rebels found warehouses filled with Captagon alongside factory equipment to make it and also found some Captagon pills inside the copper coils of new voltage stabilisers, showing one way the former Syrian government used to smuggle Captagon out of the country. The Captagon was all destroyed by the new government; "Khattab", a pseudonym of one of the former rebels said "We destroyed and burned it because it's harmful to people. It harms nature and people and humans."
The New York Times reported in December 2021 that the Syrian Army's elite 4th Armoured Division, commanded by Maher al-Assad, the brother of Syrian President Bashar al-Assad, oversaw much of the production and distribution of fenethylline, among other drugs. The unit controlled manufacturing facilities, packing plants, and smuggling networks all across Syria, and had started to deal in crystal meth. The division's security bureau, headed by Maj. Gen. Ghassan Bilal, provided protection for factories and along smuggling routes to the port city Latakia and to border crossings with Jordan and Lebanon. Jihad Yazigi, editor of The Syria Report, reported that fenethylline had "probably become Syria's most important source of foreign currency."
Military use
Fenethylline is a major stimulant, sometimes dubbed the "jihad drug", used by some jihadist fighters. It quickly produces a euphoric intensity in users, allowing them to stay awake for a very long time, remaining more calm and focused under the effects of the drug, which allows the senses to stay at more operational levels. It also helps to subdue feelings of fear and hunger during lengthy operations. Psychiatrist Robert Keisling said that the drug "gives you a sense of well-being and euphoria", along with the thought that "you're invincible and that nothing can harm you." Those who go on jihad missions take high doses to prepare, says a former fighter associated with the Muslim Brotherhood. He described the effect: "They go blank. Their heart rate spikes. They lose all connection to their emotions and thoughts." Some commented on this effect as a "zombie-like detachment".
An illegal Syrian manufacturer told New York Magazine in 2015 of the effect the drug had on fighters: "[If] someone takes many pills, like 30 or so, they become violent and crazy, paranoid, unafraid of anything. They'll have a thirst for fighting and killing and will shoot at whatever they see. They lose any feeling or empathy for the people in front of them and can kill them without caring at all."
According to some commentators, fighters taking the drug in Syria were better able to tolerate the pain of being shot. A drug control officer in the central city of Homs told Reuters that protestors and fighters were able to resist painful interrogations better while on fenethylline. Former fighters have told the media that the pills helped them overcome their fear. Doctors report that the drug has dangerous side effects, including psychosis and brain damage. According to former fighters, hundreds became addicted to the pills they were given by brigade leaders without knowing what they were taking.
Fenethylline use was associated with the rise of jihadist group ISIS. One 19-year-old fighter named Kareem, who said he fought alongside ISIS for more than a year, told CNN in 2014: "They gave us drugs, hallucinogenic pills that would make you go to battle not caring if you live or die."
In February 2023, Israel's Ministry of Defense claimed to have thwarted an attempt to smuggle thousands of fenethylline tablets from the West Bank into the Gaza Strip. Hamas claimed it had seized 50,000 fenethylline pills on the border, and claimed Israel was attempting to dope Gaza.
Israel has publicly stated that fenethylline was used during the October 7 attacks, but this has been doubted by experts. Israeli forces said they had found fenethylline-containing tablets, powder, and liquid on the bodies of the attackers. But Caroline Rose of Newline Institute said that she had never seen fenethylline made in liquid form. While precursor chemicals for fenethylline tend to be in powdered form, fenethylline itself is not commonly a powder. She concluded "I find it somewhat difficult to believe that, in a single raid, we find two new forms of Captagon." Videos compiled by the Israeli government of the Hamas attack—cobbled together from cell phones, GoPros, as well as car and surveillance cameras—allege that at least some of the militants were under the influence of the drug. Caroline Rose also noted that 'Fighters might have taken cocaine right before, or captagon, or no substances at all. Some might have taken caffeine, some may be sleep deprived...but there’s no way that captagon was a factor to blame in the violence and atrocities that we witnessed on October 7'.
Fenethylline was reportedly used by the ISIS attackers in the Crocus City Hall attack in Russia in 2024.
Synthesis
According to reviewers Pergolizzi Jr., et al., writing in 2024, the clandestine chemical synthesis of fenethylline is "straightforward and inexpensive".
Small-scale synthesis in academic laboratories is equally straightforward:
The overall transformation is accomplished in two laboratory steps, each requiring extraction and purification. In the first step, theophylline (1) is alkylated in a substitution reaction using 1-bromo-2-chloroethane (2) to give 7-(β-chloroethyl)theophylline (Benaphyllin, Eupnophile; 3). In the second step, the primary amine in amphetamine (4) displaces the terminal halide in 3 to give fenethylline (5).
The synthesis can also be performed with analogous reagents and solvents. Use of tetradeutero-vic-dichloroethane instead of vic-chlorobromoethane reasonably yields a perdeuterated-bridge analogue.
Identification in biological samples
A gas chromatography-mass spectrometry (GC-MS) method for the determination of fenethylline and related substances in plasma, urine, and hair has been developed, suggesting that hair testing can be useful for determining a drug history of fenethylline, and discrimination between fenethylline and its precursor, amphetamine.
See also
Amfecloral
Cafedrine
Famprofazone
Fencamine
Theodrenaline
Theophylline ephedrine
ZDCM-04
References
Further reading
Syrian civil war
Substituted amphetamines
Xanthines
Codrugs
Adenosine receptor antagonists
Norepinephrine-dopamine releasing agents
Stimulants
1961 introductions
Illegal drug trade
Wakefulness-promoting agents | Fenethylline | [
"Chemistry"
] | 3,680 | [
"Alkaloids by chemical classification",
"Xanthines"
] |
5,636,920 | https://en.wikipedia.org/wiki/Bjarne%20Tromborg | Bjarne Tromborg (born 1940) is a Danish physicist, best known for his work in particle physics and photonics.
Biography
Tromborg was born in Give, Denmark. In 1968, he received the M.Sc. degree in physics and mathematics from the Niels Bohr Institute, in Copenhagen, Denmark. He was a university researcher studying high-energy particle physics from 1968 to 1978. In 1979, he joined the research laboratory of the Danish Teleadministrations in Copenhagen. He was Head of Optical Communications Department at Tele Danmark Research, Horsholm, Denmark from 1987 to 1995.
He was an adjunct professor at the Niels Bohr Institute from 1991 to 2001. In 1997, he took a leave of absence at the Technion - Israel Institute of Technology. Until his retirement at the end June 2006, he was a research professor at COM•DTU, Department of Communications, Optics and Materials (which became DTU Fotonik, Department of Photonics Engineering, in 2008 and DTU Electro, Department of Electrical and Photonics Engineering, in 2022), Technical University of Denmark.
Research
Tromborg co-authored a research monograph and approximately one hundred journal and conference publications, mostly on physics and optoelectronics.
At the Niels Bohr Institute, he carried out research in elementary particle physics, particularly analytic S-matrix theory and electromagnetic corrections to hadron scattering. He coauthored a research monograph on dispersion theory.
In the early 1980s, he switched to photonics. Tromborg was one of the first to develop advanced theoretical models for complex semiconductor laser structures such as external laser cavities and distributed feedback lasers. Computer simulations and measurements confirmed the validity of the theoretical models and their predictions. Several co-workers including Henning Olesen, Gunnar Jacobsen, Jens Henrik Osmundsen, Finn Mogensen, Kristian Stubkjær, Jesper Mørk, Xing Pan, Hans Erik Lassen and Björn Jónsson contributed to this work over a period of almost 15 years until 1995.
At TeleDanmark Research in the late 1980s and early 1990s Tromborg and colleagues worked to study the dynamics of active semiconductor materials in order to understand the physical relaxation processes at play, their strength and characteristic time scales. A pump-probe set-up employing femtosecond lasers was established and modeling efforts were initialized. Tromborg led the effort to identify this as a topic that would remain important for many years and argued that Denmark should work to lead in the field. He also proposed theoretical methods that could be used to estimate the size of these ultrafast dynamical effects and their role in understanding the origin of nonlinear gain suppression in semiconductor lasers.
From 1999 to his retirement at the end of June 2006, Tromborg was with the Department of Communications, Optics and Materials (COM*DTU) at the Technical University of Denmark. In this period, he worked in both research and education, as well as in securing several European Union research projects for COM*DTU. Tromborg took up the field of photonic crystals and initiated and contributed himself to activities within the theory of photonic crystals. He also applied general techniques within stochastic theory and signal analysis to develop improved descriptions of noise spectra in nonlinear semiconductor optical amplifiers.
Awards and recognition
Tromborg received the Electro-prize from the Danish Society of Engineers in 1981.
He was Chairman of the Danish Optical Society from 1999 to 2002.
He has been Associate Editor of the IEEE Journal of Quantum Electronics since 2003.
At his retirement, a symposium on photonics was held in his honor on 22 June 2006 at the Technical University of Denmark.
References
External links
Publications
21st-century Danish physicists
Particle physicists
1940 births
Living people
People from Vejle Municipality | Bjarne Tromborg | [
"Physics"
] | 781 | [
"Particle physicists",
"Particle physics"
] |
5,637,003 | https://en.wikipedia.org/wiki/Generalised%20compound | A generalized compound is a mixture of chemical compounds of constant composition, despite possible changes in the total amount. The concept is used in the Dynamic Energy Budget theory, where biomass is partitioned into a limited set of generalised compounds, which contain a high percentage of organic compounds. The amount of generalized compound can be quantified in terms of weight, but more conveniently in terms of C-moles. The concept of strong homeostasis has an intimate relationship with that of generalised compound.
References
Metabolism | Generalised compound | [
"Chemistry",
"Biology"
] | 104 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
5,637,355 | https://en.wikipedia.org/wiki/Neural%20facilitation | Neural facilitation, also known as paired-pulse facilitation (PPF), is a phenomenon in neuroscience in which postsynaptic potentials (PSPs) (EPPs, EPSPs or IPSPs) evoked by an impulse are increased when that impulse closely follows a prior impulse. PPF is thus a form of short-term synaptic plasticity. The mechanisms underlying neural facilitation are exclusively pre-synaptic; broadly speaking, PPF arises due to increased presynaptic Ca2+ concentration leading to a greater release of neurotransmitter-containing synaptic vesicles. Neural facilitation may be involved in several neuronal tasks, including simple learning, information processing, and sound-source localization.
Mechanisms
Overview
Ca2+ plays a significant role in transmitting signals at chemical synapses. Voltage-gated Ca2+ channels are located within the presynaptic terminal. When an action potential invades the presynaptic membrane, these channels open and Ca2+ enters. A higher concentration of Ca2+ enables synaptic vesicles to fuse to the presynaptic membrane and release their contents (neurotransmitters) into the synaptic cleft to ultimately contact receptors in the postsynaptic membrane. The amount of neurotransmitter released is correlated with the amount of Ca2+ influx. Therefore, short-term facilitation (STF) results from a build-up of Ca2+ within the presynaptic terminal when action potentials propagate close together in time.
Facilitation of excitatory post-synaptic current (EPSC) can be quantified as a ratio of subsequent EPSC strengths. Each EPSC is triggered by pre-synaptic calcium concentrations and can be approximated by:
EPSC = k([Ca2+]presynaptic)^4 = k([Ca2+]rest + [Ca2+]influx + [Ca2+]residual)^4
Where k is a constant.
Facilitation = (EPSC2 − EPSC1) / EPSC1 = (1 + [Ca2+]residual / [Ca2+]influx)^4 − 1
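As a rough worked example (not taken from the experimental literature cited here), the sketch below assumes a residual Ca2+ equal to 5% of the influx, an arbitrary illustrative value, and applies the fourth-power relationship above.

```python
# Illustrative only: assumes residual Ca2+ is 5% of the influx (an arbitrary value)
# and a fourth-power dependence of EPSC amplitude on presynaptic [Ca2+].
ca_influx = 1.0       # Ca2+ entering per action potential (arbitrary units)
ca_residual = 0.05    # assumed residual Ca2+ left over from the first impulse

epsc1 = ca_influx ** 4
epsc2 = (ca_influx + ca_residual) ** 4

facilitation = (epsc2 - epsc1) / epsc1
print(f"paired-pulse facilitation ≈ {facilitation:.1%}")   # about 21.6%
```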
Experimental evidence
Early experiments by Del Castillo & Katz in 1954 and Dudel & Kuffler in 1968 showed that facilitation was possible at the neuromuscular junction even when transmitter release did not occur, indicating that facilitation is an exclusively presynaptic phenomenon.
Katz and Miledi proposed the residual Ca2+ hypothesis. They attributed the increase in neurotransmitter release to residual or accumulated Ca2+ ("active calcium") within the axon membrane that remains attached to the membrane's inner surface. Katz and Miledi manipulated the Ca2+ concentration within the presynaptic membrane to determine whether or not residual Ca2+ remaining within the terminal after the first impulse caused an increase in neurotransmitter release following the second stimulus.
During the first nerve impulse, the Ca2+ concentration was either significantly below or nearing that of the second impulse. When the Ca2+ concentration was approaching that of the second impulse, facilitation was increased. In this first experiment, stimuli were presented in intervals of 100 ms between the first and second stimuli. An absolute refractory period was reached when intervals were about 10 ms apart.
To examine facilitation during shorter intervals, Katz and Miledi directly applied brief depolarizing stimuli to nerve endings. When increasing the depolarizing stimulus from 1–2 ms, neurotransmitter release greatly increased due to accumulation of active Ca2+. Therefore, the degree of facilitation depends on the amount of active Ca2+, which is determined by the reduction in Ca2+ conductance over time as well as the amount of Ca2+ removed from axon terminals after the first stimulus. Facilitation is greatest when the impulses are closest together because the Ca2+ conductance would not return to baseline prior to the second stimulus. Therefore, both the Ca2+ conductance and the accumulated Ca2+ would be greater for the second impulse when presented shortly after the first.
In the calyx of Held synapse, short-term facilitation (STF) has been shown to result from the binding of residual Ca2+ to neuronal calcium sensor 1 (NCS1). Conversely, STF has been shown to decrease when Ca2+ chelators are added to the synapse, chelating and thereby reducing residual Ca2+. Therefore, "active Ca2+" plays a significant role in neural facilitation.
In the synapse between Purkinje cells, short-term facilitation has been shown to be entirely mediated by the facilitation of Ca2+ currents through the voltage-dependent calcium channels.
Relation to other forms of short-term synaptic plasticity
Augmentation and potentiation
Short-term synaptic enhancement is often differentiated into categories of facilitation, augmentation, and potentiation (also referred to as post-tetanic potentiation or PTP). These three processes are often differentiated by their time scales: facilitation usually lasts for tens of milliseconds, while augmentation acts on a time scale on the order of seconds and potentiation has a time course of tens of seconds to minutes. All three effects increase the probability of neurotransmitter release from the presynaptic membrane, but the underlying mechanism is different for each. Paired-pulse facilitation is caused by the presence of residual Ca2+, augmentation likely arises due to increased action of the presynaptic protein munc-13, and post-tetanic potentiation is mediated by presynaptic activation of protein kinases. The type of synaptic enhancement seen in a given cell is also related to variant dynamics of Ca2+ removal, which is in turn dependent upon the type of stimuli; a single action potential leads to facilitation, while a short tetanus generally causes augmentation and a longer tetanus leads to potentiation.
Short-term depression (STD)
Short-term depression (STD) operates in the opposite direction of facilitation, decreasing the amplitude of PSPs. STD occurs due to a decrease in the readily releasable pool of vesicles (RRP) as a result of frequent stimulation. The inactivation of presynaptic channels after repeated action potentials also contributes to STD. Depression and facilitation interact to create short-term plastic changes within neurons, and this interaction is called the dual-process theory of plasticity. Basic models present these effects as additive, with the sum creating the net plastic change (facilitation - depression = net change). However, it has been shown that depression occurs earlier on in the stimulus-response pathway than facilitation, and therefore plays into the expression of facilitation. Many synapses exhibit properties of both facilitation and depression. In general, however, synapses with low initial probability of vesicle release are more likely to exhibit facilitation, and synapses with high probability of initial vesicle release are more likely to exhibit depression.
Relation to information transmission
Synaptic filtering
Because the probability of vesicle release is activity-dependent, synapses can act as dynamic filters for information transmission. Synapses with a low initial probability of vesicle release act as high-pass filters: because the release probability is low, a higher-frequency signal is needed to trigger release, and the synapse thus selectively responds to high-frequency signals. Likewise, synapses with high initial release probabilities serve as low-pass filters, responding to lower-frequency signals. Synapses with an intermediate probability of release act as band-pass filters that selectively respond to a specific range of frequencies. These filtering characteristics may be affected by a variety of factors, including both PPD and PPF, as well as chemical neuromodulators. In particular, because synapses with low release probabilities are more likely to experience facilitation than depression, high-pass filters are often converted to band-pass filters. Likewise, because synapses with high initial release probabilities are more likely to undergo depression than facilitation, it is common for low-pass filters to become band-pass filters, as well. Neuromodulators, meanwhile, may affect these short-term plasticities. In synapses with intermediate release probabilities, properties of the individual synapse will determine how the synapse changes in response to stimuli. These changes in filtration affect information transmission and encoding in response to repeated stimuli.
Sound-source localization
In humans, sound localization is primarily accomplished using information about how the intensity and timing of a sound differ between the two ears. Neuronal computations involving these interaural intensity differences (IIDs) and interaural time differences (ITDs) are typically carried out in different pathways in the brain. Short-term plasticity likely assists in differentiating between these two pathways: short-term facilitation dominates in intensity pathways, while short-term depression dominates in temporal pathways. These different types of short-term plasticity allow for different kinds of information filtration, thus contributing to the division of the two kinds of information into distinct processing streams.
The filtering capabilities of short-term plasticity may also assist with encoding information related to amplitude modulation (AM). Short-term depression can dynamically adjust the gain on high-frequency inputs, and may thus allow for an expanded high-frequency range for AM. A mixture of facilitation and depression may also assist in AM coding by leading to rate filtering.
See also
Long-term potentiation
Synaptic plasticity
Neuroplasticity
Post-tetanic potentiation
Sensitization
Synaptic augmentation
References
Further reading
Neuroscience
Neurophysiology | Neural facilitation | [
"Biology"
] | 2,005 | [
"Neuroscience"
] |
5,637,406 | https://en.wikipedia.org/wiki/List%20of%20adjectivals%20and%20demonyms%20of%20astronomical%20bodies | The adjectival forms of the names of astronomical bodies are not always easily predictable. Attested adjectival forms of the larger bodies are listed below, along with the two small Martian moons; in some cases they are accompanied by their demonymic equivalents, which denote hypothetical inhabitants of these bodies.
For Classical (Greco-Roman) names, the adjectival and demonym forms normally derive from the oblique stem, which may differ from the nominative form used in English for the noun form. For instance, for a large portion of names ending in -s, the oblique stem and therefore the English adjective changes the -s to a -d, -t, or -r, as in Mars–Martian, Pallas–Palladian and Ceres–Cererian;
occasionally an -n has been lost historically from the nominative form, and reappears in the oblique and therefore in the English adjective, as in Pluto–Plutonian and Atlas–Atlantean.
Many of the more recent or more obscure names are only attested in mythological or literary contexts, rather than in specifically astronomical contexts. Forms ending in -ish or -ine, such as "Puckish", are not included below if a derivation in -an is also attested. Rare forms, or forms only attested with spellings not in keeping with the IAU-approved spelling (such as c for k), are shown in italics.
Note on pronunciation
The suffix -ian is always unstressed: that is, /iən/. The related ending -ean, from an e in the root plus a suffix -an, has traditionally been stressed (that is, /ˈiːən/) if the e is long ē in Latin (or is from ē in Greek); but if the e is short in Latin, the suffix is pronounced the same as -ian. In practice forms ending in -ean may be pronounced as if they were spelled -ian even if the e is long in Latin. This dichotomy should be familiar from the dual pronunciations of Caribbean as /ˌkærɪˈbiːən/ and /kəˈrɪbiən/.
Generic bodies
Constellations
Derivative forms of constellations are used primarily for meteor showers. The genitive forms of the constellations are used to name stars. (See List of constellations.) Other adjectival forms are less common.
Sun
Planets
Moons
Galaxies
See also
Demonym
Notes
References
External links
Wordorigins.org: Naming The Planets, Part I
Astronomical nomenclature
Lists of astronomical objects
Lists of place names
Lists of demonyms | List of adjectivals and demonyms of astronomical bodies | [
"Astronomy"
] | 510 | [
"Astronomy-related lists",
"Astronomical nomenclature",
"Astronomical objects",
"Lists of astronomical objects"
] |
5,637,418 | https://en.wikipedia.org/wiki/Chimera%20%28virus%29 | A chimera or chimeric virus is a virus that contains genetic material derived from two or more distinct viruses. It is defined by the Center for Veterinary Biologics (part of the U.S. Department of Agriculture's Animal and Plant Health Inspection Service) as a "new hybrid microorganism created by joining nucleic acid fragments from two or more different microorganisms in which each of at least two of the fragments contain essential genes necessary for replication." The term genetic chimera had already been defined to mean: an individual organism whose body contained cell populations from different zygotes or an organism that developed from portions of different embryos. Chimeric flaviviruses have been created in an attempt to make novel live attenuated vaccines.
Etymology
In mythology, a chimera is a creature such as a hippogriff or a gryphon formed from parts of different animals, thus the name for these viruses.
As a natural phenomenon
Viral genomes fall into two broad types, DNA and RNA, and their distribution differs between hosts. In prokaryotes, the great majority of viruses possess double-stranded (ds) DNA genomes, with a substantial minority of single-stranded (ss) DNA viruses and only a limited presence of RNA viruses. In contrast, in eukaryotes, RNA viruses account for the majority of virome diversity, although ssDNA and dsDNA viruses are common as well.
In 2012, the first example of a naturally occurring RNA-DNA hybrid virus was unexpectedly discovered during a metagenomic study of the acidic extreme environment of Boiling Springs Lake, in Lassen Volcanic National Park, California. The virus was named BSL-RDHV (Boiling Springs Lake RNA DNA Hybrid Virus). Its genome is related to that of a DNA circovirus, which usually infects birds and pigs, and to that of an RNA tombusvirus, which infects plants. The study surprised scientists because DNA and RNA viruses are so distinct that the way the chimera came together was not understood.
Other viral chimeras have also been found, and the group is known as the CHIV viruses ("chimeric viruses").
As a bioweapon
Combining two pathogenic viruses can increase the lethality of the resulting virus, which is why chimeric viruses have been considered for use as bioweapons. For example, the Soviet Union's Chimera Project attempted in the late 1980s and early 1990s to combine DNA from Venezuelan equine encephalitis virus and smallpox virus at one location, and from Ebola virus and smallpox virus at another location, even in the face of Boris Yeltsin's decree of 11 April 1992.
A combination Smallpox virus and Monkeypox virus has also been studied.
As a medical treatment
Studies have shown that chimeric viruses can also be developed to have medical benefits. The US Food and Drug Administration (FDA) has approved the use of chimeric antigen receptor (CAR) T cells to treat relapsed non-Hodgkin lymphoma. By introducing a chimeric antigen receptor into T cells, the T cells become more efficient at identifying and attacking tumor cells. Studies are also in progress to create a chimeric vaccine against the four types of dengue virus; however, this has not yet been successful.
References
Viruses
Chimerism
Hybrid organisms | Chimera (virus) | [
"Biology"
] | 674 | [
"Viruses",
"Behavior",
"Tree of life (biology)",
"Hybrid organisms",
"Chimerism",
"Reproduction",
"Microorganisms"
] |
5,637,425 | https://en.wikipedia.org/wiki/Catapult%20effect | In electromagnetism, the catapult effect describes what happens when a current is passed through a loose wire lying in a magnetic field: the wire is catapulted horizontally away from the field. This occurs because of the Lorentz force that the magnetic field exerts on the electric current in the wire.
Implications of the catapult effect on science
The catapult effect is central to everyday technology because it underlies the operation of the electric motor, which is used in numerous appliances from washing machines and vacuum cleaners to cars. It helps to explain the movement of the motor itself and is thus used widely in science.
The left hand rule
The left-hand rule helps to explain why the loose wire moves as it does in the catapult effect. The rule takes its name from the use of the left hand: the thumb, the first finger and the second finger are held so that all three are mutually perpendicular. The thumb then represents the direction of motion (the force on the wire), the first finger represents the direction of the magnetic field, and the second finger represents the direction of the conventional current. As long as the directions of two of these three quantities are known, the left-hand rule can be used to predict the third. This is how the rule is applied in analysing electric motors.
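The force law underlying the rule can be illustrated numerically. The sketch below evaluates F = I L × B with NumPy; the current, wire length, and field strength are made-up example values, not data from any source.

```python
# Minimal sketch of the force on a straight current-carrying wire, F = I (L x B).
# All numerical values below are invented for illustration.
import numpy as np

current = 2.0                       # current in amperes
wire = np.array([0.0, 0.1, 0.0])    # wire segment vector L (metres), along +y
field = np.array([0.3, 0.0, 0.0])   # magnetic field B (tesla), along +x

force = current * np.cross(wire, field)   # F = I L x B
print(force)   # [0. 0. -0.06] N: the wire is pushed along -z, as the left-hand rule predicts
```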
References
Electromagnetism | Catapult effect | [
"Physics"
] | 315 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
5,637,517 | https://en.wikipedia.org/wiki/Toxicant | A toxicant is any toxic substance, whether artificial or naturally occurring. By contrast, a toxin is a poison produced naturally by an organism (e.g. plant, animal, insect, bacterium). The different types of toxicants can be found in the air, soil, water, or food.
Occurrence
Toxicants can be found in the air, soil, water, or food, and humans can be exposed to them from these environmental sources; fish, for example, can contain environmental toxicants. Tobacco smoke, e-cigarette aerosol, and the emissions of heat-not-burn tobacco products contain toxicants, as does diesel exhaust. Most heavy metals are toxicants, as are pesticides, benzene, and asbestos-like fibers such as carbon nanotubes. Possible developmental toxicants include phthalates, phenols, sunscreens, pesticides, halogenated flame retardants, perfluoroalkyl coatings, nanoparticles, e-cigarettes, and dietary polyphenols.
Related terms
By contrast, a toxin is a poison produced naturally by an organism (e.g. plant, animal, insect). The 2011 book A Textbook of Modern Toxicology states, "A toxin is a toxicant that is produced by a living organism and is not used as a synonym for toxicant—all toxins are toxicants, but not all toxicants are toxins. Toxins, whether produced by animals, plants, insects, or microbes are generally metabolic products that have evolved as defense mechanisms for the purpose of repelling or killing predators or pathogens."
Biocides are classified as oxidizing or non-oxidizing toxicants. Chlorine is the most commonly manufactured oxidizing toxicant. Chlorine is ubiquitously added to drinking water to disinfect it. Non-oxidizing toxicants include isothiazolinones and quaternary ammonium compounds.
An intoxicant is a substance that intoxicates, such as an alcoholic drink; it impairs the mind and causes a state varying from exhilaration to lethargy.
Health impacts
References
External links | Toxicant | [
"Physics",
"Environmental_science"
] | 460 | [
"Toxicology",
"Harmful chemical substances",
"Materials",
"Toxicants",
"Matter"
] |
5,637,655 | https://en.wikipedia.org/wiki/Lie%20coalgebra | In mathematics a Lie coalgebra is the dual structure to a Lie algebra.
In finite dimensions, these are dual objects: the dual vector space to a Lie algebra naturally has the structure of a Lie coalgebra, and conversely.
Definition
Let E be a vector space over a field k equipped with a linear mapping d : E → E ∧ E from E to the exterior product of E with itself. It is possible to extend d uniquely to a graded derivation (this means that, for any homogeneous elements a, b ∈ Λ(E), d(a ∧ b) = (da) ∧ b + (−1)^deg a a ∧ (db)) of degree 1 on the exterior algebra Λ(E) of E.
Then the pair (E, d) is said to be a Lie coalgebra if d ∘ d = 0, i.e., if the graded components of the exterior algebra with derivation (Λ(E), d) form a cochain complex:
E → E ∧ E → Λ3(E) → ...
Relation to de Rham complex
Just as the exterior algebra (and tensor algebra) of vector fields on a manifold form a Lie algebra (over the base field ), the de Rham complex of differential forms on a manifold form a Lie coalgebra (over the base field ). Further, there is a pairing between vector fields and differential forms.
However, the situation is subtler: the Lie bracket is not linear over the algebra of smooth functions (the error term is the Lie derivative), nor is the exterior derivative (it is a derivation, not linear over functions): they are not tensors. They are not linear over functions, but they behave in a consistent way, which is not captured simply by the notion of Lie algebra and Lie coalgebra.
Further, in the de Rham complex, the derivation d is not only defined for 1-forms, but is also defined for functions (0-forms).
The Lie algebra on the dual
A Lie algebra structure on a vector space g is a map [·,·] : g × g → g which is skew-symmetric and satisfies the Jacobi identity. Equivalently, it is a map [·,·] : g ∧ g → g that satisfies the Jacobi identity.
Dually, a Lie coalgebra structure on a vector space E is a linear map d : E → E ⊗ E which is antisymmetric (this means that it satisfies τ ∘ d = −d, where τ : E ⊗ E → E ⊗ E is the canonical flip) and satisfies the so-called cocycle condition (also known as the co-Leibniz rule)
.
Due to the antisymmetry condition, the map d can also be written as a map d : E → E ∧ E.
The dual of the Lie bracket of a Lie algebra g yields a map (the cocommutator)
d = [·,·]* : g* → (g ∧ g)* ≅ g* ∧ g*
where the isomorphism ≅ holds in finite dimension; dually for the dual of Lie comultiplication. In this context, the Jacobi identity corresponds to the cocycle condition.
More explicitly, let E be a Lie coalgebra over a field k of characteristic neither 2 nor 3. The dual space E* carries the structure of a bracket defined by
α([x, y]) = dα(x ∧ y), for all α ∈ E and x, y ∈ E*.
We show that this endows with a Lie bracket. It suffices to check the Jacobi identity. For any and ,
where the latter step follows from the standard identification of the dual of a wedge product with the wedge product of the duals. Finally, this gives
Since , it follows that
, for any , , , and .
Thus, by the double-duality isomorphism (more precisely, by the double-duality monomorphism, since the vector space needs not be finite-dimensional), the Jacobi identity is satisfied.
In particular, note that this proof demonstrates that the cocycle condition is in a sense dual to the Jacobi identity.
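For reference, the following LaTeX sketch summarizes the finite-dimensional duality used above; the pairing notation ⟨·,·⟩ between E and E* is an assumption of this sketch rather than notation taken from the article.

```latex
% Sketch: the coalgebra structure d on E induces the bracket on E^* via the pairing.
\[
  \langle x \wedge y,\; d\alpha \rangle \;=\; \langle [x,y],\; \alpha\rangle ,
  \qquad \alpha \in E,\ x,y \in E^{*}.
\]
% Antisymmetry of the induced bracket follows because d\alpha lies in E \wedge E:
\[
  \langle [y,x],\,\alpha\rangle = \langle y\wedge x,\; d\alpha\rangle
  = -\langle x\wedge y,\; d\alpha\rangle = -\langle [x,y],\,\alpha\rangle .
\]
```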
References
Coalgebras
Lie algebras | Lie coalgebra | [
"Mathematics"
] | 682 | [
"Mathematical structures",
"Algebraic structures",
"Coalgebras"
] |
5,638,081 | https://en.wikipedia.org/wiki/Sage%20oil | Sage oils are essential oils that come in several varieties:
Dalmatian sage oil
Also called English, Garden, and True sage oil. Made by steam distillation of Salvia officinalis partially dried leaves. Yields range from 0.5 to 1.0%. A colorless to yellow liquid with a warm camphoraceous, thujone-like odor and sharp and bitter taste. The main components of the oil are thujone (50%), camphor, pinene, and cineol.
Clary sage oil
Sometimes called muscatel. Made by steam or water distillation of Salvia sclarea flowering tops and foliage. Yields range from 0.7 to 1.5%. A pale yellow to yellow liquid with a herbaceous odor and a winelike bouquet. Produced in large quantities in France, Russia and Morocco. The oil contains linalyl acetate, linalool and other terpene alcohols (sclareol), as well as their acetates.
Spanish sage oil
Made by steam distillation of the leaves and twigs of S. officinalis subsp. lavandulifolia (syn. S. lavandulifolia). A colorless to pale yellow liquid with the characteristic camphoraceous odor. Unlike Dalmatian sage oil, Spanish sage oil contains no or only traces of thujone; camphor and eucalyptol are the major components.
Greek sage oil
Made by steam distillation of Salvia triloba leaves. Grows in Greece and Turkey. Yields range from 0.25% to 4%. The oil contains camphor, thujone, and pinene, the dominant component being eucalyptol.
Judaean sage oil
Made by steam distillation of Salvia judaica leaves. The oil contains mainly cubebene and ledol.
References
Essential oils
Flavors | Sage oil | [
"Chemistry"
] | 391 | [
"Essential oils",
"Natural products"
] |
5,638,621 | https://en.wikipedia.org/wiki/DNA%20footprinting | DNA footprinting is a method of investigating the sequence specificity of DNA-binding proteins in vitro. This technique can be used to study protein-DNA interactions both outside and within cells.
The regulation of transcription has been studied extensively, and yet there is still much that is unknown. Transcription factors and associated proteins that bind promoters, enhancers, or silencers to drive or repress transcription are fundamental to understanding the unique regulation of individual genes within the genome. Techniques like DNA footprinting help elucidate which proteins bind to these associated regions of DNA and unravel the complexities of transcriptional control.
History
In 1978, David J. Galas and Albert Schmitz developed the DNA footprinting technique to study the binding specificity of the lac repressor protein. It was originally a modification of the Maxam-Gilbert chemical sequencing technique.
Method
The simplest application of this technique is to assess whether a given protein binds to a region of interest within a DNA molecule. Polymerase chain reaction (PCR) is used to amplify and label a region of interest that contains a potential protein-binding site; ideally the amplicon is between 50 and 200 base pairs in length. The protein of interest is added to a portion of the labeled template DNA, while another portion is kept without protein for later comparison. A cleavage agent is then added to both portions of the DNA template. The cleavage agent is a chemical or enzyme that cuts at random locations in a sequence-independent manner. The reaction should occur just long enough to cut each DNA molecule in only one location. A protein that specifically binds a region within the DNA template will protect the DNA it is bound to from the cleavage agent. Both samples are then run side by side by polyacrylamide gel electrophoresis. The portion of DNA template without protein will have been cut at random locations and will thus produce a ladder-like distribution on the gel. The DNA template with the protein will produce a ladder distribution with a break in it, the "footprint", where the DNA has been protected from the cleavage agent.
Note: Maxam-Gilbert chemical DNA sequencing can be run alongside the samples on the polyacrylamide gel to allow the prediction of the exact location of ligand binding site.
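As an illustration only (not a laboratory protocol), the following Python sketch simulates the expected "ladder with a gap": each labelled molecule is cut once at a random position, and a hypothetical protein is assumed to protect positions 60–80 of a 150 bp template. The template length, protected window, and molecule count are invented example values.

```python
# Illustrative simulation of a footprinting ladder; all numbers are made up.
import random

template_len = 150                 # length of the labelled template (bp)
protected = range(60, 81)          # region assumed to be covered by the bound protein
random.seed(0)

def single_hit_fragments(n_molecules, protein_bound):
    """Cut each labelled molecule once at a random position; a bound protein
    blocks cuts inside the protected window."""
    fragments = []
    for _ in range(n_molecules):
        while True:
            cut = random.randint(1, template_len - 1)
            if not (protein_bound and cut in protected):
                break
        fragments.append(cut)      # fragment length measured from the labelled end
    return fragments

free = single_hit_fragments(5000, protein_bound=False)   # even ladder
bound = single_hit_fragments(5000, protein_bound=True)   # ladder with a gap ("footprint")
print(sorted(set(free) - set(bound)))                    # positions missing in the bound lane
```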
Labeling
The DNA template is labeled at the 3' or 5' end, depending on the location of the binding site(s). Labels that can be used are radioactivity and fluorescence. Radioactivity has been traditionally used to label DNA fragments for footprinting analysis, as the method was originally developed from the Maxam-Gilbert chemical sequencing technique. Radioactive labeling is very sensitive and is optimal for visualizing small amounts of DNA. Fluorescence is a desirable advancement due to the hazards of using radio-chemicals. However, it has been more difficult to optimize because it is not always sensitive enough to detect the low concentrations of the target DNA strands used in DNA footprinting experiments. Electrophoretic sequencing gels or capillary electrophoresis have been successful in analyzing footprinting of fluorescently tagged fragments.
Cleavage agent
A variety of cleavage agents can be chosen. A desirable agent is one that is sequence-neutral, easy to use, and easy to control. Unfortunately, no available agent meets all of these standards, so an appropriate agent should be chosen according to the DNA sequence and the ligand of interest. The following cleavage agents are described in detail:
DNase I is a large protein that functions as a double-strand endonuclease. It binds the minor groove of DNA and cleaves the phosphodiester backbone. It is a good cleavage agent for footprinting because its size makes it easily physically hindered; it is thus more likely to have its action blocked by a protein bound to a DNA sequence. In addition, the DNase I enzyme is easily controlled by adding EDTA to stop the reaction. There are, however, some limitations in using DNase I. The enzyme does not cut DNA randomly; its activity is affected by local DNA structure and sequence and therefore results in an uneven ladder. This can limit the precision of predicting a protein's binding site on the DNA molecule.
Hydroxyl radicals are created by the Fenton reaction, in which Fe2+ reacts with H2O2 to form free hydroxyl radicals. These radicals react with the DNA backbone, resulting in a break. Due to their small size, the resulting DNA footprint has high resolution. Unlike DNase I, they have no sequence dependence and result in a much more evenly distributed ladder. The negative aspect of using hydroxyl radicals is that they are more time-consuming to use, due to slower reaction and digestion times.
Ultraviolet irradiation can be used to excite nucleic acids and create photoreactions, which results in damaged bases in the DNA strand. Photoreactions can include single-strand breaks, interactions between or within DNA strands, reactions with solvents, or crosslinks with proteins. The workflow for this method has an additional step: once both the protected and unprotected DNA have been treated, the cleaved products undergo primer extension. The extension terminates upon reaching a damaged base, and thus when the PCR products are run side by side on a gel, the protected sample will show an additional band where the DNA was crosslinked with a bound protein. Advantages of using UV are that it reacts very quickly and can therefore capture interactions that are only momentary. Additionally, it can be applied to in vivo experiments, because UV can penetrate cell membranes. A disadvantage is that the gel can be difficult to interpret, since the bound protein does not protect the DNA but merely alters the photoreactions in its vicinity.
Advanced applications
In vivo footprinting
In vivo footprinting is a technique used to analyze the protein-DNA interactions that are occurring in a cell at a given time point. DNase I can be used as a cleavage agent if the cellular membrane has been permeabilized. However the most common cleavage agent used is UV irradiation because it penetrates the cell membrane without disrupting cell state and can thus capture interactions that are sensitive to cellular changes. Once the DNA has been cleaved or damaged by UV, the cells can be lysed and DNA purified for analysis of a region of interest. Ligation-mediated PCR is an alternative method to footprint in vivo. Once a cleavage agent has been used on the genomic DNA, resulting in single strand breaks, and the DNA is isolated, a linker is added onto the break points. A region of interest is amplified between the linker and a gene-specific primer, and when run on a polyacrylamide gel, will have a footprint where a protein was bound. In vivo footprinting combined with immunoprecipitation can be used to assess protein specificity at many locations throughout the genome. The DNA bound to a protein of interest can be immunoprecipitated with an antibody to that protein, and then specific region binding can be assessed using the DNA footprinting technique.
Quantitative footprinting
The DNA footprinting technique can be modified to assess the binding strength of a protein to a region of DNA. Using varying concentrations of the protein for the footprinting experiment, the appearance of the footprint can be observed as the concentrations increase and the proteins binding affinity can then be estimated.
Detection by capillary electrophoresis
To adapt the footprinting technique to updated detection methods, the labelled DNA fragments are detected by a capillary electrophoresis device instead of being run on a polyacrylamide gel. If the DNA fragment to be analyzed is produced by polymerase chain reaction (PCR), it is straightforward to couple a fluorescent molecule such as carboxyfluorescein (FAM) to the primers. This way, the fragments produced by DNaseI digestion will contain FAM, and will be detectable by the capillary electrophoresis machine. Typically, carboxytetramethyl-rhodamine (ROX)-labelled size standards are also added to the mixture of fragments to be analyzed. Binding sites of transcription factors have been successfully identified this way.
Genome-wide assays
Next-generation sequencing has enabled a genome-wide approach to identify DNA footprints. Open chromatin assays such as DNase-Seq and FAIRE-Seq have proven to provide a robust regulatory landscape for many cell types. However, these assays require some downstream bioinformatics analyses in order to provide genome-wide DNA footprints. The computational tools proposed can be categorized in two classes: segmentation-based and site-centric approaches.
Segmentation-based methods are based on the application of Hidden Markov models or sliding window methods to segment the genome into open/closed chromatin region. Examples of such methods are: HINT, Boyle method and Neph method. Site-centric methods, on the other hand, find footprints given the open chromatin profile around motif-predicted binding sites, i.e., regulatory regions predicted using DNA-protein sequence information (encoded in structures such as position weight matrix). Examples of these methods are CENTIPEDE and Cuellar-Partida method.
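A toy sliding-window score can convey the general idea behind segmentation-based footprint detection. The sketch below is not an implementation of HINT, the Boyle or Neph methods, or CENTIPEDE; the window sizes and the synthetic cut-count profile are assumptions chosen purely for illustration.

```python
# Toy sliding-window footprint score over a DNase I cut-count profile.
def footprint_scores(cuts, flank=10, centre=5):
    """Score each position by how depleted its central window is relative to the
    flanking windows; higher scores are more footprint-like."""
    scores = []
    for i in range(flank + centre, len(cuts) - flank - centre):
        c = sum(cuts[i - centre:i + centre + 1]) / (2 * centre + 1)
        left = sum(cuts[i - centre - flank:i - centre]) / flank
        right = sum(cuts[i + centre + 1:i + centre + 1 + flank]) / flank
        scores.append((left + right) / 2 - c)
    return scores

# Synthetic profile: accessible chromatin (high cut counts) with a protected patch.
profile = [8] * 50 + [1] * 15 + [8] * 50
scores = footprint_scores(profile)
print(scores.index(max(scores)) + 15)   # + 15 converts the score index back to a profile position
```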
See also
DNase footprinting
Protein footprinting
Toeprinting assay
References
External links
HINT Website
CENTIPEDE Website
Molecular biology
Laboratory techniques
Molecular biology techniques | DNA footprinting | [
"Chemistry",
"Biology"
] | 1,916 | [
"Biochemistry",
"Molecular biology techniques",
"nan",
"Molecular biology"
] |
5,638,880 | https://en.wikipedia.org/wiki/DEBtox | The DEBtox method for the evaluation of effects of toxicants makes use of the Dynamic Energy Budget (DEB) theory to quantify the effect. See the Organisation for Economic Co-operation and Development (OECD) report, below, for a description of the method.
Toxicants, after they have been taken up by the organism and reached the target site, are assumed to affect one or more metabolic processes as specified in DEB theory. Examples of such processes are the costs for maintenance, assimilation of energy from food, costs for producing somatic tissues, costs for the production of offspring, and hazards to the developing embryo.
A change in a single metabolic process has particular consequences for both growth and reproduction of the organism. Therefore, the specific pattern of growth and reproduction over time provides information about the affected process. In this way, the DEBtox method can be used to explain observed effect patterns over time, as well as the links between effects on body size and reproduction.
A key concept in this method is the determination of the No Effect Concentration. The DEBtox method is able to extract this parameter efficiently from experimental data by making use of knowledge of how effects would show up in the data if they were present. Not all details of DEB theory are used in this method; the hazard model for effects on survival, for instance, hardly uses any detail of DEB theory, but it is fully consistent with how DEB theory deals with aging as a result of the effects of free radicals.
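A minimal numerical sketch of a hazard model with a No Effect Concentration is given below; the rate constants, exposure scenario, and time step are assumptions chosen for illustration and are not taken from the DEBtox documentation.

```python
# Sketch of a DEBtox-style survival (hazard) model with a No Effect Concentration (NEC).
# All parameter values below are illustrative assumptions.
import math

NEC = 1.0        # no-effect concentration (arbitrary units)
kill_rate = 0.2  # proportionality between excess concentration and hazard (1/(conc·day))
ke = 0.5         # elimination rate of one-compartment toxicokinetics (1/day)
c_env = 3.0      # constant external concentration

def survival(t, dt=0.01):
    """Survival probability at time t: hazard accrues only while the scaled
    internal concentration exceeds the NEC."""
    c_int, cumulative_hazard = 0.0, 0.0
    for _ in range(int(t / dt)):
        c_int += ke * (c_env - c_int) * dt                       # one-compartment uptake/elimination
        cumulative_hazard += kill_rate * max(0.0, c_int - NEC) * dt
    return math.exp(-cumulative_hazard)

print(round(survival(4.0), 3))
```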
References
External links
DEBtox information site
Toxicology tests | DEBtox | [
"Environmental_science"
] | 315 | [
"Toxicology tests",
"Toxicology"
] |
5,638,908 | https://en.wikipedia.org/wiki/Far-western%20blot | The far-western blot, or far-western blotting, is a molecular biological method based on the technique of western blot to detect protein-protein interaction in vitro. Whereas western blot uses an antibody probe to detect a protein of interest, far-western blot uses a non-antibody probe which can bind the protein of interest. Thus, whereas western blotting is used for the detection of certain proteins, far-western blotting is employed to detect protein/protein interactions.
Method
In conventional western blot, gel electrophoresis is used to separate proteins from a sample; these proteins are then transferred to a membrane in a 'blotting' step. In a western blot, specific proteins are then identified using an antibody probe.
Far-western blot employs non-antibody proteins to probe the protein of interest on the blot. In this way, binding partners of the probe (or the blotted) protein may be identified. The probe protein is often produced in E. coli using an expression cloning vector.
The probe protein can then be visualized through the usual methods — it may be radiolabelled; it may bear a specific affinity tag like His or FLAG for which antibodies exist; or there may be a protein specific antibody (to the probe protein).
Because cell extracts are usually completely denatured by boiling in detergent before gel electrophoresis, this approach is most useful for detecting interactions that do not require the native folded structure of the protein of interest.
References
External links
Overview at piercenet.com
Overview at utoronto.ca
Molecular biology techniques
Protein methods | Far-western blot | [
"Chemistry",
"Biology"
] | 332 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Molecular biology stubs",
"Molecular biology techniques",
"Molecular biology"
] |
5,638,919 | https://en.wikipedia.org/wiki/Tetracyanoethylene | Tetracyanoethylene (TCNE) is an organic compound with the formula (NC)2C=C(CN)2. It is a colorless solid, although samples are often off-white. It is an important member of the cyanocarbons.
Synthesis and reactions
TCNE is prepared by brominating malononitrile in the presence of potassium bromide to give the KBr-complex, and dehalogenating with copper.
Oxidation of TCNE with hydrogen peroxide gives the corresponding epoxide, which has unusual properties.
In the presence of base, TCNE reacts with malononitrile to give salts of pentacyanopropenide:
Redox chemistry
TCNE is an electron acceptor. Cyano groups have low-energy π* orbitals, and the presence of four such groups, with their π systems conjugated to the central double bond, gives rise to an electrophilic alkene. TCNE is reduced at −0.27 V vs ferrocene/ferrocenium to give its radical anion: TCNE + e− ⇌ TCNE•−
Because of its ability to accept an electron, TCNE has been used to prepare numerous charge-transfer salts and magnetic molecular materials.
The central C=C distance in TCNE is 135 pm. Upon reduction, this bond elongates to 141–145 pm, depending on the counterion.
Safety
TCNE hydrolyzes in moist air to give hydrogen cyanide and should be handled accordingly.
References
Alkene derivatives
Nitriles
Superconductors | Tetracyanoethylene | [
"Chemistry",
"Materials_science"
] | 299 | [
"Superconductivity",
"Nitriles",
"Functional groups",
"Superconductors"
] |
5,638,989 | https://en.wikipedia.org/wiki/Ovulation%20induction | Ovulation induction is the stimulation of ovulation by medication. It is usually used in the sense of stimulation of the development of ovarian follicles to reverse anovulation or oligoovulation.
Scope
The term ovulation induction can potentially also be used for:
Final maturation induction, in the sense of triggering oocyte release from relatively mature ovarian follicles during late follicular phase. In any case, ovarian stimulation (in the sense of stimulating the development of oocytes) is often used in conjunction with triggering oocyte release, such as for proper timing of artificial insemination.
Controlled ovarian hyperstimulation (stimulating the development of multiple follicles of the ovaries in one single cycle), has also appeared in the scope of ovulation induction. Controlled ovarian hyperstimulation is generally part of in vitro fertilization, and the aim is generally to develop multiple follicles (optimally between 11 and 14 antral follicles measuring 2–8 mm in diameter), followed by transvaginal oocyte retrieval, co-incubation, followed by embryo transfer of a maximum of two embryos at a time.
The treatment for an underlying disease in cases where anovulation or oligoovulation is secondary to that disease (such as an endocrine disease). For example, weight loss results in significant improvement in pregnancy and ovulation rates in anovulatory obese women.
However, this article focuses on medical ovarian stimulation, during early to mid-follicular phase, without subsequent in vitro fertilization, with the aim of developing one or two ovulatory follicles (the maximum number before recommending sexual abstinence).
Indications
Ovulation induction helps reversing anovulation or oligoovulation, that is, helping women who do not ovulate on their own regularly, such as those with polycystic ovary syndrome (PCOS).
Regimen alternatives
The main alternatives for ovulation induction medications are:
Antiestrogen, causing an inhibition of the negative feedback of estrogen on the pituitary gland, resulting in an increase in secretion of follicle-stimulating hormone. Medications in use for this effect are mainly clomifene citrate and tamoxifen (both being selective estrogen-receptor modulators), as well as letrozole (an aromatase inhibitor).
Follicle-stimulating hormone, directly stimulating the ovaries. In women with anovulation, it may be an alternative after 7 to 12 attempted cycles of antiestrogens (as evidenced by clomifene citrate), since the latter ones are less expensive and easier to control.
Antiestrogens
Clomifene citrate
Clomifene citrate (Clomid is a common brand name) is the medication which is most commonly used to treat anovulation. It is a selective estrogen-receptor modulator, affecting the hypothalamic–pituitary–gonadal axis to respond as if there was an estrogen deficit in the body, in effect increasing the production of follicle-stimulating hormone. It is relatively easy and convenient to use. Clomifene appears to inhibit estrogen receptors in hypothalamus, thereby inhibiting negative feedback of estrogen on production of follicle-stimulating hormone. It may also result in direct stimulation of the hypothalamic–pituitary axis. It also has an effect on cervical mucus quality and uterine mucosa, which might affect sperm penetration and survival, hence its early administration during the menstrual cycle. Clomifene citrate is a very efficient ovulation inductor, and has a success rate of 67%. Nevertheless, it only has a 37% success rate in inducing pregnancy. This difference may be due to the anti-estrogenic effect which clomifene citrate has on the endometrium, cervical mucus, uterine blood flow, as well as the resulting decrease in the motility of the fallopian tubes and the maturation of the oocytes.
Letrozole
Letrozole has been used for ovarian stimulation by fertility doctors since 2001 because it has fewer side-effects than clomiphene and less chance of multiple gestation. A study of 150 babies following treatment with letrozole or letrozole and follicle-stimulating hormone presented at the American Society of Reproductive Medicine 2005 Conference found no difference in overall abnormalities but did find a significantly higher rate of locomotor and cardiac abnormalities among the group having taken letrozole compared to natural conception. A larger, follow-up study with 911 babies compared those born following treatment with letrozole to those born following treatment with clomiphene. That study also found no significant difference in the rate of overall abnormalities, but found that congenital cardiac anomalies was significantly higher in the clomiphene group compared to the letrozole group.
Dosage is generally 2.5 to 7.5 mg daily over 5 days. A higher dose of up to 12.5 mg per day results in increased follicular growth and a higher number of predicted ovulations, without a detrimental effect on endometrial thickness, and is considered in those who do not respond adequately to a lower dose.
Tamoxifen
Tamoxifen affects estrogen receptors in a similar fashion as clomifene citrate. It is often used in the prevention and treatment of breast cancer. It can therefore also be used to treat patients that have a reaction to clomifene citrate.
Follicle-stimulating hormone
Preparations of follicle-stimulating hormone mainly include those derived from the urine of menopausal women, as well as recombinant preparations. The recombinant preparations are more pure and more easily administered, but they are more expensive. The urinary preparations are equally effective and less expensive, but are not as convenient to administer as they are available in vials versus injection pens.
Gonadotropin-releasing hormone pump
The gonadotropin-releasing hormone pump is used to release doses in a pulsatile fashion. This hormone is synthesised by the hypothalamus and induces the secretion of follicle-stimulating hormone by the pituitary gland. Gonadotropin-releasing hormone must be delivered in a pulsatile fashion to imitate the natural pulsatile secretion of the hypothalamus in order to stimulate the pituitary into secreting luteinizing hormone and follicle-stimulating hormone. The gonadotropin-releasing hormone pump is the size of a cigarette box and has a small catheter. Unlike other treatments, using the gonadotropin-releasing hormone pump usually does not result in multiple pregnancies. Filicori from the University of Bologna suggests that this might be because gonadotrophins are absent when the treatment is initiated, and therefore the hormones released by the pituitary (luteinizing hormone and follicle-stimulating hormone) can still take part in the retro-control of gonadotrophin secretion, mimicking the natural cycle. This treatment can also be used for underweight and/or anorexic patients; it has also been used in certain cases of hyperprolactinemia.
National and regional usage
In the Nordic countries, letrozole is practically the standard initial regimen used for ovulation induction, since no formulation of clomifene is registered for use there.
India banned the usage of letrozole in 2011, citing potential risks to infants. In 2012, an Indian parliamentary committee said that the drug controller office colluded with letrozole's makers to approve the drug for infertility in India.
Technique
Although there are many possible additional diagnostic and interventional techniques, protocols for ovulation induction generally consist of:
Determining the first day of the last menstruation, which is termed day 1. In case of amenorrhea, a period can be induced by intake of an oral progestin for 10 days.
Daily administration of the ovulation induction regimen, starting on day 3, 4, or 5, and it is usually taken for 5 days.
Sexual intercourse or artificial insemination by the time of ovulation.
Ultrasonography
During ovulation induction, it is recommended to start at a low dose and monitor the ovarian response with transvaginal ultrasound, including discernment of the number of developing follicles. Initial exam is most commonly started 4–6 days after last pill. Serial transvaginal ultrasound can reveal the size and number of developing follicles. It can also provide presumptive evidence of ovulation such as sudden collapse of the preovulatory follicle, and an increase in fluid volume in the rectouterine pouch. After ovulation, it may reveal signs of luteinization such as loss of clearly defined follicular margins and appearance of internal echoes.
Supernumerary follicles
A cycle with supernumerary follicles is usually defined as one where there are more than two follicles >16 mm in diameter. It is generally recommended to have such cycles cancelled because of the risk of multiple pregnancy (see also the "Risks and side effects" section below). In cancelled cycles, the woman or couple should be warned of the risks in case of supernumerary follicles, and should avoid sexual intercourse or use contraception until the next menstruation. Induction of final maturation (such as done with hCG) may need to be withheld because of increased risk of ovarian hyperstimulation syndrome. The starting dose of the inducing drug should be reduced in the next cycle.
Alternatives to cancelling a cycle are mainly:
Aspiration of supernumerary follicles until one or two remain.
Converting the protocol to IVF treatment with embryo transfer of up to two embryos only.
Selective fetal reduction. This alternative confers a high risk of complications.
Proceeding with any multiple pregnancy without fetal reduction, with the ensuing risk of complications. This alternative is not recommended.
Lab tests
The following laboratory tests may be used to monitor induced cycles:
Serum estradiol levels, starting 4–6 days after last pill
Adequacy of the luteinizing hormone (LH) surge, assessed by urine LH tests 3 to 4 days after the last clomifene pill
Post-coital test 1–3 days before ovulation to check whether there are at least 5 progressive sperm per HPF
Mid-luteal progesterone, with at least 10 ng/ml 7–9 days after ovulation being regarded as adequate.
Final maturation induction
Final maturation induction and release, such as by human chorionic gonadotropin (HCG or hCG) or recombinant luteinizing hormone, results in a predictable time of ovulation, with the interval from drug administration to ovulation depending on the type of drug. This avails for sexual intercourse or intrauterine insemination to conveniently be scheduled at ovulation, the most likely time to achieve pregnancy.
As evidenced by clomifene-induced cycles, however, triggering oocyte release has been shown to decrease pregnancy chances compared to frequent monitoring with LH surge tests. Therefore, in such cases, triggering oocyte release is best reserved for women who require intrauterine insemination and in whom luteinizing hormone monitoring proves difficult or unreliable. It may also be used when luteinizing hormone monitoring has not shown a luteinizing hormone surge by cycle day 18 (where cycle day 1 is the first day of the preceding menstruation) and there is an ovarian follicle of over 20 mm in size.
Repeat cycles
Ovulation induction can be repeated every menstrual cycle. For clomifene, the dosage may be increased by 50-mg increments in subsequent cycles until ovulation is achieved. However, at a dosage of 200 mg, further increments are unlikely to increase pregnancy chances.
It is not recommended by the manufacturer of clomifene to use it for more than 6 consecutive cycles. In women with anovulation, 7–12 attempted cycles of pituitary feedback regimens (as evidenced by clomifene citrate) are recommended before switching to gonadotrophins, since the latter ones are more expensive and less easy to control.
It is no longer recommended to perform an ultrasound examination to exclude any significant residual ovarian enlargement before each new treatment cycle.
Risks and side effects
Ultrasound and regular hormone checks mitigate risks throughout the process. However, there are still some risks with the procedure.
Ovarian hyperstimulation syndrome occurs in 5–10% of cases. Symptoms depend on whether the case is mild, moderate, or severe, and can range from bloating and nausea, through to shortness of breath, pleural effusion, and excessive weight gain (more than 2 pounds per day).
Multiple pregnancy
There is also the risk that more than one egg is produced, leading to twins or triplets. Women with polycystic ovary syndrome may be particularly at risk. Multiple pregnancy occurs in approximately 15–20% of cases following cycles induced with gonadotrophins such as human menopausal gonadotropin and follicle-stimulating hormone. The risks associated with multiple pregnancy are much higher than singleton pregnancy; incidence of perinatal death is seven times higher in triplet births and five times higher in twin births than the risks associated with a singleton pregnancy. It is therefore important to adapt the treatment to each individual patient. If more than one or two ovulatory follicles are detected on ultrasonography, sexual abstinence is recommended.
Alternatives
Other treatments for anovulation are mainly:
Weight loss: Obese women are less fertile in both natural and ovulation induction cycles and have higher rates of miscarriage than their counterparts of normal weight; they also require higher doses of ovulation-inducing agents. Weight loss results in significant improvement in pregnancy and ovulation rates in such patients.
In vitro fertilization, including controlled ovarian hyperstimulation.
In vitro maturation is the technique of letting ovarian follicles mature in vitro, and it can potentially be an alternative both to anovulation reversal and to triggering oocyte release. Rather, oocytes can mature outside the body, such as prior to IVF. Hence, no gonadotropins (or at least a lower dose) have to be injected into the body. However, there is still not enough evidence to prove the effectiveness and safety of the technique.
Laparoscopic ovarian drilling: This 'update' of ovarian wedge resection employs a unipolar coagulating current or puncture of the ovarian surface with a laser in four to ten places to a depth of 4±10 mm on each ovary.
References
Fertility medicine
Assisted reproductive technology | Ovulation induction | [
"Biology"
] | 3,156 | [
"Assisted reproductive technology",
"Medical technology"
] |
5,639,039 | https://en.wikipedia.org/wiki/One-compartment%20kinetics | One-compartment kinetics for a chemical compound specifies that the uptake in the compartment is proportional to the concentration outside the compartment, and the elimination is proportional to the concentration inside the compartment. Both the compartment and the environment outside the compartment are considered to be homogeneous (well mixed). The compartment typically represents some organism (e.g. a fish or a daphnid).
This model is used in the simplest versions of the DEBtox method for the quantification of effects of toxicants.
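A minimal numerical sketch of the model is shown below, assuming a constant external concentration and arbitrary rate constants; these values are illustrative and not taken from any data set.

```python
# One-compartment kinetics: uptake proportional to the external concentration,
# elimination proportional to the internal one. All values are illustrative.
k_uptake = 0.8       # uptake rate constant (1/day)
k_elim = 0.4         # elimination rate constant (1/day)
c_env = 10.0         # external (environmental) concentration, assumed constant

def internal_concentration(t, dt=0.001, c0=0.0):
    """Integrate dC/dt = k_uptake * c_env - k_elim * C with a simple Euler step."""
    c = c0
    for _ in range(int(t / dt)):
        c += (k_uptake * c_env - k_elim * c) * dt
    return c

# At steady state the internal concentration approaches (k_uptake / k_elim) * c_env = 20.
print(round(internal_concentration(20.0), 2))
```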
References
"One-compartment kinetics." British Journal of Anaesthetics. 1992 Oct;69(4):387-96.
Biochemistry | One-compartment kinetics | [
"Chemistry",
"Biology"
] | 135 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry",
"nan"
] |
5,639,053 | https://en.wikipedia.org/wiki/Luigi%20G.%20Napolitano%20Award | The Luigi G. Napolitano Award is presented every year at the International Astronautical Congress. Luigi Gerardo Napolitano was an engineer, scientist and professor.
The award has been presented annually since 1993, to a young scientist, below 30 years of age, who has contributed significantly to the advancement of the aerospace science and has given a paper at the International Astronautical Congress on the contribution.
The Luigi G. Napolitano Award is donated by the Napolitano family and it consists of the Napolitano commemorative medal and a certificate of citation, and is presented by the Education Committee of the IAF.
The International Academy of Astronautics awards the Luigi Napolitano Book Award annually.
Winners
1993 Shin-ichi Nishizawa
1994 Ralph D. Lorenz
1995 O.G.Liepack
1996 W. Tang
1997 G.W.R. Frenken
1998 Michael Donald Ingham
1999 Chris Blanksby
2000 Frederic Monnaie
2001 Noboru Takeichi
2002 Stefano Ferreti
2003 Veronica de Micco
2004 Julie Bellerose
2005 Nicola Baggio
2006 Carlo Menon
2007 Paul Williams
2008 Giuseppe Del Gaudio
2009 Daniel Kwom
2010 Andrew Flasch
2011 Nishchay Mhatre
2012 Valerio Carandente
2013 Sreeja Nag
2014 Alessandro Golkar
2015 Koki Ho
2016 Melissa Mirino
2017 Akshata Krishnamurthy
2018 Peter Z. Schulte
2019 Hao Chen
2020 Elizabeth Barrios
2021 Federica Angeletti
2022 Julia Briden
See also
List of engineering awards
List of physics awards
List of space technology awards
External links
International Astronautical Federation
Award winners IAA
2020 Award winner
2021 Award winner
2022 Award winner
Napolitano
Space-related awards | Luigi G. Napolitano Award | [
"Technology",
"Engineering"
] | 347 | [
"Aerospace engineering awards",
"Space-related awards",
"Science award stubs",
"Aerospace engineering",
"Science and technology awards",
"Physics awards"
] |
11,022,873 | https://en.wikipedia.org/wiki/List%20of%20character%20tables%20for%20chemically%20important%203D%20point%20groups | This lists the character tables for the more common molecular point groups used in the study of molecular symmetry. These tables are based on the group-theoretical treatment of the symmetry operations present in common molecules, and are useful in molecular spectroscopy and quantum chemistry. Information regarding the use of the tables, as well as more extensive lists of them, can be found in the references.
Notation
For each non-linear group, the tables give the most standard notation of the finite group isomorphic to the point group, followed by the order of the group (number of invariant symmetry operations). The finite group notation used is: Zn: cyclic group of order n, Dn: dihedral group isomorphic to the symmetry group of an n–sided regular polygon, Sn: symmetric group on n letters, and An: alternating group on n letters.
The character tables then follow for all groups. The rows of the character tables correspond to the irreducible representations of the group, with their conventional names, known as Mulliken symbols, in the left margin. The naming conventions are as follows:
A and B are singly degenerate representations, with the former transforming symmetrically around the principal axis of the group, and the latter asymmetrically. E, T, G, H, ... are doubly, triply, quadruply, quintuply, ... degenerate representations.
g and u subscripts denote symmetry and antisymmetry, respectively, with respect to a center of inversion. Subscripts "1" and "2" denote symmetry and antisymmetry, respectively, with respect to a nonprincipal rotation axis. Higher numbers denote additional representations with such asymmetry.
Single prime ( ' ) and double prime ( '' ) superscripts denote symmetry and antisymmetry, respectively, with respect to a horizontal mirror plane σh, one perpendicular to the principal rotation axis.
All but the two rightmost columns correspond to the symmetry operations which are invariant in the group. In the case of sets of similar operations with the same characters for all representations, they are presented as one column, with the number of such similar operations noted in the heading.
The body of the tables contains the characters in the respective irreducible representations for each respective symmetry operation, or set of symmetry operations. The symbol i used in the body of the table denotes the imaginary unit: i² = −1. Used in a column heading, it denotes the operation of inversion. A superscripted uppercase "C" denotes complex conjugation.
The two rightmost columns indicate which irreducible representations describe the symmetry transformations of the three Cartesian coordinates (x, y and z), rotations about those three coordinates (Rx, Ry and Rz), and functions of the quadratic terms of the coordinates (x², y², z², xy, xz, and yz).
A further column is included in some tables, such as those of Salthouse and Ware; this extra column relates to cubic functions, which may be used in applications regarding f orbitals in atoms.
Character tables
Nonaxial symmetries
These groups are characterized by a lack of a proper rotation axis, noting that a 1-fold rotation is considered the identity operation. These groups have involutional symmetry: the only nonidentity operation, if any, is its own inverse.
In the C1 group, all functions of the Cartesian coordinates and rotations about them transform as the A irreducible representation.
Cyclic symmetries
The families of groups with these symmetries have only one rotation axis.
Cyclic groups (Cn)
The cyclic groups are denoted by Cn. These groups are characterized by an n-fold proper rotation axis Cn. The C1 group is covered in the nonaxial groups section.
Reflection groups (Cnh)
The reflection groups are denoted by Cnh. These groups are characterized by i) an n-fold proper rotation axis Cn; ii) a mirror plane σh normal to Cn. The C1h group is the same as the Cs group in the nonaxial groups section.
Pyramidal groups (Cnv)
The pyramidal groups are denoted by Cnv. These groups are characterized by i) an n-fold proper rotation axis Cn; ii) n mirror planes σv which contain Cn. The C1v group is the same as the Cs group in the nonaxial groups section.
Improper rotation groups (Sn)
The improper rotation groups are denoted by Sn. These groups are characterized by an n-fold improper rotation axis Sn, where n is necessarily even. The S2 group is the same as the Ci group in the nonaxial groups section. Sn groups with an odd value of n are identical to Cnh groups of same n and are therefore not considered here (in particular, S1 is identical to Cs).
The S8 table reflects the 2007 discovery of errors in older references. Specifically, (Rx, Ry) transform not as E1 but rather as E3.
Dihedral symmetries
The families of groups with these symmetries are characterized by 2-fold proper rotation axes normal to a principal rotation axis.
Dihedral groups (Dn)
The dihedral groups are denoted by Dn. These groups are characterized by i) an n-fold proper rotation axis Cn; ii) n 2-fold proper rotation axes C2 normal to Cn. The D1 group is the same as the C2 group in the cyclic groups section.
{| class="wikitable" style="text-align:center"
! PointGroup !! Canonicalgroup !!Order !! Character Table
|-
| D2 || Z2 × Z2(=D2) || 4
| align="left" |
|-
| D3 || D3 || 6
| align="left" |
{| class="wikitable" style="text-align:center"
| || E || 2 C3
| 3 C'''2 || colspan="2" |
|-
| A1 || 1 || 1 || 1 ||
| x2 + y2, z2
|-
| A2 || 1 || 1 || −1 || Rz, z ||
|-
| E || 2 || −1 || 0 || (Rx, Ry), (x, y)
| (x2 − y2, xy), (xz, yz)
|-
|}
|-
| D4 || D4 || 8
| align="left" |
|-
| D5 || D5 || 10
| align="left" |
|-
| D6 || D6 || 12
| align="left" |
|-
|}
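To illustrate how such a table is used (a standard textbook application in molecular spectroscopy and quantum chemistry, given here as an illustrative sketch rather than material from this article; the reducible representation Γ below is invented for the example), a reducible representation can be decomposed into irreducible representations with the reduction formula, where h is the order of the group, g(R) the number of operations in each class, χ(R) the character of Γ, and χ_i(R) the character of the i-th irreducible representation:

    n_i = \frac{1}{h} \sum_R g(R)\, \chi(R)\, \chi_i(R)

For a hypothetical representation in D3 (h = 6) with characters Γ = (3, 0, 1) under (E, 2C3, 3C2):

    n_{A_1} = \tfrac{1}{6}\left[1\cdot 3\cdot 1 + 2\cdot 0\cdot 1 + 3\cdot 1\cdot 1\right] = 1
    n_{A_2} = \tfrac{1}{6}\left[1\cdot 3\cdot 1 + 2\cdot 0\cdot 1 + 3\cdot 1\cdot(-1)\right] = 0
    n_{E}   = \tfrac{1}{6}\left[1\cdot 3\cdot 2 + 2\cdot 0\cdot(-1) + 3\cdot 1\cdot 0\right] = 1

so that Γ = A1 ⊕ E.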
Prismatic groups (Dnh)
The prismatic groups are denoted by Dnh. These groups are characterized by i) an n-fold proper rotation axis Cn; ii) n 2-fold proper rotation axes C2 normal to Cn; iii) a mirror plane σh normal to Cn and containing the C2s. The D1h group is the same as the C2v group in the pyramidal groups section.
The D8h table reflects the 2007 discovery of errors in older references. Specifically, symmetry operation column headers 2S8 and 2S83 were reversed in the older references.
Antiprismatic groups (Dnd)
The antiprismatic groups are denoted by Dnd. These groups are characterized by i) an n-fold proper rotation axis Cn; ii) n 2-fold proper rotation axes C2 normal to Cn; iii) n mirror planes σd which contain Cn. The D1d group is the same as the C2h group in the reflection groups section.
Polyhedral symmetries
These symmetries are characterized by having more than one proper rotation axis of order greater than 2.
Cubic groups
These polyhedral groups are characterized by not having a C5 proper rotation axis.
{| class="wikitable" style="text-align:center"
! PointGroup !! Canonicalgroup !! Order !! Character Table
|-
| T || A4 || 12
| align="left" |
|-
| Td || S4 || 24
| align="left" |
|-
| Th || Z2×A4 || 24
| align="left" |
|-
| O || S4 || 24
| align="left" |
{| class="wikitable"
| || E
| 6 C4
| 3 C2 (C42)
| 8 C3 || 6 C'2
| colspan="2" |
|-
| A1 || 1 || 1 || 1 || 1 || 1 ||
| x2 + y2 + z2
|-
| A2 || 1 || −1 || 1 || 1 || −1 || ||
|-
| E || 2 || 0 || 2 || −1 || 0 ||
| (2 z2 − x2 − y2, x2 − y2)
|-
| T1 || 3 || 1 || −1 || 0 || −1
| (Rx, Ry, Rz), (x, y, z)
|
|-
| T2 || 3 || −1 || −1 || 0 || 1
| || (xy, xz, yz)
|-
|}
|-
| Oh
| Z2×S4 || 48
| align="left" |
|-
|}
Icosahedral groups
These polyhedral groups are characterized by having a C5 proper rotation axis.
Linear (cylindrical) groups
These groups are characterized by having a proper rotation axis C∞ around which the symmetry is invariant to any rotation.
See also
Linear combination of atomic orbitals (molecular orbital method)
Raman spectroscopy
Vibrational spectroscopy (molecular vibration)
List of small groups
Cubic harmonics
Notes
External links
Character Tables for Point Groups used in Chemistry. gernot-katzers-spice-pages.com (includes symmetry transformations of Cartesian products up to sixth order)
Further reading
Theoretical chemistry
Physical chemistry
Group theory
Finite groups
Spectroscopy | List of character tables for chemically important 3D point groups | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,129 | [
"Mathematical structures",
"Applied and interdisciplinary physics",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Finite groups",
"Group theory",
"Spectroscopy",
"Theoretical chemistry",
"Fields of abstract algebra",
"Algebraic structures",
"nan",
"Physical c... |
11,023,071 | https://en.wikipedia.org/wiki/Arxula%20adeninivorans | Arxula adeninivorans (Blastobotrys adeninivorans) is a dimorphic yeast with unusual characteristics. The first description of A. adeninivorans was provided in the mid-eighties. The species was initially designated as Trichosporon adeninovorans. After the first identification in the Netherlands, strains of this species were later on also found in Siberia and in South Africa in soil and in wood hydrolysates. Recently, A. adeninivorans was renamed as Blastobotrys adeninivorans after a detailed phylogenetic comparison with other related yeast species. However, many scientists desire to maintain the popular name A. adeninivorans.
Characteristics
All A. adeninivorans strains share unusual biochemical activities, being able to assimilate a range of amines, adenine (hence the name A. adeninivorans) and several other purine compounds as sole energy and carbon source. They all share properties like nitrate assimilation, and they are thermo-tolerant (they can grow at elevated temperatures). A special feature of biotechnological impact is a temperature-dependent dimorphism: at temperatures above 42 °C a reversible transition from budding cells to mycelial forms is induced, and budding is re-established when the cultivation temperature is decreased below this threshold.
Biotechnological potential
The unusual characteristics described above render A. adeninivorans very attractive for biotechnological applications. On the one hand, it is a source for many enzymes with interesting properties and the respective genes, for instance glucoamylase, tannase, lipase, phosphatases and many others. On the other hand, it is a very robust and safe organism that can be genetically engineered to produce foreign proteins. Suitable host strains can be transformed with plasmids. The basic design of such plasmids is similar to that described under Hansenula polymorpha and yeast expression platforms.
Here are two special examples of recombinant strains and their application: in both cases several plasmids with different foreign product genes were introduced into the yeast. In a first case this recombinant yeast strain acquired the capability to produce natural plastics, namely PHA (polyhydroxyalkanoates). For this purpose a new synthetic pathway had to be transferred into this organism consisting of three enzymes. The respective genes phbA, phbB and phbC were isolated from the bacterium Ralstonia eutropha and integrated into plasmids. These plasmids were introduced into the organism. The resulting recombinant strain was able to produce the plastic material.
In the second example, a biosensor for the detection of estrogenic activities in wastewater has been developed. In this case the way estrogens act in nature was mimicked. A gene for the human estrogen receptor alpha (hERalpha), contained on a first plasmid, was introduced. The protein encoded by this gene recognizes and binds estrogens. The estrogen/receptor complex then binds to and activates a second gene, contained on a second plasmid. In this case the sequence of a reporter gene (whose product can be easily monitored by simple assays) was fused to a control sequence (a promoter) responsive to the estrogen/receptor complex. Such strains can be cultured in the presence of wastewater, and the estrogens present in such samples can be easily quantified by the amount of the reporter gene product.
References
Gellissen G (ed) (2005) Production of recombinant proteins - novel microbial and eukaryotic expression systems. Wiley-VCH, Weinheim.
Yeasts
Fungi described in 1984
Fungi of Africa
Fungi of Europe
Fungus species | Arxula adeninivorans | [
"Biology"
] | 776 | [
"Yeasts",
"Fungi",
"Fungus species"
] |
11,023,489 | https://en.wikipedia.org/wiki/Yeast%20expression%20platform | A yeast expression platform is a strain of yeast used to produce large amounts of proteins, sugars or other compounds for research or industrial uses. While yeast are often more resource-intensive to maintain than bacteria, certain products can only be produced by eukaryotic cells like yeast, necessitating use of a yeast expression platform. Yeasts differ in productivity and with respect to their capabilities to secrete, process and modify proteins. As such, different types of yeast (i.e. different expression platforms) are better suited for different research and industrial applications.
Products
Since the onset of genetic engineering, a number of microorganisms have been developed for the production of biological products. These products are used in medicine and industry to create pharmaceuticals like hepatitis B vaccines or insulin. Common platforms for the development of medicine and other products include the bacterium E. coli, and several yeasts and mammalian cells (including, notably, Chinese hamster ovary cells). In general a microorganism used as an expression platform has to meet several criteria: it should be able to grow rapidly in large containers, produce proteins in an efficient way (i.e. with minimal resource input), be safe and, in the case of pharmaceuticals, it should produce and modify the products to be as ready for human consumption as possible.
Strains used
Yeasts are common hosts for the production of proteins from recombinant DNA. They offer relatively easy genetic manipulation and rapid growth to high cell densities on inexpensive media. As eukaryotes, they are able to perform protein modifications like glycosylation which are common in eukaryotic cells, but relatively rare in bacteria. Due to this, yeast can produce complex proteins that are identical or very similar to native products from plants or mammals. The first yeast expression platform was based on the baker's yeast Saccharomyces cerevisiae. However, since then a variety of yeast expression platforms have been studied and are widely used for various applications based on their different characteristics and capabilities. For instance, some of them grow on a wide range of carbon sources and are not restricted to glucose, as is the case with baker's yeast. Several of them are also applied to genetic engineering and to the production of foreign proteins.
Arxula adeninivorans
Arxula adeninivorans (also called Blastobotrys adeninivorans) is a dimorphic yeast, meaning it grows as a budding yeast up to a temperature of 42 °C, but as a filamentous form at higher temperatures. A. adeninivorans has unusual biochemical characteristics. It can grow on a wide range of substrates and can assimilate nitrate. Strains of A. adeninivorans have been developed that can produce natural plastics, and have been involved in the development of a biosensor for estrogens in environmental samples.
Candida boidinii
Candida boidinii is a yeast notable for its ability to grow on methanol (called methylotrophism). Like other methylotrophic species such as Hansenula polymorpha and Pichia pastoris, it is used as a platform for the production of foreign proteins. Yields in a multigram range of a secreted foreign protein have been reported.
A computational method, IPRO, recently predicted mutations that experimentally switched the cofactor specificity of Candida boidinii xylose reductase from NADPH to NADH.
Ogataea polymorpha
Ogataea polymorpha (synonyms Hansenula polymorpha or Pichia angusta) is another methylotrophic yeast (see Candida boidinii). It can grow on a wide range of other substrates; it is thermo-tolerant and can assimilate nitrate (see also Kluyveromyces lactis). It has been applied to the production of hepatitis B vaccines, insulin and interferon alpha-2a for the treatment of hepatitis C, as well as to a range of technical enzymes.
Kluyveromyces lactis
Kluyveromyces lactis is a yeast regularly used for the production of kefir. It can grow on several sugars, most importantly on lactose which is present in milk and whey. It has successfully been applied among others to the production of chymosin (an enzyme that is usually present in the stomach of calves) for the production of cheese. Production takes place in fermenters on a 40,000 L scale.
Pichia pastoris
Pichia pastoris is a methylotrophic yeast (see Candida boidinii and Hansenula polymorpha). It provides an efficient platform for the production of foreign proteins. Platform elements are available as a kit, and it is used worldwide in academia for the production of proteins. Strains have been engineered that can produce complex human N-glycans (yeast glycans are similar but not identical to those found in humans).
Saccharomyces cerevisiae
Saccharomyces cerevisiae is the traditional baker’s yeast used widely in brewing and baking. Often the collective term “yeast” is used for this single species. As an expression platform it has successfully been applied to the production of technical enzymes and of pharmaceuticals like insulin and hepatitis B vaccines.
Yarrowia lipolytica
Yarrowia lipolytica is a dimorphic yeast (see Arxula adeninivorans) that can grow on a wide range of substrates. As such, it has a high potential for industrial applications but there are no recombinant products commercially available yet.
Use
The various yeast expression platforms differ in several characteristics, including their productivity and their capabilities to secrete, process and modify particular proteins. However, all expression platforms are used in broadly similar ways.
In order to produce a desired product, suitable yeast strains are transformed with a vector that contains all necessary genetic elements for production of a biological product of interest. Vectors must also contain a selection marker, which is required to select yeast which have successfully taken up the vector from those which have not. Vectors also contain certain DNA elements allowing the yeast to incorporate the foreign DNA into its chromosome and to replicate it. Most importantly, vectors contain a segment responsible for the production of the desired compound, called an expression cassette. The cassette contains a sequence of regulatory elements that control how much and under which circumstances a certain product is eventually made. This is followed by the gene for the biological product itself. The expression cassette is terminated by a terminator sequence that stops the transcription of the expressed gene.
References
Gellissen G (ed) (2005) Production of recombinant proteins - novel microbial and eukaryotic expression systems. Wiley-VCH, Weinheim.
Yeasts | Yeast expression platform | [
"Biology"
] | 1,404 | [
"Yeasts",
"Fungi"
] |
11,023,988 | https://en.wikipedia.org/wiki/TSX-32 | TSX-32 has been a general purpose 32-bit multi-user multitasking operating system for x86 architecture platform, with a command line user interface. It is compatible with some 16-bit DOS applications and supports file systems FAT16 and FAT32. It was developed by S&H Computer Systems, and has been available since 1989.
DEC-oriented columnist Kevin G. Barkes noted that TSX-32 is "not a port of the PDP-11 TSX-Plus" and that it runs
well on 386, 486 and Pentium-based systems. He reported a limitation: since it supports the MS-DOS FAT file system, filenames are 8.3.
TSX-Plus
An earlier non-DEC operating system, also from S&H, was named TSX-Plus. Released in 1980, TSX-Plus was the successor to TSX, released in 1976.
The strength of TSX-Plus was that it simultaneously provided multiple users with the services of DEC's single-user RT-11. Depending on the PDP-11 model and the amount of memory, the system could support a minimum of 12 users (14–18 users on a 2 MB 11/73, depending on workload). A productivity feature called "virtual lines" "allows a single user to control several tasks from a single terminal."
History
S&H wrote the original TSX because spending $25K on a computer that could only support one user "bugged" founder Harry Sanders; the outcome was the initial four-user TSX in 1976.
For TSX-32, they said in an interview, "We started with a clean sheet of paper" rather than starting with a "port."
As of 2021, it appears to be defunct.
VAX
The company's product line was ported/expanded for the VAX line.
See also
Multiuser DOS Federation
References
External links
TSX-32 official description page
X86 operating systems
DOS variants
1989 software | TSX-32 | [
"Technology"
] | 410 | [
"Operating system stubs",
"Computing stubs"
] |
11,024,056 | https://en.wikipedia.org/wiki/Anti-torpedo%20bulge | The anti-torpedo bulge (also known as an anti-torpedo blister) is a form of defence against naval torpedoes occasionally employed in warship construction in the period between the First and Second World Wars. It involved fitting (or retrofitting) partially water-filled compartmentalized sponsons on either side of a ship's hull, intended to detonate torpedoes, absorb their explosions, and contain flooding to damaged areas within the bulges.
Application
Essentially, the bulge is a compartmentalized, below the waterline sponson isolated from the ship's internal volume. It is part air-filled, and part free-flooding. In theory, a torpedo strike will rupture and flood the bulge's outer air-filled component while the inner water-filled part dissipates the shock and absorbs explosive fragments, leaving the ship's main hull structurally intact. Transverse bulkheads within the bulge limit flooding to the damaged area of the structure.
The bulge was developed by the British Director of Naval Construction, Eustace Tennyson-D'Eyncourt, who had four old Edgar-class protected cruisers so fitted in 1914. These ships were used for shore bombardment duties, and so were exposed to inshore submarine and torpedo boat attack. Grafton was torpedoed in 1917, and apart from a few minor splinter holes, the damage was confined to the bulge and the ship safely made port. Edgar was hit in 1918; this time damage to the elderly hull was confined to dented plating.
The Royal Navy had all new construction fitted with bulges from 1914, beginning with the Revenge-class battleships and Renown-class battlecruisers. It also had its large monitors fitted with enormous bulges. This was fortunate for Terror, which survived three torpedoes striking the hull forward, and for her sister Erebus, which survived a direct hit from a remotely-controlled explosive motor boat that ripped off part of her bulge. On the other hand, the bulges nearly led to a disaster in Dover Harbour on 11 September 1918. Glatton caught fire in her cordite magazine and had the potential to explode in proximity to a loaded ammunition ship. The admiral on hand ordered the monitor scuttled to prevent a catastrophic explosion. The first attempt to do so with torpedoes failed due to the protective effect of the bulges. Half an hour later, a larger, more powerful torpedo was able to sink Glatton by striking the hole caused by the initial, ineffective hit.
Older ships also had bulges incorporated during refit, such as the U.S. Navy's , laid down during World War I and retrofitted 1929–31. Japan's Yamashiro had them added in 1930.
Later designs of bulges incorporated various combinations of air and water filled compartments and packing of wood and sealed tubes. As bulges increased a ship's beam, they caused a reduction in speed, which is a function of the length-to-beam ratio. Therefore, various combinations of narrow and internal bulges appeared throughout the 1920s and into the 1930s. The external bulge had disappeared from construction in the 1930s, being replaced by internal arrangements of compartments with a similar function. An additional reason for the bulges' obsolescence was advances in torpedo design. In particular, deployment of magnetic pistol and magnetic proximity fuze in the early 1940s allowed torpedoes to run beneath a target's hull and explode there, beyond the bulges, rather than needing to strike the side of the ship directly. However, older ships were still being fitted with new external bulges through World War II, particularly US ships. In some cases this was to restore buoyancy to compensate for wartime weight additions, as well as for torpedo protection.
See also
Torpedo belt, a later development of torpedo defense system. Essentially a torpedo bulge built on the inside of the hull so as to not protrude and cause unnecessary drag.
Torpedo bulkhead
Torpedo net, earlier torpedo defense system – far more effective, but could only be used whilst stationary.
Spaced armor, a similar concept used primarily on tanks and armored cars.
Footnotes
Citations
Bibliography
External links
St. Petersburg Daily Times – Mar 2, 1919 – Blister stops explosion of sub torpedoes
A detailed discussion of the evolution of Torpedo defense systems ~WWII
Naval armour
Naval architecture
Anti-submarine warfare | Anti-torpedo bulge | [
"Engineering"
] | 877 | [
"Naval architecture",
"Marine engineering"
] |
11,024,236 | https://en.wikipedia.org/wiki/AOS/VS%20II | AOS/VS II is a discontinued operating system for the Data General 32-bit MV/Eclipse computers.
Overview
The AOS/VS II operating system was released in 1988 and was originally to be simply rev 8.00 of the AOS/VS operating system. However, it introduced a new file system which was not compatible with the original AOS and AOS/VS file system and also contained new features like Access control list (ACL) groups. Since some customers did not want to upgrade to the new file system, or invest in new hardware, Data General agreed to continue bug-fix support of an “immortal” revision of AOS/VS, which became known colloquially as AOS/VS “Classic”, while new development would proceed as AOS/VS II, with revision numbers rolled back to 1.00.
Both VS-Classic (rev 7.7x) and VS-II (rev. 3.2x) were updated to survive the Year 2000 problem, although by this time both were obsolescent.
Among the other new features that were part of AOS/VS II were a full TCP/IP stack, NFS support, expanded kernel address space using ring-1 and a logical disk-level user data cache. /VS (classic) had a file system metadata cache, but no user data cache.
See also
Data General RDOS
Proprietary operating systems
Data General | AOS/VS II | [
"Technology"
] | 288 | [
"Operating system stubs",
"Computing stubs"
] |
11,024,418 | https://en.wikipedia.org/wiki/M2%204.2-inch%20mortar | The M2 4.2-inch mortar was a U.S. rifled 4.2-inch (107 mm) mortar used during the Second World War, the Korean War, and the Vietnam War. It entered service in 1943. It was nicknamed the "Goon Gun" (from its large bullet-shaped shells, monopod, and rifled bore) or the "Four-Deuce" (from its bore size in inches). In 1951, it began to be phased out in favor of the M30 mortar of the same caliber.
History
The first 4.2-inch mortar in U.S. service was introduced in 1928 and was designated the M1 chemical mortar. Development began in 1924 from the British 4-inch (102 mm) Mk I smooth-bore mortar. The addition of rifling increased the caliber to 4.2 inches (107 mm). The M1 fired chemical shells to a shorter maximum range than the later M2. It was ostensibly meant to fire only smoke shells, as the postwar peace lobby opposed military spending on explosive or poison gas shells.
The M2 could be disassembled into three parts (tube, standard, and baseplate) to allow it to be carried by its crew. The mortar tube had a screw-in cap at the bottom containing a built-in fixed firing pin. The standard was a recoiling hydraulic monopod that could be adjusted for elevation. The baseplate had long handles on either side to make it easier to carry.
Upon the entry of the United States into World War II, the U.S. Army decided to develop a high explosive round for the mortar so that it could be used in a fragmentation role against enemy personnel. To extend the range, more propellant charge was used and parts of the mortar were strengthened; further improvements eventually extended the range again. The modified mortar was redesignated the M2. The M2 was first used in the Sicilian Campaign, where some 35,000 rounds of ammunition were fired from the new weapon. Subsequently, the mortar proved to be an especially useful weapon in areas of rough terrain such as mountains and jungle, into which artillery pieces could not be moved. The M2 was gradually replaced in U.S. service from 1951 by the M30 mortar.
Starting in December 1942, the US Army experimented with self-propelled mortar carriers. Two pilot vehicles based on the M3A1 halftrack were built, designated Mortar Carriers T21 and T21E1. The program was cancelled in 1945.
Before the invasion of Peleliu in September, 1944, the U.S. Navy mounted three mortars each on the decks of four Landing Craft Infantry and designated them LCI(M). They provided useful fire support in situations where conventional naval gunfire, with its flat trajectory, was not effective. Increased numbers of LCI(M) were used in the invasions of the Philippines and Iwo Jima. Sixty LCI(M) were used during the invasion of Okinawa and adjoining islands with Navy personnel operating the mortars.
Tactical organization
The 4.2-inch mortars were employed by chemical mortar battalions. Each battalion was authorized forty-eight M2 mortars organized into four companies with three four-tube platoons. Between December 1944 and February 1945, the battalions' Companies D were inactivated to organize additional battalions. In World War II, an infantry division was often supported by one or two chemical mortar companies with twelve mortars each. In some instances an entire battalion was attached to a division. In the Korean War, an organic heavy mortar company of eight mortars was assigned to each infantry regiment, while Marine regiments had a mortar company with twelve mortars.
Ammunition
The M2 has a rifled barrel, unusual for a mortar. Thus its ammunition lacks stabilizing tailfins common to most mortars.
The mortar's M3 high explosive (HE) shell carried an explosive charge that placed it between the M1 105-mm HE shell and the M102 155-mm HE shell in terms of blast effect. The mortar could also fire white phosphorus-based smoke shells and mustard gas shells. The official designation of the latter was Cartridge, Mortar, 4.2-inch. Mustard gas was not used in these wars, and the U.S. ended up with a large number of these shells, declaring over 450,000 of them in stockpile in 1997 when the Chemical Weapons Treaty came into force. Destruction efforts to eliminate this stockpile are continuing, with a few of these aged shells occasionally found to be leaking.
Users
See also
Weapons of comparable role, performance and era
ML 4.2-inch mortar – British mortar.
107mm M1938 mortar – Soviet mortar.
Notes
Notes
References
Infantry Weapons of the KOREAN WAR Mortars: 4.2-inch M2 Mortar
History of the 4.2-inch mortar
Jane's Infantry Weapons 1984–1985, Ian Hogg (ed.), London: Jane's Publishing Company Ltd., 1984. .
Army Service Forces Catalog CW 11-1
External links
Popular Science, April 1940, Army's Smoke Throwers early detailed article on 4.2 mortar
Adding Firepower to Infantry: The 4.2-Inch Chemical Mortar – by Christopher Miskimon, courtesy of the Warfare History Network
World War II infantry weapons of the United States
World War II artillery of the United States
Infantry mortars
107 mm artillery
Mortars of the United States
Chemical weapons of the United States
Chemical weapon delivery systems
World War II mortars
Weapons and ammunition introduced in 1943 | M2 4.2-inch mortar | [
"Chemistry"
] | 1,100 | [
"Chemical weapon delivery systems",
"Chemical weapons"
] |
11,024,738 | https://en.wikipedia.org/wiki/Slitless%20spectroscopy | Slitless spectroscopy is spectroscopy done without a small slit to allow only light from a small region to be diffracted. It works best in sparsely populated fields, as it spreads each point source out into its spectrum, and crowded fields can be too confused to be useful for some applications. It also faces the problem that for extended sources, nearby emission lines will overlap. This technique is a basic form of snapshot hyperspectral imaging. Slitless spectroscopy is used for astronomical surveys and in fields, such as solar physics, where time evolution is important. Both types of application benefit from higher speed operation of a slitless spectrograph: conventional spectrographs require multiple exposures, scanning the slit across the target, to acquire a complete spectral image, while a slitless spectrograph can capture a complete image plane in one exposure.
The Crossley telescope utilized a slitless spectrograph that was originally employed by Nicholas Mayall.
The Henry Draper Catalogue, published 1924, contains stellar classifications for hundreds of thousands of stars, based on spectra taken with the objective prism method at Harvard College Observatory. The work of classification was led initially by Williamina Fleming and later by Annie Jump Cannon, with contributions from many other female astronomers including Florence Cushman.
Slitless spectrographs encounter an unusual form of specular reflection at the grating, which leads to anisotropic image distortion called Littrow expansion or compression. The distortion occurs because the normal rules of specular reflection do not apply to reflective gratings operated far from the non-dispersive reflection angle.
See also
Echelle grating
Fine Guidance Sensor and Near Infrared Imager and Slitless Spectrograph (JWST component)
References
Astronomical spectroscopy | Slitless spectroscopy | [
"Physics",
"Chemistry"
] | 350 | [
"Astronomical spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)",
"Astrophysics"
] |
11,025,494 | https://en.wikipedia.org/wiki/Internationalization%20Tag%20Set | The Internationalization Tag Set (ITS) is a set of attributes and elements designed to provide internationalization and localization support in XML documents.
The ITS specification identifies concepts (called "ITS data categories") which are important for internationalization and localization. It also defines implementation of these concepts through a set of elements and attributes grouped in the ITS namespace. XML developers can use this namespace to integrate internationalization features directly into their own XML schemas and documents.
Overview
ITS v1.0 includes seven data categories:
Translate: Defines what parts of a document are translatable or not.
Localization Note: Provides alerts, hints, instructions, or other information to help the localizers or the translators.
Terminology: Indicates which parts of the documents are terms and optionally points to information about these terms.
Directionality: Indicates what type of display directionality should be applied to parts of the document.
Ruby: Indicates what parts of the document should be displayed as ruby text. (Ruby is a short run of text alongside a base text, typically used in East Asian documents to indicate pronunciation or to provide a brief annotation).
Language Information: Identifies the language of the different parts of the document.
Elements Within Text: Indicates how elements should be treated with regard to linguistic segmentation.
The vocabulary is designed to address two different aspects: first, by providing markup that can be used directly in XML documents; second, by offering a way to indicate whether parts of a given markup scheme correspond to some of the ITS data categories and should be treated as such by ITS processors.
ITS applies to new document types as well as existing ones. It also applies both to markup without any internationalization features and to documents that already support internationalization- or localization-related functions.
ITS can be specified using global rules and local rules.
The global rules are expressed anywhere in the document (embedded global rules), or even outside the document (external global rules), using the its:rules element.
The local rules are expressed by specialized attributes (and sometimes elements) specified inside the document instance, at the location where they apply.
Examples
Translate data category
In the following ITS markup example, the elements and attributes with the its prefix are part of the ITS namespace. The its:rules element lists the different rules to apply to this file. There is one its:translateRule rule that indicates that any content inside the head element should not be translated.
The its:translate attributes used on some elements override the global rule: here, they make the content of title translatable and the text "faux pas" non-translatable.
<text xmlns:its="http://www.w3.org/2005/11/its">
<head>
<revision>2006-09-10 v5</revision>
<author>Gerson Chicareli</author>
<contact>someone@example.com</contact>
<title
its:translate="yes">The Origins of Modern Novel</title>
<its:rules version="1.0">
<its:translateRule translate="no" selector="/text/head"/>
</its:rules>
</head>
<body>
<div xml:id="intro">
<head>Introduction</head>
<p>It would certainly be quite a <span its:translate="no">faux
pas</span> to start a dissertation on the origin of modern novel without
mentioning the <tl>HKLM of GFDL</tl>...</p>
</div>
</body>
</text>
Localization Note data category
In the following ITS markup example, the its:locNoteRule element specifies that any node corresponding to the XPath expression "//msg/data" has an associated note. The location of that note is expressed by the locNotePointer attribute, which holds a relative XPath expression pointing to the node where the note is, here "../notes".
Note also the use of the its:translate attribute to mark the notes elements as non-translatable.
<Res xmlns:its="http://www.w3.org/2005/11/its">
<prolog>
<its:rules version="1.0">
<its:translateRule selector="//msg/notes" translate="no"/>
<its:locNoteRule locNoteType="description" selector="//msg/data" locNotePointer="../notes"/>
</its:rules>
</prolog>
<body>
<msg id="FileNotFound">
<notes>Indicates that the resource file {0} could not be loaded.</notes>
<data>Cannot find the file {0}.</data>
</msg>
<msg id="DivByZero">
<notes>A division by 0 was going to be computed.</notes>
<data>Invalid parameter.</data>
</msg>
</body>
</Res>
ITS limitations
ITS does not have a solution to all XML internationalization and localization issues.
One reason is that version 1.0 does not have data categories for everything. For example, there is currently no way to indicate a source/target relation in bilingual files, where some parts of a document store the source text and other parts the corresponding translation.
The other reason is that many aspects of internationalization cannot be resolved with markup. This is due to the design of the DTD or the schema itself. There are best practices and design and authoring guidelines that help make sure documents are correctly internationalized and easy to localize. For example, using attributes to store translatable text is a bad idea for many different reasons, but ITS cannot prevent an XML developer from making such a choice.
Some of the ITS 1.0 limitations are being addressed in the version 2.0: See http://www.w3.org/TR/its20/ for more details.
References
External links
Internationalization Tag Set (ITS) Version 1.0
Internationalization Tag Set (ITS) Version 2.0
W3C Internationalization Home
Best Practices for XML Internationalization
List of ITS 1.0 implementations and articles about ITS
List of ITS 2.0 implementations
XML
Markup languages
World Wide Web Consortium standards
Technical communication
Computer file formats
Open formats
Internationalization and localization | Internationalization Tag Set | [
"Technology"
] | 1,369 | [
"Natural language and computing",
"Internationalization and localization"
] |
11,025,540 | https://en.wikipedia.org/wiki/Piezomagnetism | Piezomagnetism is a phenomenon observed in some antiferromagnetic and ferrimagnetic crystals. It is characterized by a linear coupling between the system's magnetic polarization and mechanical strain. In a piezomagnetic material, one may induce a spontaneous magnetic moment by applying mechanical stress, or a physical deformation by applying a magnetic field.
Piezomagnetism differs from the related property of magnetostriction: if an applied magnetic field is reversed in direction, the piezomagnetic strain produced changes sign. Additionally, a non-zero piezomagnetic moment can be produced by mechanical strain alone, at zero field, which is not true of magnetostriction. According to the Institute of Electrical and Electronics Engineers (IEEE): "Piezomagnetism is the linear magneto-mechanical effect analogous to the linear electromechanical effect of piezoelectricity. Similarly, magnetostriction and electrostriction are analogous second-order effects. These higher-order effects can be represented as effectively first-order when variations in the system parameters are small compared with the initial values of the parameters".
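Written schematically (a conventional constitutive form given here for illustration, not a statement taken from the sources above; the tensor symbol Λ, stress σ, strain ε and field H are standard notation), the linear coupling can be expressed with a third-rank piezomagnetic tensor:

    M_i = \Lambda_{ijk}\, \sigma_{jk}        % magnetization induced by mechanical stress
    \varepsilon_{jk} = \Lambda_{ijk}\, H_i   % strain induced by a magnetic field

By contrast, the magnetostrictive strain varies quadratically with the applied field, which is why reversing the field reverses the piezomagnetic strain but leaves the magnetostrictive strain unchanged.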
The piezomagnetic effect is made possible by an absence of certain symmetry elements in a crystal structure; specifically, symmetry under time reversal forbids the property.
The first experimental observation of piezomagnetism was made in 1960, in the fluorides of cobalt and manganese.
The strongest piezomagnet known is uranium dioxide, with magnetoelastic memory switching at magnetic fields near 180,000 Oe at temperatures below 30 kelvins.
References
Magnetic ordering
Transducers | Piezomagnetism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 334 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
11,025,622 | https://en.wikipedia.org/wiki/Airline%20reservations%20system | Airline reservation systems (ARS) are systems that allow an airline to sell their inventory (seats). It contains information on schedules and fares and contains a database of reservations (or passenger name records) and of tickets issued (if applicable). ARSs are part of passenger service systems (PSS), which are applications supporting the direct contact with the passenger.
ARS eventually evolved into the computer reservations system (CRS). A computer reservation system is used for the reservations of a particular airline and interfaces with a global distribution system (GDS) which supports travel agencies and other distribution channels in making reservations for most major airlines in a single system.
Overview
Airline reservation systems incorporate airline schedules, fare tariffs, passenger reservations and ticket records. An airline's direct distribution works within its own reservation system, as well as pushing out information to the GDS. The second type of direct distribution channel is consumers who use the internet or mobile applications to make their own reservations. Travel agencies and other indirect distribution channels access the same GDS as the airline reservation systems, and all messaging is transmitted by a standardized messaging system built on two message types carried over SITA's high level network (HLN). These message types are called Type A (usually EDIFACT format) for real-time interactive communication and Type B (TTY) for informational and booking messages. Message construction standards, set by IATA and ICAO, are global and apply to more than air transportation. Since airline reservation systems are business-critical applications and functionally quite complex, the operation of an in-house airline reservation system is relatively expensive.
Prior to deregulation, airlines owned their own reservation systems with travel agents subscribing to them. Today, the GDS are run by independent companies with airlines and travel agencies being major subscribers.
As of February 2009, there are only a few major GDS providers in the market: Amadeus, Travelport (which operates the Apollo, Worldspan and Galileo systems), Sabre, InteliSys Aviation (which owns ameliaRES PSS) and Shares. There is one major Regional GDS, Abacus, serving the Asian market and a number of regional players serving single countries, including Travelsky (China), ORS (Russia), Infini and Axess (both Japan) and Topas (South Korea). Of these, Infini is hosted within the Sabre complex, Axess is in the process of moving into a partition within the Worldspan complex, and Topas agencies will be migrating into Amadeus.
Reservation systems may host "ticket-less" airlines and "hybrid" airlines that use e-ticketing in addition to ticket-less to accommodate code-shares and interlines.
In addition to these "standardized" GDS, some airlines have proprietary versions which they use to run their flight operations. A few examples are Delta's OSS and Deltamatic systems and EDS SHARES. SITA Reservations remains the largest neutral multi-host airline reservations system, with over 100 airlines currently managing inventory.
Inventory management
In the airline industry, available seats are commonly referred to as inventory. The inventory of an airline is generally classified into service classes (e.g. economy, premium economy, business or first class) and any number of fare classes, to which different prices and booking conditions may apply. Fare classes are complicated and vary from airline to airline; they are often indicated by a one-letter code. The meanings of these codes are not often known by the passenger, but they convey information to airline staff; for example, a code may indicate that a ticket was fully paid, discounted, or purchased through a loyalty scheme. Some seats may not be available for open sale, but reserved, for example, for connecting-flight or loyalty scheme passengers. Overbooking is also a common practice, and is an exception to inventory management principles. One of the core functions of inventory management is inventory control, which monitors how many seats are available in the different fare classes and manages sales by opening and closing individual fare classes.
A flight schedule management system forms the foundation of the inventory management system. Besides other functions, it is critical for ticket sales, crew member assignments, aircraft maintenance, airport coordination, and connections to partner airlines. The schedule system monitors what and when aircraft will be available on particular routes, and their internal configuration. Inventory data is imported and maintained from the schedule distribution system. Changes to aircraft availability would immediately impact the available seats of the fleet, as well as the seats which had been sold.
The price for each sold seat is determined by a combination of the fares and booking conditions stored in the fare quote system. In most cases, inventory control has a real-time interface to an airline's yield management system to support a permanent optimization of the offered booking classes in response to changes in demand or pricing strategies of competitors.
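As a simplified illustration of this open/close logic, the sketch below is hypothetical: the protection scheme, booking codes and seat counts are invented for the example and are not taken from any actual reservation system.

    # Hypothetical sketch of fare-class inventory control: a class stays open
    # only while enough unsold seats remain to honour the number of seats
    # protected for higher-value classes by yield management.
    from dataclasses import dataclass

    @dataclass
    class FareClass:
        code: str        # one-letter booking code, e.g. "Q", "M", "Y"
        protection: int  # seats reserved for higher-value classes

    def open_classes(capacity: int, sold: int, classes: list[FareClass]) -> list[str]:
        """Return the booking codes currently open for sale."""
        remaining = capacity - sold
        open_codes = []
        for fare_class in classes:
            if remaining > fare_class.protection:
                open_codes.append(fare_class.code)
        return open_codes

    # Example: a 180-seat aircraft with 150 seats already sold.
    classes = [FareClass("Q", 40), FareClass("M", 20), FareClass("Y", 0)]
    print(open_classes(capacity=180, sold=150, classes=classes))  # ['M', 'Y']

In this toy case the cheapest class ("Q") is already closed because fewer seats remain than are protected for the more expensive classes, mirroring how availability is adjusted as demand builds.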
Availability display and reservation (PNR)
Users access an airline's inventory through an availability display. It contains all offered flights for a particular city-pair with their available seats in the different booking classes. This display contains flights which are operated by the airline itself as well as code-share flights which are operated in co-operation with another airline. If the city pair is not one on which the airline offers service, it may display a connection using its own flights or display the flights of other airlines. The availability of seats of other airlines is updated through standard industry interfaces. Depending on the type of co-operation, it supports access to the last seat (last seat availability) in real time. Reservations for individual passengers or groups are stored in a so-called passenger name record (PNR). Among other data, the PNR contains personal information such as name, contact information or special service requests (SSRs), e.g. for a vegetarian meal, as well as the flights (segments) and issued tickets. Some reservation systems also allow customer data to be stored in profiles to avoid data re-entry each time a new reservation is made for a known passenger. In addition, most systems have interfaces to CRM systems or customer loyalty applications (also known as frequent traveler systems). Before a flight departs, the so-called passenger name list (PNL) is handed over to the departure control system that is used to check in passengers and baggage. Reservation data such as the number of booked passengers and special service requests is also transferred to flight operations systems, crew management and catering systems. Once a flight has departed, the reservation system is updated with the check-in results, so that passengers who had a reservation but did not check in (no-shows) and passengers who checked in but did not have a reservation (go-shows) can be identified. Finally, data needed for revenue accounting and reporting is handed over to administrative systems.
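A PNR can be pictured as a small structured record. The sketch below is a hypothetical, heavily simplified layout (all field names and values are invented for illustration and do not reflect the format of any particular GDS):

    # Hypothetical, simplified passenger name record (PNR) structure.
    from dataclasses import dataclass, field

    @dataclass
    class Segment:
        flight: str         # marketing flight number, e.g. "XX123"
        origin: str         # IATA airport code
        destination: str
        date: str           # departure date, ISO format
        booking_class: str  # one-letter fare class

    @dataclass
    class PNR:
        record_locator: str                               # six-character reference
        passengers: list[str]                             # surname/first name pairs
        contact: str
        segments: list[Segment] = field(default_factory=list)
        ssrs: list[str] = field(default_factory=list)     # special service requests, e.g. "VGML"
        tickets: list[str] = field(default_factory=list)  # e-ticket numbers

    pnr = PNR(
        record_locator="ABC123",
        passengers=["DOE/JANE"],
        contact="jane.doe@example.com",
        segments=[Segment("XX123", "LHR", "JFK", "2024-06-01", "M")],
        ssrs=["VGML"],  # vegetarian meal request
    )
    print(pnr.record_locator, len(pnr.segments))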
Fare quote and ticketing
The Fares data store contains fare tariffs, rule sets, routing maps, class of service tables, and some tax information that construct the price – "the fare". Rules like booking conditions (e.g. minimum stay, advance purchase, etc.) are tailored differently between different city pairs or zones, and each fare is assigned a class of service corresponding to its appropriate inventory bucket. Inventory control can also be manipulated manually through the availability feeds, dynamically controlling how many seats are offered for a particular price by opening and closing particular classes.
The compiled set of fare conditions is called a fare basis code. There are two systems set up for the interchange of fares data — ATPCO and SITA, plus some system to system direct connects. This system distributes the fare tariffs and rule sets to all GDSs and other subscribers. Every airline employs staff who code air fare rules in accordance with yield management intent. There are also revenue managers who watch fares as they are filed into the public tariffs and make competitive recommendations. Inventory control is typically manipulated from here, using availability feeds to open and close classes of service.
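To make the idea of machine-evaluated booking conditions concrete, the following is a hypothetical check of two common fare rules (advance purchase and minimum stay); the rule values and function name are invented for illustration and do not correspond to any real tariff or fare basis code.

    from datetime import date

    def fare_rules_satisfied(booking: date, outbound: date, inbound: date,
                             advance_purchase_days: int = 14,
                             minimum_stay_days: int = 3) -> bool:
        """Return True if the itinerary satisfies the fare's booking conditions."""
        advance_ok = (outbound - booking).days >= advance_purchase_days
        stay_ok = (inbound - outbound).days >= minimum_stay_days
        return advance_ok and stay_ok

    # Example: booked 20 days ahead with a 5-night stay satisfies both rules.
    print(fare_rules_satisfied(date(2024, 5, 1), date(2024, 5, 21), date(2024, 5, 26)))  # True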
The role of the ticketing complex is to issue and store electronic ticket records and the very small number of paper tickets that are still issued. A miscellaneous charges order (MCO) is still a paper document; as of 2010, IATA working groups were defining its replacement, the electronic multipurpose document (EMD). The electronic ticket information is stored in a database containing the data that historically was printed on a paper ticket, including items such as the ticket number, the fare and tax components of the ticket price or exchange rate information. In the past, airlines issued paper tickets; since 2008, IATA has been supporting a resolution to move to 100% electronic ticketing. So far, the industry has not been able to comply due to various technological and international limitations. The industry is at 98% electronic ticket issuance today, although electronic processing for MCOs was not available in time for the IATA mandate.
Notable systems
History
American Airlines and the Teleregister Company developed a number of automated airline booking systems known as Reservisor. The first version was an electromechanical version of the flight boards introduced for the "sell and report" system, and was installed in American's Boston reservation office in February 1946. These simple vacuum-tube and electromechanical computers were based on telephone switching systems made by Teleregister.
American Airlines introduced an electronic reservations system, the Magnetronic Reservisor, in 1952. By the late 1950s, the airline wanted a system that would allow real-time access to flight details in all of its offices, along with the integration and automation of its booking and ticketing processes.
The first computerized booking system was the little-known ReserVec, developed by Ferranti Canada for Trans-Canada Air Lines (today's Air Canada). Deliveries started in April 1961, and by January 24, 1963 the airline had completed the switch-over from its manual systems.
Shortly after, in 1962, another computerized reservation system began to be delivered to United Airlines. One of the largest computer systems of its time, it controlled 60 cities through a communication network that provided one-second response times. Developed by Evelyn Berezin at the Teleregister Company, it brought the Reservisor line into the transistor era, making it fully electronic.
In 1964, American Airlines developed the Sabre (Semi-Automated Business Research Environment) using IBM hardware. Sabre's breakthrough was its ability to keep inventory correct in real time, accessible to agents around the world.
The deregulation of the airline industry under the Airline Deregulation Act meant that airlines, which had previously operated under government-set fares ensuring they at least broke even, now needed to improve efficiency to compete in a free market. In this deregulated environment, the ARS and its descendants became vital to the travel industry.
See also
USAS (application)
List of global distribution systems
Further reading
Winston, Clifford, "The Evolution of the Airline Industry", Brookings Institution Press, 1995. . Cf. p. 61–62, Computer Reservation Systems.
Wardell, David J, "Airline Reservation Systems", 1991. Research paper.
References
Airline reservation system: All you need to know
Reservation System, Airline
Travel technology | Airline reservations system | [
"Technology"
] | 2,259 | [
"Computer reservation systems",
"Computer systems"
] |
11,025,873 | https://en.wikipedia.org/wiki/Carl%20Sagan%20Award%20for%20Public%20Appreciation%20of%20Science | The Carl Sagan Award for Public Understanding of Science is an award presented by the Council of Scientific Society Presidents (CSSP) to individuals who have become “concurrently accomplished as researchers and/or educators, and as widely recognized magnifiers of the public's understanding of science.” The award was first presented in 1993 to astronomer Carl Sagan (1934–1996), who is also the award's namesake.
Winners
1993: Carl Sagan, Laboratory for Planetary Studies, Cornell University
1994: E. O. Wilson, Curator, Museum of Comparative Zoology, Harvard University
1995: National Geographic Society and National Geographic Magazine: Gilbert Hovey Grosvenor and William Allen
1996: PBS Nova and Paula Apsell
1997: Bill Nye, Bill Nye the Science Guy
1998: Alan Alda, John Angier, Graham Chedd, PBS Scientific American Frontiers
1999: Richard Harris; Ira Flatow, National Public Radio
2000: John Rennie, Scientific American
2001: John Noble Wilford, "Science Times" of the New York Times
2002: Philip G. Zimbardo, PBS Discovering Psychology
2003: Island Press
2004: Popular Science
2005: Cheryl Heuton and Nicolas Falacci, creators of Numb3rs
2006: Court TV
2007: Kenneth R. Weiss and Usha Lee McFarling, Los Angeles Times
2009: Thomas Friedman, The New York Times
2010: Sylvia Earle, National Geographic Society
2013: Bassam Shakhashiri, American Chemical Society
2017: Charles Bolden, Former Administrator – National Aeronautics and Space Administration
2018: Steven Pinker
2019: William S. Hammack
References
External links
Carl Sagan Award for Public Understanding of Science
Carl Sagan Award for Public Appreciation of Science
Science communication awards
Carl Sagan | Carl Sagan Award for Public Appreciation of Science | [
"Technology"
] | 352 | [
"Science and technology awards",
"Science communication awards"
] |
11,025,882 | https://en.wikipedia.org/wiki/Pan-American%20Journal%20of%20Aquatic%20Sciences | The Pan-American Journal of Aquatic Sciences is a peer-reviewed open access scientific journal. It covers research on all aspects of the aquatic sciences. Articles are published in English, Spanish, or Portuguese.
Abstracting and indexing
The journal is abstracted and indexed in Aquatic Sciences and Fisheries Abstracts and Scopus.
References
External links
Ecology journals | Pan-American Journal of Aquatic Sciences | [
"Environmental_science"
] | 70 | [
"Environmental science journals",
"Ecology journals",
"Environmental science journal stubs"
] |
11,026,058 | https://en.wikipedia.org/wiki/LG%20Black%20Zafiro%20%28MG810%29 | The LG MG810 (a.k.a. The LG Black Zafiro) is a mobile phone manufactured by LG Electronics. This phone is the GSM version of the phone commonly known as the Chocolate Flip. This clam shell style phone has touch sensitive music controls on the top, similar to the keypad used for navigation in the LG Chocolate series.
External links
Feature listing of the Black Zafiro
KE970
| LG Black Zafiro (MG810) | [
"Technology"
] | 106 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
11,026,242 | https://en.wikipedia.org/wiki/Traumatic%20insemination | Traumatic insemination, also known as hypodermic insemination, is the mating practice in some species of invertebrates in which the male pierces the female's abdomen with his aedeagus and injects his sperm through the wound into her abdominal cavity (hemocoel). The sperm diffuses through the female's hemolymph, reaching the ovaries and resulting in fertilization.
The process is detrimental to the female's health. It creates an open wound which impairs the female until it heals, and is susceptible to infection. The injection of sperm and ejaculatory fluids into the hemocoel can also trigger an immune reaction in the female. Bed bugs, which reproduce solely by traumatic insemination, have evolved a pair of sperm-receptacles, known as the spermalege. It has been suggested that the spermalege reduces the direct damage to the female bed bug during traumatic insemination. However experiments found no conclusive evidence for that hypothesis; as of 2003, the preferred explanation for that organ is hygienic protection against bacteria.
The evolutionary origins of traumatic insemination are disputed. Although it evolved independently in many invertebrate species, traumatic insemination is most highly adapted and thoroughly studied in bed bugs, particularly Cimex lectularius. Traumatic insemination is not limited to male-female couplings, or even couplings of the same species. Both homosexual and inter-species traumatic inseminations have been observed.
Mechanics
In humans and other vertebrates, blood and lymph circulate in two different systems, the circulatory system and lymphatic system, which are enclosed by systems of capillaries, veins, arteries, and nodes. This is known as a closed circulatory system. Insects, however, have an open circulatory system in which blood and lymph circulate unenclosed, and mix to form a substance called hemolymph. All organs of the insect are bathed in hemolymph, which provides oxygen and nutrients to all of the insect's organs.
Following traumatic insemination, sperm can migrate through the hemolymph to the female's ovaries, resulting in fertilization. The exact mechanics vary from taxon to taxon. In some orders of insects, the male genitalia (paramere) enters the female's genital tract, and a spine at its tip pierces the wall of the female's bursa copulatrix. In others, the male penetrates the outer body wall. In either case, following penetration, the male ejaculates into the female. The sperm and ejaculatory fluids diffuse through the female's hemolymph. The insemination is successful if the sperm reach the ovaries and fertilize an ovum.
Female resistance to traumatic insemination varies from one species to another. Females from some genera, including Cimex, are passive prior to and during traumatic insemination. Females in other genera resist mating and attempt to escape. This resistance may not be an aversion to pain caused by the insemination, as observational evidence suggests that insects do not feel pain.
Research into the paternity of offspring produced by traumatic insemination has found "significant" last-sperm precedence. That is, the last male to traumatically inseminate a female tends to sire most of the offspring from that female.
Evolutionary adaptation
Many reasons for the evolutionary adaptation of traumatic insemination as a mating strategy have been suggested. One is that traumatic insemination is an adaptation to the development of the mating plug, a reproductive mechanism used by many species. Once a male finishes copulating, he injects a glutinous secretion into the female's reproductive tract, thereby "literally glu[ing] her genital tract closed". Traumatic insemination allows subsequent males to bypass the female's plugged genital tract, and inject sperm directly into her circulatory system.
Others have argued that the practice of traumatic insemination may have been an adaptation for males to circumvent female resistance to mating to eliminate courtship time, allowing one male to inseminate many mates when contact between them is brief; or that it evolved as a new development in the sperm competition as a means to deposit sperm as close to the ovaries as possible.
This bizarre method of insemination probably evolved as male bed bugs competed with each other to place their sperm closer and closer to the mother lode of eggs, the ovaries. Some male insects evolved long penises with which they enter the vagina but bypass the female's storage pouch and deposit their sperm further upstream close to the ovaries. A few males, notably among bed bugs, evolved traumatic insemination instead, and eventually this strange procedure became the norm among these insects.
It has recently been discovered that members of the plant bug genus Coridromius (Miridae) also practice traumatic insemination. In these bugs, the male intromittent organ is formed by the coupling of the aedeagus with the left paramere, as in bed bugs. Females also exhibit paragenital modifications at the site of intromission, which include grooves and invaginated copulatory tubes to guide the male paramere. The evolution of traumatic insemination in Coridromius represents a third independent emergence of this form of mating within the true bugs.
Health repercussions
While advantageous to the reproductive success of the individual male, traumatic insemination imposes a cost on females: reduced lifespan and decreased reproductive output. "These [costs] include (i) repair of the wound, (ii) leakage of blood, (iii) increased risk of infection through the puncture wound, and (iv) immune defence against sperm or accessory gland fluids that are introduced directly into the blood."
The male bed bug aedeagus has been shown to carry five (human) pathogenic microbes, and the exoskeleton of female bed bugs nine, including Penicillium chrysogenum, Staphylococcus saprophyticus, Stenotrophomonas maltophilia, Bacillus licheniformis, and Micrococcus luteus. Tests with blood agar have shown some of these species can survive in vivo. This suggests infections from these species may contribute to the increased mortality rate in bed bugs due to traumatic insemination.
The successive woundings each require energy to heal, leaving less energy available for other activities. Also, the wounds provide a possible point of infection which can reduce the female's lifespan. Once in the hemolymph, the sperm and ejaculatory fluids may act as antigens, triggering an immune reaction.
There is a tendency for dense colonies of bed bugs kept in laboratories to go extinct, starting with adult females. In such an environment, where mating occurs frequently, this high rate of adult female mortality suggests traumatic insemination is very detrimental to the female's health. The damage done, and the (unnecessarily) high mating rate of captive bed bugs, have been shown to cause a 25% higher-than-necessary mortality rate for females.
Bed bug adaptation
The effects of traumatic insemination are deleterious to the female. Female bed bugs have evolved a pair of specialized reproductive organs ("paragenitalia") at the site of penetration. Known as the ectospermalege and mesospermalege (referred to collectively as spermalege), these organs serve as sperm-receptacles from which sperm can migrate to the ovaries. All bed bug reproduction occurs via traumatic insemination and the spermalege. The genital tract, though functional, is used only for laying fertilized eggs.
The ectospermalege is a swelling in the abdomen, often folded, filled with hemocytes. The ectospermalege is visible externally in most bed bug species, giving the male a target through which to impale the female with the paramere. In species without an externally visible ectospermalege, traumatic insemination takes place over a wide range of the body surface.
The mesospermalege is a sac attached to the inner abdomen, under the ectospermalege. Sperm is injected through the male's aedeagus into the mesospermalege. In some species, the ectospermalege directly connects to the ovaries – thus, sperm and ejaculate never enters the hemolymph and thus never trigger an immune reaction. (The exact characteristics of the spermalege vary widely across different species of bed bugs.) The spermalege are generally found only in females. However, males in the genus Afrocimex possess an ectospermalege. Sperm remains in the spermalege for approximately four hours; after two days, none remains.
Male bed bugs have evolved chemoreceptors on their aedeagi. After impaling a female, the male can "taste" if a female has been recently mated. If he does, he will not copulate as long and will ejaculate less fluid into the female.
Use in the animal kingdom
Although traumatic insemination is most widely practiced among heteropterans (typical bugs), the phenomenon has been observed across a wide variety of other invertebrate taxa. These include:
Oxyurida (nematodes) – Traumatic insemination has been observed in pinworm genera including Auchenacantha, Citellina, Passalurus, and "probably" Austroxyris.
Acanthocephala (parasitic, thorny-headed worms) – The presence of mating plugs on the sides of Pomphorhynchus bulbocolli suggests traumatic insemination occurs in this species. Because these parasites cannot move after anchoring themselves to a host's intestine, traumatic insemination may have evolved to compensate for their immobility.
Rotifera (wheel animalcules) – In the genus Brachionus, the male pierces the syncytial integument (equivalent to skin) and injects sperm; in Asplanchna brightwelli the male secretes an enzyme which breaks down the female integument and injects sperm through the hole.
Turbellaria (free living flatworms) – Hermaphroditic flatworms reproduce by "penis fencing". Individuals "fence" with penises, attempting to use their penis to pierce the skin of the other and inject sperm. The 'loser' is the flatworm which is inseminated and must bear the energy costs of reproduction. One study of Pseudoceros bifurcus found "Most inseminations were unilateral. Even when reciprocal penis insertion could be achieved by the second partner, the first to inseminate obtained a longer injection time than the second." In another species, Macrostomum hystrix, the worm may also inject its sperm into its own head if other mates are not available.
Gastropod snails
Strepsiptera (twisted-winged parasites) – In Xenos vesparum, fertilization can occur either via extragenital ducts, or by traumatic insemination into the hemocoel.
Drosophila (fruit flies) – Ejaculates are injected through the body wall into the genital tract, not the abdomen.
Opisthobranchia (sea slugs) – Characterized by "repeated small injections into the dorsal surface of the partner, interrupted by synchronised circling movements", culminating in a standard genital insemination.
Harpactea (spiders) – The male of the spider species Harpactea sadistica pierces the female's body cavity and inseminates her ovaries directly.
Homosexual traumatic insemination
Traumatic insemination is not limited to male–female couplings. Male homosexual traumatic inseminations have been observed in the flower bug Xylocoris maculipennis and bed bugs of the genus Afrocimex.
In the genus Afrocimex, both species have a well-developed ectospermalege (but only females have a mesospermalege). The male ectospermalege is slightly different from that found in females, and amazingly enough, Carayon (1966) found that male Afrocimex bugs suffer actual homosexual traumatic inseminations. He found the male ectospermalege often showed characteristic mating scars, and histological studies showed "foreign" sperm were widely dispersed in the bodies of these homosexually mated males. Sperm cells of other males were, however, never found in or near the male reproductive tract. It therefore seems unlikely that sperm from other males could be inseminated when a male that has himself suffered traumatic insemination mates with a female. The costs and benefits, if any, of homosexual traumatic insemination in Afrocimex remain unknown.
Klaus Reinhardt of the University of Sheffield and colleagues observed two morphologically different kinds of spermalege in Afrocimex constrictus, a species in which both male and females are traumatically inseminated. They found females use sexual mimicry as a way to avoid traumatic insemination. In particular, they observed males, and females who had male spermalege structures, were inseminated less often than females with female spermalege structures.
In Xylocoris maculipennis, after a male traumatically inseminates another male, the injected sperm migrate to the testes. (The seminal fluid and most of the sperm are digested, giving the inseminated male a nutrient-rich meal.) It has been suggested, although there is no evidence, that when the inseminated male ejaculates into a female, the female receives both males' sperm.
Interspecies traumatic insemination
Cases of traumatic insemination between animals of different species will sometimes provoke a possibly lethal immune reaction. A female Cimex lectularius traumatically inseminated by a male C. hemipterus will swell up at the site of insemination as the immune system responds to male ejaculates. In the process, the female's lifespan is reduced. In some cases, this immune reaction can be so massive as to be almost immediately fatal. A female Hesperocimex sonorensis will swell up, blacken, and die within 24–48 hours after being traumatically inseminated by a male H. cochimiensis.
Similar mating practices
In the animal kingdom, traumatic insemination is not unique as a form of coercive sex. Research suggests that in the water beetle genus Acilius there is no courtship system between males and females. "It's a system of rape. But the females don't take things quietly. They evolve counter-weapons." Cited mating behaviors include males suffocating females underwater till exhausted, and allowing only occasional access to the surface to breathe for up to six hours (to prevent them breeding with other males), and females which have a variety of body shapes (to prevent males from gaining a grip). Foreplay is "limited to the female desperately trying to dislodge the male by swimming frantically around."
"Rape behavior" has been observed in a number of duck species. In the blue-winged teal, "rape attempts by paired males may occur at any time during the breeding season." Cited reasons for this being beneficial to the paired males include successful reproduction, and chasing away intruders from their territory. Bachelor herds of bottlenose dolphins will sometimes gang up on a female and coerce her to have sex with them, by swimming near her, chasing her if she attempts to escape, and making vocalized or physical threats. In the insect world, male water striders unable to penetrate her genital shield, will draw predators to a female until she copulates.
See also
Evolutionary arms race
Sexual conflict
Sexual cannibalism
References
External links
BBC article on traumatic insemination in the Harpactea sadistica spider, with video
Insect reproduction
Animal sexuality
Mating
Hemiptera | Traumatic insemination | [
"Biology"
] | 3,371 | [
"Behavior",
"Animals",
"Animal sexuality",
"Ethology",
"Sexuality",
"Mating"
] |
11,026,373 | https://en.wikipedia.org/wiki/Hydraulic%20engine%20house%2C%20Bristol%20Harbour | The Hydraulic engine house is part of the "Underfall Yard" in Bristol Harbour in Bristol, England.
The octagonal brick and terracotta chimney of the engine house dates from 1888, and is grade II* listed, as is the hydraulic engine house itself. It replaced the original pumping house, which is now The Pump House public house. It is built of red brick with a slate roof and originally contained two steam engines made by the Worthington Corporation. These were compound surface-condensing engines. These were replaced in 1907 by the current machines from Fullerton, Hodgart and Barclay of Paisley. It powered the docks' hydraulic system of cranes, bridges and locks until 2010.
Water is pumped from the harbour to a header tank and then fed by gravity to the high-pressure pumps, where it is pressurised, thence raising the external hydraulic accumulator. This stores the hydraulic energy, ensuring a smooth delivery of pressure and meaning that the pumps do not need to be running the whole time nor be capable of supplying the instantaneous peak demands. The working pressure is 750 pounds per square inch. The external accumulator was added about 1954 when the original inside the building's tower became difficult to service (but it remains in place). The building originally contained a pair of steam-powered pumps; however, these were replaced with three electrically driven ones in 1907. The engine house provided the power for equipment such as the lock gates and cranes until 2010.
The visitor centre in the hydraulic power house opened in time for Easter 2016.
See also
Grade II* listed buildings in Bristol
References
Engine houses
Hydraulic accumulators
Grade II* listed buildings in Bristol
Infrastructure completed in 1888
Bristol Harbourside
Grade II* listed industrial buildings | Hydraulic engine house, Bristol Harbour | [
"Physics"
] | 344 | [
"Physical systems",
"Hydraulic accumulators",
"Hydraulics"
] |
11,026,468 | https://en.wikipedia.org/wiki/Antigenic%20escape | Antigenic escape, immune escape, immune evasion or escape mutation occurs when the immune system of a host, especially of a human being, is unable to respond to an infectious agent: the host's immune system is no longer able to recognize and eliminate a pathogen, such as a virus. This process can occur in a number of different ways of both a genetic and an environmental nature. Such mechanisms include homologous recombination, and manipulation and resistance of the host's immune responses.
Different antigens are able to escape through a variety of mechanisms. For example, the African trypanosome parasites are able to clear the host's antibodies, as well as resist lysis and inhibit parts of the innate immune response. A bacterium, Bordetella pertussis, is able to escape the immune response by inhibiting neutrophils and macrophages from invading the infection site early on. One cause of antigenic escape is that a pathogen's epitopes (the binding sites for immune cells) become too similar to a person's naturally occurring MHC-1 epitopes, resulting in the immune system becoming unable to distinguish the infection from self-cells.
Antigenic escape is not only crucial for the host's natural immune response, but also for the resistance against vaccinations. The problem of antigenic escape has greatly deterred the process of creating new vaccines. Because vaccines generally cover only a small proportion of the strains of one virus, the recombination of antigenic DNA that leads to diverse pathogens allows these invaders to resist even newly developed vaccinations. Some antigens may even target pathways different from those the vaccine had originally intended to target. Recent research on many vaccines, including the malaria vaccine, has focused on how to anticipate this diversity and create vaccinations that can cover a broader spectrum of antigenic variation. On 12 May 2021, scientists reported to the United States Congress on the continuing threat of COVID-19 variants and COVID-19 escape mutations, such as the E484K virus mutation.
Mechanisms of evasion
Helicobacter pylori and homologous recombination
The most common of antigenic escape mechanisms, homologous recombination, can be seen in a wide variety of bacterial pathogens, including Helicobacter pylori, a bacterium that infects the human stomach. While a host's homologous recombination can act as a defense mechanism for fixing DNA double-stranded breaks (DSBs), it can also create changes in antigenic DNA that produce new, unrecognizable proteins that allow the antigen to escape recognition by the host's immune response. Through the recombination of H. pylori's outer membrane proteins, immunoglobulins can no longer recognize these new structures and, therefore, cannot attack the antigen as part of the normal immune response.
African trypanosomes
African trypanosomes are parasites that are able to escape the immune responses of its host animal through a range of mechanisms. Its most prevalent mechanism is its ability to evade recognition by antibodies through antigenic variation. This is achieved through the switching of its variant surface glycoprotein or VSG, a substance that coats the entire antigen. When this coat is recognized by an antibody, the parasite can be eliminated. However, variation of this coat can lead to antibodies being unable to recognize and eliminate the antigen. In addition to this, the VSG coat is able to clear the antibodies themselves to escape their clearing function.
Trypanosomes are also able to achieve evasion through the mediation of the host's immune response. Through the conversion of ATP to cAMP by the enzyme adenylate cyclase, the production of TNF-α, a signaling cytokine important for inducing inflammation, is inhibited in liver myeloid cells. In addition, trypanosomes are able to weaken the immune system by inducing B cell apoptosis (cell death) and the degradation of B cell lymphopoiesis. They are also able to induce suppressor molecules that can inhibit T cell reproduction.
Plant RNA viruses
Lafforgue et al. (2011) found that escape mutants in plant RNA viruses are encouraged when transgenic crops carrying artificial microRNA (amiR)-based resistance coexist with fully susceptible individuals of the same crop, and even more so when they coexist with weakly amiR-producing transgenics.
Tumor escape
Many head and neck cancers are able to escape immune responses in a variety of ways. One such example is through the production of pro-inflammatory and immunosuppressive cytokines. This can be achieved when the tumor recruits immunosuppressive cell subsets into the tumor's environment. Such cells include pro-tumor M2 macrophages, myeloid-derived suppressor cells (MDSCs), Th-2 polarized CD4 T-lymphocytes, and regulatory T-lymphocytes. These cells can then limit the responses of T cells through the production of cytokines and by releasing immune-modulating enzymes. Additionally, tumors can escape antigen-directed therapies by loss or down-regulation of the associated antigens, as demonstrated after checkpoint blockade immunotherapy and CAR-T cell therapy, though more recent data indicate that this may be prevented by localized bystander killing mediated by FasL/Fas. Alternatively, therapies can be developed to encompass multiple antigens in parallel.
Escape from vaccination
Consequences of recent vaccines
While vaccines are created to strengthen the immune response to pathogens, in many cases these vaccines are not able to cover the wide variety of strains a pathogen may have. Instead they may only protect against one or two strains, leading to the escape of strains not covered by the vaccine. This results in the pathogens being able to attack targets of the immune system different than those intended to be targeted by the vaccination. This parasitic antigen diversity is particularly troublesome for the development of the malaria vaccines.
Solutions to escape of vaccination
In order to solve this problem, vaccines must be able to cover the wide variety of strains within a bacterial population. In recent research of Neisseria meningitidis, the possibility of such broad coverage may be achieved through the combination of multi-component polysaccharide conjugate vaccines. However, in order to further improve upon broadening the scope of vaccinations, epidemiological surveillance must be conducted to better detect the variation of escape mutants and their spread.
See also
Viral strategies for immune response evasion
References
Cell biology
Vaccination
Phagocytes
Immune system
Human cells | Antigenic escape | [
"Biology"
] | 1,377 | [
"Immunology",
"Vaccination",
"Cell biology"
] |
11,026,603 | https://en.wikipedia.org/wiki/Geoforum | Geoforum is a peer-reviewed academic journal of geography which focuses on social, political, economic, and environmental activities that occur around the globe within the context of geographical space and time. It was originally published by Pergamon Press and is now published by Elsevier. The editors-in-chief are Rob Fletcher (Wageningen University & Research) and Sarah Hall (University of Nottingham). According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.4.
References
External links
Geography journals
Environmental social science journals
Elsevier academic journals
10 times per year journals
Academic journals established in 1970
English-language journals | Geoforum | [
"Environmental_science"
] | 130 | [
"Environmental social science journals",
"Environmental science journals",
"Environmental social science stubs",
"Environmental science journal stubs",
"Environmental social science"
] |
11,026,728 | https://en.wikipedia.org/wiki/Evolved%20gas%20analysis | Evolved gas analysis (EGA) is a method used to study the gas evolved from a heated sample that undergoes decomposition or desorption. It is either possible just to detect evolved gases using evolved gas detection (EGD) or to analyse explicitly which gases evolved using evolved gas analysis (EGA). Therefore different analytical methods can be employed such as mass spectrometry, Fourier transform spectroscopy, gas chromatography, or optical in-situ evolved gas analysis.
By coupling the thermal analysis instrument, e.g. TGA (thermogravimetry) or DSC (differential scanning calorimetry), with a fast quadrupole mass spectrometer (QMS), the detection of gas separation and the identification of the separated components are possible in exact time correlation with the other thermal analysis signals. DSC/TGA-QMS or TGA-QMS yields information on the composition (mass numbers of elements and molecules) of the evolved gases. It allows fast and easy interpretation of atomic/inorganic vapors and standard gases like H2, H2O, CO2, etc. Owing to fragmentation, however, the interpretation of organic molecules is sometimes difficult.
The combination with an FTIR (Fourier transform infrared spectrometer) has become popular, especially in the polymer-producing, chemical and pharmaceutical industries. DSC/TGA-FTIR or TGA-FTIR yields information on the composition (absorption bands) of the evolved gases (bonding conditions). The advantage is an easy interpretation (via spectral databases) of organic vapors without fragmentation. Symmetrical molecules cannot be detected.
An EGA instrument named the Thermal and Evolved-Gas Analyzer was flown on the Phoenix Lander probe that reached Mars in May 2008. Its purpose was to study Martian soil samples.
An EGA instrument was contained within the Sample Analysis at Mars (SAM) instrument suite onboard Curiosity Rover which landed on Mars in 2012. The instrument's goal was to understand the habitability and past climates of Mars. SAM detected complex organic carbon on the surface of Mars at Gale Crater in a 3.5 billion year old mudstone.
References
Instrumental analysis | Evolved gas analysis | [
"Chemistry"
] | 433 | [
"Instrumental analysis",
"Analytical chemistry stubs"
] |
11,027,455 | https://en.wikipedia.org/wiki/List%20of%20cities%20claimed%20to%20be%20built%20on%20seven%20hills | The title City of Seven Hills usually refers to Rome, which was founded on seven hills. However, there are many other cities that make the same claim.
Africa
Ceuta, Spain
Ibadan, Nigeria – Built on Oke Padre, Oke Ado, Oke Bola, Oke Mapo, Oke Are, Oke Sapati, Oke Mokola.
Kampala, Uganda – The hills are Mengo, Lubaga, Namirembe, Old Kampala, Kibuli, Nakasero and Makerere
Yaoundé, Cameroon
Americas
Albany, New York
Athens, Texas
Asunción, Paraguay
Chicontepec, Mexico, whose name is Nahuatl for "on seven hills"
Cincinnati, Ohio (now encompasses more than seven)
Dubuque, Iowa
Ellicott City, Maryland
Guaranda, Ecuador
Kernersville, North Carolina
Lynchburg, Virginia – College Hill, Garland Hill, Daniel's Hill, Federal Hill, Diamond Hill, White Rock Hill, and Franklin Hill were the original "Seven Hills" of the City of Lynchburg.
Nevada City, California – Built upon Aristocracy Hill, American Hill, Piety Hill, Prospect Hill, Wet Hill, Cement Hill, and Lost Hill. There is also a middle school and business district called Seven Hills.
Newton, Massachusetts
Nixa, Missouri
Port Washington, Wisconsin – The seven hills comprise two High School Hills, North Bluff Hill, South Hill, St. Mary's Hill, Billy Goat Hill and Sweet Cake Hill.
Pottsville, Pennsylvania – Built on Lawton's Hill, Greenwood Hill, Bunker Hill (Sharp Mountain), Guinea Hill, Forest Hills, Cottage Hill and Mount Hope.
Providence, Rhode Island – Built on Christian Hill, College Hill, Constitution Hill, Federal Hill, Smith Hill, Tockwotten Hill, and Weybosset Hill.
Richmond, Virginia – Built on numerous hills and escarpments to include Union Hill, Church Hill, Council Chamber Hill, Shockoe Hill, Gambles Hill, Navy Hill and Oregon Hill.
Rome, Georgia
Saint Paul, Minnesota – The exact list of seven hills varies, but every list includes Cathedral Hill, Capitol Hill, Dayton's Bluff, Crocus Hill (sometimes also called St. Clair), and Williams Hill—which is no longer a hill.
Valera, Trujillo, Venezuela
San Francisco, California (see List of hills in San Francisco)
Seattle, Washington (see Seven hills of Seattle)
Seven Hills, Ohio
Somerville, Massachusetts – Built on Clarendon Hill, College Hill, Spring Hill, Winter Hill, Central Hill, Plowed Hill, Cobble Hill.
Staten Island, New York – Fort Hill, Ward Hill, Fox Hill, Grymes Hill, Emerson Hill, Todt Hill, and Richmond Hill.
Tallahassee, Florida – Goodwood Plantation, Old Fort Park, Mission San Luis, Old Capitol, The Grove, FAMU (Lee Hall), FSU (Westcott Hall)
Victoria, Argentina
Washington, D.C. – Built on Capitol Hill, Meridian Hill, Floral Hills, Forest Hills, Hillbrook, Hillcrest, and Knox Hill.
Worcester, Massachusetts – Built on Pakachoag (Mount St. James), Sagatabscot (Union Hill), Hancock Hill, Chandler Hill (Belmosy Hill), Green Hill, Bancroft Hill, and Newton Hill
Yonkers, New York
Eurasia
Asia
Shimla, India – The seven hills are Jakhu Hill, Summer Hill, Bantony Hill, Inveram Hill, Elisium Hill, Observatory Hill and Prospect Hill.
Amman, Jordan: the seven hills are Qusur, Jufa, Taj, Nazha, Nasser, Natheef, and al-Akhdar.
Bhopal, India
Jerusalem – Jerusalem's seven hills are Mount Scopus, Mount Olivet and the Mount of Corruption (all three are peaks in a mountain ridge that lies east of the Old City), Mount Ophel, the original Mount Zion, the New Mount Zion and the hill on which the Antonia Fortress was built.
Shefa-Amr
Macau
Mecca, Saudi Arabia
Mumbai, India – built on the Saat Dweep Samuh (seven islands), now joined into a peninsula
Tehran, Iran
Kottayam, India
Thiruvananthapuram, India
Tirumala, India – One of the hill towns of Tirumala is where the Temple of Seven Hills, the Tirumala Venkateswara, is located. This temple is one of the most active places of worship in the world.
Europe
Aalten, Netherlands – Village (no town privileges) said to be built on seven hills.
Abergavenny, South Wales, United Kingdom
Armagh, in Northern Ireland, United Kingdom
Athens, Greece. The historical seven hills of Athens are Acropolis, Areopagus, Philopappou, Hill of the Nymphs, Pnyx, Lycabettus, and Tourkovounia.
Bamberg, Bavaria, Germany, The seven hills of Bamberg are; Cathedral Hill, Michaelsberg, Kaulberg/Obere Pfarre, Stefansberg, Jakobsberg, Altenburger Hill, and Abtsberg.
Barcelona, Catalonia, Spain, is said to be built on Turó del Carmel, Turó de la Rovira, Turó de la Creueta del Coll, Turó de la Peira, Turó del Putxet, Turó de Monterols and Turó de Modolell. Others exclude the latter and include Montjuïc and Mont Tàber, the 17 m hill where the Roman city of Barcino was built.
Bath, England, United Kingdom
Besançon, France
Bergamo, Lombardy, Italy
Bergen, Norway – Built not on but between seven mountains. See Seven Mountains, Bergen.
Bristol, England, United Kingdom
Brussels, Belgium – Said to be built on St. Michielsberg, Koudenberg, Warmoesberg, Kruidtuin, Kunstberg, Zavel and St. Pietersberg
Bucharest, Romania
Cagliari, Sardinia, Italy
Cáceres, Spain
Cherdyn, Russia
Chișinău, Moldova
Cosenza, Calabria, Italy
Durham, England, United Kingdom
Edinburgh, Scotland, United Kingdom (see Hills of Edinburgh)
Gorzów Wielkopolski, Poland
Iaşi, Romania (see Seven hills of Iaşi)
Istanbul, Turkey (see Seven hills of Istanbul)
Kaposvár, Hungary
Kyiv, Ukraine – Borichev, Shchekovitsa, Starokievska and Khorevitsa.
Lisbon, Portugal – São Jorge, São Vicente, Sant'Ana, Santo André, Chagas, Santa Catarina, São Roque
Lviv, Ukraine
Liverpool, UK
Madrid, Spain
Maribor, Slovenia, seven hills are Pohorje, Kozjak, Kalvarija, Mestni vrh, Piramida, Meljski hrib and Pekrska gorca.
Moscow, Russia (See Seven hills of Moscow)
Nijmegen, Netherlands – Seven hills within the 16c–19c city wall: Geertruidsberg, Hofberg (Valkhof), Lindenberg, Jansberg, Hundisberg, Hessenberg and a) Marienberg or b) Hoofdberg.
Nitra, Slovakia
Plovdiv, Bulgaria – Was originally built on seven hills but now only has six due to one being destroyed in the early 20th century (Markovo tepe)
Plymouth, England, United Kingdom
Prague, Czech Republic – Said to be built on seven or nine hills: Hradčany, Vítkov, (Opyš), Větrov, Skalka, (Emauzy), Vyšehrad, Karlov and Petřín
Pula, Croatia
Rome, Lazio, Italy (see Seven hills of Rome)
Saint-Étienne, France
Sandomierz, Poland
Sheffield, England, United Kingdom
Siegen, Germany
Smolensk, Russia
Telšiai, Lithuania
Toulon, France
Tulle, France
Turku, Finland
Ufa, Russia
Veszprém, Hungary
Vilnius, Lithuania
Vladimir, Russia
Zevenbergen, Netherlands – oronyms unknown, except Molenberg
Oceania
Brisbane (Seven Hills), Australia
Western Sydney, Australia – The seven or eight hills are found in Sydney's northwestern suburbs: Castle Hill, Baulkham Hills, Rooty Hill, Seven Hills, Prospect Hill, Beaumont Hills, Rouse Hill and Constitution Hill.
See also
Seven Hills
Revelation 17 – Mentions a beast on seven hills
References
Seven hills
Urban planning | List of cities claimed to be built on seven hills | [
"Engineering"
] | 1,751 | [
"Urban planning",
"Architecture"
] |
11,027,695 | https://en.wikipedia.org/wiki/Page%20attribute%20table | The page attribute table (PAT) is a processor supplementary capability extension to the page table format of certain x86 and x86-64 microprocessors. Like memory type range registers (MTRRs), they allow for fine-grained control over how areas of memory are cached, and are a companion feature to the MTRRs.
Unlike MTRRs, which provide the ability to manipulate the behavior of caching for a limited number of fixed physical address ranges, Page Attribute Tables allow for such behavior to be specified on a per-page basis, greatly increasing the ability of the operating system to select the most efficient behavior for any given task.
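As an illustration of the per-entry layout, the minimal sketch below reads the IA32_PAT model-specific register through the Linux msr interface and decodes its eight entries. The register address (0x277), the one-byte-per-entry layout and the memory-type values follow Intel's published encoding; the /dev/cpu/<n>/msr path assumes the msr kernel module is loaded and root privileges, and the script is a sketch rather than production code.

import struct

# Architectural PAT memory-type encodings (low three bits of each entry).
MEMORY_TYPES = {0: "UC", 1: "WC", 4: "WT", 5: "WP", 6: "WB", 7: "UC-"}
IA32_PAT = 0x277  # address of the IA32_PAT model-specific register

def read_pat_entries(cpu=0):
    """Return the eight PAT entries of one CPU as memory-type names.
    Requires the Linux 'msr' kernel module and root privileges."""
    with open(f"/dev/cpu/{cpu}/msr", "rb") as msr:
        msr.seek(IA32_PAT)              # MSRs are addressed by seeking to their number
        value, = struct.unpack("<Q", msr.read(8))
    # One PAT entry per byte; encodings 2 and 3 are reserved.
    return [MEMORY_TYPES.get((value >> (8 * i)) & 0x7, "reserved")
            for i in range(8)]

if __name__ == "__main__":
    for i, memory_type in enumerate(read_pat_entries()):
        print(f"PAT{i}: {memory_type}")

The page-table bits that index into these entries can then be chosen per page, which is the fine-grained control described above.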
Processors
The PAT is available on the Pentium III and newer Intel CPUs, and is also implemented by non-Intel x86 CPUs such as those from AMD.
See also
Write-combining
References
External links
Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3A: System Programming Guide, Part 1 see chapter 11, section 12.
Virtual memory
X86 architecture | Page attribute table | [
"Technology"
] | 200 | [
"Computing stubs"
] |
11,027,904 | https://en.wikipedia.org/wiki/Epsilon%20calculus | In logic, Hilbert's epsilon calculus is an extension of a formal language by the epsilon operator, where the epsilon operator substitutes for quantifiers in that language as a method leading to a proof of consistency for the extended formal language. The epsilon operator and epsilon substitution method are typically applied to a first-order predicate calculus, followed by a demonstration of consistency. The epsilon-extended calculus is further extended and generalized to cover those mathematical objects, classes, and categories for which there is a desire to show consistency, building on previously-shown consistency at earlier levels.
Epsilon operator
Hilbert notation
For any formal language L, extend L by adding the epsilon operator to redefine quantification:
(∃x) A(x) ≡ A(ϵx A)
(∀x) A(x) ≡ A(ϵx (¬A))
The intended interpretation of ϵx A is some x that satisfies A, if it exists. In other words, ϵx A returns some term t such that A(t) is true, otherwise it returns some default or arbitrary term. If more than one term can satisfy A, then any one of these terms (which make A true) can be chosen, non-deterministically. Equality is required to be defined under L, and the only rules required for L extended by the epsilon operator are modus ponens and the substitution of A(t) to replace A(x) for any term t.
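As a concrete illustration (the predicate here is chosen purely for definiteness): take A(x) to be the formula x·x = 2, read over the real numbers. Then ϵx A denotes some fixed real number whose square is 2 (one of ±√2, chosen arbitrarily but consistently), and (∃x) A(x) holds exactly when A(ϵx A) does, i.e. when (ϵx A)·(ϵx A) = 2. Dually, (∀x) A(x) is rendered as A(ϵx (¬A)): the term ϵx (¬A) names a counterexample to A if one exists, so if even that term satisfies A, then every x does.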
Bourbaki notation
In tau-square notation from N. Bourbaki's Theory of Sets, the quantifiers are defined as follows:
(∃x)A ⟺ (τ_x(A)|x)A
(∀x)A ⟺ ¬(∃x)¬A
where A is a relation in L, x is a variable, and τ_x(A) juxtaposes a τ at the front of A, replaces all instances of x with □, and links them back to τ. Then let Y be an assembly; (Y|x)A denotes the replacement of all variables x in A with Y.
This notation is equivalent to the Hilbert notation and is read the same. It is used by Bourbaki to define cardinal assignment since they do not use the axiom of replacement.
Defining quantifiers in this way leads to great inefficiencies. For instance, the expansion of Bourbaki's original definition of the number one, using this notation, has length approximately 4.5 × 10¹², and for a later edition of Bourbaki that combined this notation with the Kuratowski definition of ordered pairs, this number grows to approximately 2.4 × 10⁵⁴.
Modern approaches
Hilbert's program for mathematics was to justify those formal systems as consistent in relation to constructive or semi-constructive systems. While Gödel's results on incompleteness mooted Hilbert's Program to a great extent, modern researchers find the epsilon calculus to provide alternatives for approaching proofs of systemic consistency as described in the epsilon substitution method.
Epsilon substitution method
A theory to be checked for consistency is first embedded in an appropriate epsilon calculus. Second, a process is developed for re-writing quantified theorems to be expressed in terms of epsilon operations via the epsilon substitution method. Finally, the process must be shown to normalize the re-writing process, so that the re-written theorems satisfy the axioms of the theory.
Notes
References
Systems of formal logic
Proof theory | Epsilon calculus | [
"Mathematics"
] | 643 | [
"Mathematical logic",
"Proof theory"
] |
11,027,961 | https://en.wikipedia.org/wiki/Volvelle | A volvelle or wheel chart is a type of slide chart, a paper construction with rotating parts. It is considered an early example of a paper analog computer. Volvelles have been produced to accommodate organization and calculation in many diverse subjects. Early examples of volvelles are found in the pages of astronomy books. They can be traced back to "certain Arabic treatises on humoral medicine" and to the Persian astronomer, Abu Rayhan Biruni (c. 1000), who made important contributions to the development of the volvelle.
In the twentieth century, the volvelle had many diverse uses, as author Jessica Helfand documents in Reinventing the Wheel.
The rock band Led Zeppelin employed a volvelle in the sleeve design for the album Led Zeppelin III (1970).
Two games from the game company Infocom included volvelles inside their package as "feelies": Sorcerer (1983) and A Mind Forever Voyaging (1985). Both volvelles served to impede copying of the games, because they contained information needed to play the game.
See also
E6B
Ramon Llull
Planisphere
Pop-up book
Slide chart
Zairja
References
Further reading
Eye, No. 41, Vol. 11, edited by John L. Walters, Quantum Publishing, Autumn 2001.
Lindberg, Sten G. "Mobiles in Books: Volvelles, Inserts, Pyramids, Divinations, and Children's Games". The Private Library, 3rd series 2.2 (1979): 49.
An exhibition of volvelles at New York's Grolier Club.
Analog computers
Astronomical instruments
Communication design
Graphic design | Volvelle | [
"Astronomy",
"Engineering"
] | 351 | [
"Design",
"Communication design",
"Astronomical instruments"
] |
11,027,988 | https://en.wikipedia.org/wiki/Slide%20chart | A slide chart is a hand-held device, usually of paper, cardboard, or plastic, for conducting simple calculations or looking up information.
A circular slide chart is sometimes referred to as a wheel chart or volvelle.
Unlike other hand-held mechanical calculating devices such as slide rules and addiators, which have been replaced by electronic calculators and computer software, wheel charts and slide charts have survived to the present time. There are a number of companies who design and manufacture these devices.
Unlike the general-purpose mechanical calculators, slide charts are typically devoted to carrying out a particular specialized calculation, or displaying information on a single product or a particular process. For example, the "CurveEasy" wheel chart displays information related to spherical geometry calculations, and the Prestolog calculator is used for cost/profit calculations. Another example of a wheel chart is the planisphere, which shows the location of stars in the sky for a given location, date, and time.
Slide charts are often associated with particular sports, political campaigns or commercial companies. For example, a pharmaceutical company may create wheel charts printed with their company name and product information for distribution to medical practitioners.
Slide charts are common collectables.
See also
The E6B aviation flight computing device, still in regular use
References
Reinventing the Wheel, Jessica Helfand, Princeton Architectural Press, 2002. ()
External links
Slide Chart Examples
Communication design | Slide chart | [
"Engineering"
] | 292 | [
"Design stubs",
"Design",
"Communication design"
] |
11,028,411 | https://en.wikipedia.org/wiki/Jordan%20and%20Einstein%20frames | The Lagrangian in scalar-tensor theory can be expressed in the Jordan frame or in the Einstein frame, which are field variables that stress different aspects of the gravitational field equations and the evolution equations of the matter fields. In the Jordan frame the scalar field or some function of it multiplies the Ricci scalar in the Lagrangian and the matter is typically coupled minimally to the metric, whereas in the Einstein frame the Ricci scalar is not multiplied by the scalar field and the matter is coupled non-minimally. As a result, in the Einstein frame the field equations for the space-time metric resemble the Einstein equations but test particles do not move on geodesics of the metric. On the other hand, in the Jordan frame test particles move on geodesics, but the field equations are very different from Einstein equations. The causal structure in both frames is always equivalent and the frames can be transformed into each other as convenient for the given application.
Christopher Hill and Graham Ross have shown that there exist "gravitational contact terms" in the Jordan frame, whereby the action is modified by graviton exchange. This modification leads back to the Einstein frame as the effective theory.
Contact interactions arise in Feynman diagrams when a vertex contains a power of the exchanged momentum, q², which then cancels against the Feynman propagator, 1/q², leading to a point-like interaction. This must be included as part of the effective action of the theory. When the contact term is included, results for amplitudes in the Jordan frame will be equivalent to those in the Einstein frame, and
results of physical calculations in the Jordan frame that omit the contact terms will generally be incorrect. This implies that the Jordan frame action is misleading, and the Einstein frame is uniquely correct for fully representing the physics.
Equations and physical interpretation
If we perform a Weyl rescaling of the metric, then the Riemann and Ricci tensors are modified as follows.
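A representative form of this transformation, shown here for orientation only (the conformal factor Ω is introduced for illustration, and the precise expression depends on the sign and curvature conventions adopted): writing the rescaled metric as g̃_μν = Ω²(x) g_μν, the Ricci scalar in four dimensions becomes
R̃ = Ω⁻² [ R − 6 □(ln Ω) − 6 g^μν ∂_μ(ln Ω) ∂_ν(ln Ω) ],
where □ is the d'Alembertian built from g_μν; the corresponding rules for the Riemann and Ricci tensors follow from the same rescaling.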
As an example, consider the transformation of a simple scalar-tensor action with an arbitrary set of matter fields coupled minimally to the curved background.
The tilde fields then correspond to quantities in the Jordan frame and the fields without the tilde correspond to fields in the Einstein frame. Note that the matter action changes only through the rescaling of the metric.
The Jordan and Einstein frames are constructed to render certain parts of physical equations simpler, which also gives the frames and the fields appearing in them particular physical interpretations. For instance, in the Einstein frame, the equations for the gravitational field will be of the form G_μν = κ T^(eff)_μν.
I.e., they can be interpreted as the usual Einstein equations with particular sources on the right-hand side. Similarly, in the Newtonian limit one would recover the Poisson equation for the Newtonian potential with separate source terms.
However, by transforming to the Einstein frame the matter fields are now coupled not only to the background but also to the field which now acts as an effective potential. Specifically, an isolated test particle will experience a universal four-acceleration
where is the particle four-velocity. I.e., no particle will be in free-fall in the Einstein frame.
On the other hand, in the Jordan frame, all the matter fields are coupled minimally to the Jordan-frame metric, and isolated test particles move on geodesics with respect to that metric. This means that if we were to reconstruct the Riemann curvature tensor by measurements of geodesic deviation, we would in fact obtain the curvature tensor in the Jordan frame. When, on the other hand, we infer the presence of matter sources from gravitational lensing using the usual relativistic theory, we obtain the distribution of the matter sources in the sense of the Einstein frame.
Models
Jordan frame gravity can be used to calculate type IV singular bouncing cosmological evolution, to derive the type IV singularity.
See also
Albert Einstein
Pascual Jordan
References
Valerio Faraoni, Edgard Gunzig, Pasquale Nardone, Conformal transformations in classical gravitational theories and in cosmology, Fundam. Cosm. Phys. 20(1999):121, .
Eanna E. Flanagan, The conformal frame freedom in theories of gravitation, Class. Q. Grav. 21(2004):3817, .
General relativity
Tensors | Jordan and Einstein frames | [
"Physics",
"Engineering"
] | 877 | [
"General relativity",
"Tensors",
"Relativity stubs",
"Theory of relativity"
] |
11,028,991 | https://en.wikipedia.org/wiki/Essential%20fructosuria | Essential fructosuria, caused by a deficiency of the enzyme hepatic fructokinase, is a clinically benign condition characterized by the incomplete metabolism of fructose in the liver, leading to its excretion in urine. Fructokinase (sometimes called ketohexokinase) is the first enzyme involved in the degradation of fructose to fructose-1-phosphate in the liver.
This defective degradation does not cause any clinical symptoms; fructose is either excreted unchanged in the urine or metabolized to fructose-6-phosphate by alternate pathways in the body, most commonly by hexokinase in adipose tissue and muscle.
Signs and symptoms
Cause
Essential fructosuria is a genetic condition that is inherited in an autosomal recessive manner. Mutations in the KHK gene, located on chromosome 2p23.3-23.2 are responsible. The incidence of essential fructosuria has been estimated at 1:130,000. The actual incidence is likely higher, because those affected are asymptomatic.
Diagnosis
A diagnosis of essential fructosuria is typically made after a positive routine test for reducing sugars in the urine. An additional test with glucose oxidase must also be carried out (with a negative result indicating essential fructosuria) as a positive test for reducing sugars is most often a result of glucosuria secondary to diabetes mellitus. The excretion of fructose in the urine is not constant, it depends largely on dietary intake.
Treatment
No treatment is indicated for essential fructosuria, while the degree of fructosuria depends on the dietary fructose intake, it does not have any clinical manifestations. The amount of fructose routinely lost in urine is quite small. Other errors in fructose metabolism have greater clinical significance. Hereditary fructose intolerance, or the presence of fructose in the blood (fructosemia), is caused by a deficiency of aldolase B, the second enzyme involved in the metabolism of fructose.
This enzyme deficiency results in an accumulation of fructose-1-phosphate, which inhibits the production of glucose and results in diminished regeneration of adenosine triphosphate. Clinically, patients with hereditary fructose intolerance are much more severely affected than those with essential fructosuria, with elevated uric acid, growth abnormalities and can result in coma if untreated.
References
External links
Autosomal recessive disorders
Inborn errors of carbohydrate metabolism | Essential fructosuria | [
"Chemistry"
] | 538 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism"
] |
11,029,615 | https://en.wikipedia.org/wiki/Value%20change%20dump | Value change dump (VCD) (also known less commonly as "variable change dump") is an ASCII-based format for dumpfiles generated by EDA logic simulation tools. The standard, four-value VCD format was defined along with the Verilog hardware description language by the IEEE Standard 1364-1995 in 1996. An extended VCD format defined six years later in the IEEE standard 1364-2001 supports the logging of signal strength and directionality. The simple and yet compact structure of the VCD format has allowed its use to become ubiquitous and to spread into non-Verilog tools such as the VHDL simulator GHDL and various kernel tracers. A limitation of the format is that it is unable to record the values that are stored in memories.
Structure/syntax
The VCD file comprises a header section with date, simulator, and timescale information; a variable definition section; and a value change section, in that order. The sections are not explicitly delineated within the file, but are identified by the inclusion of keywords belonging to each respective section.
VCD keywords are marked by a leading $; in general every keyword starts a command which is terminated by an explicit $end. Variable identifiers may also start with a $, but these may be distinguished by context.
All VCD tokens are delineated by whitespace. Data in the VCD file is case sensitive.
Header section
The header section of the VCD file includes a timestamp, a simulator version number, and a timescale, which maps the time increments listed in the value change section to simulation time units.
Variable definition section
The variable definition section of the VCD file contains scope information as well as lists of signals instantiated in a given scope.
Each variable is assigned an arbitrary identifier for use in the value change section. The identifier is composed of one or more printable ASCII characters from ! to ~ (decimal 33 to 126), these are conventionally kept short (i.e. one or two characters). Several variables can share an identifier if the simulator determines that they will always have the same value, i.e. are the same wire in the scope of the overall netlist.
The scope type definitions closely follow Verilog concepts, and include the types module, task, function, and fork.
$dumpvars section
The section beginning with $dumpvars keyword contains initial values of all variables dumped.
Value change section
The value change section contains a series of time-ordered value changes for the signals in a given simulation model. The current time is indicated by '#' followed by the timestamp. For a scalar (single-bit) signal the format is the signal value, denoted by 0, 1, x, or z, followed immediately by the signal identifier, with no space between the value and the identifier. For vector (multi-bit) signals the format is the letter 'b' or 'B', followed by the value in binary format, a space, and then the signal identifier. The value of a real variable is denoted by the letter 'r' or 'R', followed by the data in %.16g printf() format, a space, and then the variable identifier.
Example VCD file
$date
Date text. For example: November 11, 2009.
$end
$version
VCD generator tool version info text.
$end
$comment
Any comment text.
$end
$timescale 1ps $end
$scope module logic $end
$var wire 8 # data $end
$var wire 1 $ data_valid $end
$var wire 1 % en $end
$var wire 1 & rx_en $end
$var wire 1 ' tx_en $end
$var wire 1 ( empty $end
$var wire 1 ) underrun $end
$upscope $end
$enddefinitions $end
$dumpvars
bxxxxxxxx #
x$
0%
x&
x'
1(
0)
$end
#0
b10000001 #
0$
1%
0&
1'
0(
0)
#2211
0'
#2296
b0 #
1$
#2302
0$
#2303
The code above defines 7 signals by using $var:
$var type bitwidth id name
The id is used later in the value change dump. The value change dump starts after $enddefinitions $end and is based on timestamps. A timestamp is denoted by '#' followed by a number. At each timestamp, the signals that change value are listed as value/id pairs:
new_value id
When loaded into a waveform viewer, this example is displayed as a timing diagram of the signals defined above.
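A minimal Python sketch of a reader for the value change section is shown below (an illustration only: it covers scalar, vector and real changes of the four-state format, skips every $-command without interpreting it, and ignores the extended strength/direction syntax; the file name example.vcd is an assumed placeholder).

from collections import defaultdict

def parse_vcd(path):
    """Tiny VCD reader: returns {identifier: [(time, value), ...]}."""
    changes = defaultdict(list)
    time = 0
    with open(path) as f:
        tokens = iter(f.read().split())
    in_definitions = True
    for token in tokens:
        if in_definitions:                 # skip the header and variable definitions
            if token == "$enddefinitions":
                in_definitions = False
            continue
        if token.startswith("$"):          # $dumpvars / $end markers carry no value
            continue
        if token.startswith("#"):          # '#<n>' sets the current simulation time
            time = int(token[1:])
        elif token[0] in "bBrR":           # vector/real change: value token, then identifier token
            changes[next(tokens)].append((time, token[1:]))
        else:                              # scalar change: value character glued to the identifier
            changes[token[1:]].append((time, token[0]))
    return changes

# Example: print the history of the 8-bit 'data' signal (identifier '#') from the file above.
if __name__ == "__main__":
    for time, value in parse_vcd("example.vcd")["#"]:
        print(time, value)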
See also
Waveform viewer
External links
IEEE Std 1364-2001 – The official standard for Verilog 2001 (not free, includes chapter defining VCD).
Writing your own VCD File – Informal but comprehensive reference.
Value Change Dump – Explanation of VCD format, with example.
Compare VCD – A command-line tool to compare VCD files (licensed under the GPL).
Verilog::VCD – Perl CPAN software for parsing Verilog VCD files (licensed under the GPL).
Verilog_VCD – Translated into Python from Perl CPAN software
ProcessVCD – Java package for parsing VCD files (licensed under the MIT License).
PyVCD – Python package that writes Value Change Dump (VCD) files as specified in IEEE 1364-2005 (MIT License).
vcdMaker – Tool (Linux, Windows) for translating text log files into VCD files (MIT License).
yne/vcd – (Linux, Mac, Windows) CLI to Display VCD files on the terminal (MIT License).
IEEE standards
EDA file formats | Value change dump | [
"Technology"
] | 1,209 | [
"Computer standards",
"IEEE standards"
] |
11,030,945 | https://en.wikipedia.org/wiki/Limit%20comparison%20test | In mathematics, the limit comparison test (LCT) (in contrast with the related direct comparison test) is a method of testing for the convergence of an infinite series.
Statement
Suppose that we have two series Σ aₙ and Σ bₙ with aₙ ≥ 0 and bₙ > 0 for all n.
Then if aₙ/bₙ → c as n → ∞, with 0 < c < ∞, then either both series converge or both series diverge.
Proof
Because aₙ/bₙ → c, we know that for every ε > 0 there is a positive integer n₀ such that for all n ≥ n₀ we have |aₙ/bₙ − c| < ε, or equivalently
(c − ε) bₙ < aₙ < (c + ε) bₙ.
As c > 0, we can choose ε to be sufficiently small such that c − ε is positive.
So bₙ < aₙ / (c − ε), and by the direct comparison test, if Σ aₙ converges then so does Σ bₙ.
Similarly aₙ < (c + ε) bₙ, so if Σ aₙ diverges, again by the direct comparison test, so does Σ bₙ.
That is, both series converge or both series diverge.
Example
We want to determine if the series converges. For this we compare it with the convergent series
As we have that the original series also converges.
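As a concrete instance of how the test is applied (the comparison pair is chosen here purely for illustration), consider the series Σ_{n≥1} 1/(n² + n) and compare it with the convergent series Σ_{n≥1} 1/n². Since
lim_{n→∞} [1/(n² + n)] / [1/n²] = lim_{n→∞} n² / (n² + n) = 1,
which is finite and positive, the two series either both converge or both diverge; as Σ 1/n² converges, so does Σ 1/(n² + n).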
One-sided version
One can state a one-sided comparison test by using the limit superior. Let aₙ ≥ 0 and bₙ > 0 for all n. Then if limsup_{n→∞} aₙ/bₙ = c with 0 ≤ c < ∞ and Σ bₙ converges, then Σ aₙ necessarily converges.
Example
Let and for all natural numbers . Now
does not exist, so we cannot apply the standard comparison test. However,
and since converges, the one-sided comparison test implies that converges.
Converse of the one-sided comparison test
Let aₙ, bₙ > 0 for all n. If Σ aₙ diverges and Σ bₙ converges, then necessarily limsup_{n→∞} aₙ/bₙ = ∞, that is, liminf_{n→∞} bₙ/aₙ = 0. The essential content here is that in some sense the numbers aₙ are larger than the numbers bₙ.
Example
Let f(z) = Σₙ cₙ zⁿ be analytic in the unit disc and have image of finite area. By Parseval's formula the area of the image of f is proportional to Σₙ n |cₙ|². Moreover, Σₙ 1/n diverges. Therefore, by the converse of the comparison test, we have liminf_{n→∞} (n |cₙ|²) / (1/n) = liminf_{n→∞} n² |cₙ|² = 0, that is, liminf_{n→∞} n |cₙ| = 0.
See also
Convergence tests
Direct comparison test
References
Further reading
Rinaldo B. Schinazi: From Calculus to Analysis. Springer, 2011, , pp. 50
Michele Longo and Vincenzo Valori: The Comparison Test: Not Just for Nonnegative Series. Mathematics Magazine, Vol. 79, No. 3 (Jun., 2006), pp. 205–210 (JSTOR)
J. Marshall Ash: The Limit Comparison Test Needs Positivity. Mathematics Magazine, Vol. 85, No. 5 (December 2012), pp. 374–375 (JSTOR)
External links
Pauls Online Notes on Comparison Test
Convergence tests
Articles containing proofs | Limit comparison test | [
"Mathematics"
] | 477 | [
"Theorems in mathematical analysis",
"Convergence tests",
"Articles containing proofs"
] |
11,031,482 | https://en.wikipedia.org/wiki/Topological%20derivative | The topological derivative is, conceptually, a derivative of a shape functional with respect to infinitesimal changes in its topology, such as adding an infinitesimal hole or crack. When used in higher dimensions than one, the term topological gradient is also used to name the first-order term of the topological asymptotic expansion, dealing only with infinitesimal singular domain perturbations. It has applications in shape optimization, topology optimization, image processing and mechanical modeling.
Definition
Let Ω be an open bounded domain of ℝ^d, with d ≥ 2, which is subject to a nonsmooth perturbation confined in a small region ω_ε(x̂) = x̂ + εω of size ε, with x̂ an arbitrary point of Ω and ω a fixed domain of ℝ^d. Let χ be a characteristic function associated to the unperturbed domain and χ_ε be a characteristic function associated to the perforated domain Ω_ε = Ω ∖ ω_ε. A given shape functional ψ(χ_ε(x̂)) associated to the topologically perturbed domain admits the following topological asymptotic expansion:
ψ(χ_ε(x̂)) = ψ(χ) + f(ε) D_T ψ(x̂) + o(f(ε)),
where ψ(χ) is the shape functional associated to the reference domain, f(ε) is a positive first-order correction function of ψ(χ), and o(f(ε)) is the remainder. The function x̂ ↦ D_T ψ(x̂) is called the topological derivative of ψ at x̂.
Applications
Structural mechanics
The topological derivative can be applied to shape optimization problems in structural mechanics. It can be considered as the singular limit of the shape derivative, and is a generalization of this classical tool in shape optimization. Shape optimization concerns itself with finding an optimal shape, that is, with finding a domain Ω that minimizes some scalar-valued objective function J(Ω). The topological derivative technique can be coupled with the level-set method.
In 2005, the topological asymptotic expansion for the Laplace equation with respect to the insertion of a short crack inside a plane domain was found. It makes it possible to detect and locate cracks for a simple model problem: the steady-state heat equation with the heat flux imposed and the temperature measured on the boundary. The topological derivative has been fully developed for a wide range of second-order differential operators, and in 2011 it was applied to the Kirchhoff plate bending problem with a fourth-order operator.
Image processing
In the field of image processing, in 2006, the topological derivative was used to perform edge detection and image restoration. The impact of an insulating crack in the domain is studied. The topological sensitivity gives information on the image edges. The presented algorithm is non-iterative and, thanks to the use of spectral methods, has a short computing time. Only O(N log N) operations are needed to detect edges, where N is the number of pixels. During the following years, other problems were considered: classification, segmentation, inpainting and super-resolution. This approach can be applied to gray-level or color images. Until 2010, isotropic diffusion was used for image reconstructions. The topological gradient is also able to provide edge orientation, and this information can be used to perform anisotropic diffusion.
In 2012, a general framework was presented to reconstruct an image defined on a domain, given some noisy observations that live in a Hilbert space. The observation space and the linear observation operator relating the image to the observations depend on the specific application. The idea to recover the original image is to minimize a functional with two terms: a regularization term, weighted by a positive definite tensor, which ensures that the recovered image is regular, and a data term, which measures the discrepancy with the observations.
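A representative form of such a functional, written here purely as an illustration (the symbols u for the image, v for the observed data, L for the observation operator, Y for the observation space, Ω for the image domain and C for the positive definite tensor are notational assumptions, not fixed by the text above), is

J(u) = \int_\Omega (C\,\nabla u)\cdot\nabla u \,\mathrm{d}x \; + \; \lVert Lu - v \rVert_Y^2 .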
In this general framework, different types of image reconstruction can be performed such as
image denoising, in which the observation operator is the identity and the observations are the noisy image itself,
image denoising and deblurring, in which the observation operator is a convolution with a motion-blur or Gaussian-blur kernel,
image inpainting, in which the observation operator restricts the image to the observed subset of the domain, the complement of that subset being the region where the image has to be recovered.
In this framework, the asymptotic expansion of the cost function in the case of a crack provides the same form of topological derivative, expressed in terms of the normal to the crack and a constant diffusion coefficient, together with the solutions of an associated direct problem and adjoint problem.
Thanks to the topological gradient, it is possible to detect the edges and their orientation and to define an appropriate diffusion tensor for the image reconstruction process.
In image processing, the topological derivatives have also been studied in the case of a multiplicative noise of gamma law or in presence of Poissonian statistics.
Inverse problems
In 2009, the topological gradient method was applied to tomographic reconstruction. The coupling between the topological derivative and the level-set method has also been investigated in this application. In 2023, the topological derivative was used to optimize shapes for inverse rendering.
References
Books
A. A. Novotny and J. Sokolowski, Topological derivatives in shape optimization, Springer, 2013.
External links
Allaire and al. Structural optimization using topological and shape sensitivity via a level set method
Mathematical optimization | Topological derivative | [
"Mathematics"
] | 957 | [
"Mathematical optimization",
"Mathematical analysis"
] |
11,032,409 | https://en.wikipedia.org/wiki/Ternium | Ternium S.A. is a manufacturer of flat and long steel products with production centers in Argentina, Brazil, Mexico, Guatemala, Colombia, and the United States. Ternium owns a 51.5% interest in Usiminas of Brazil. The company has an annual production capacity of 15.4 million tons. In 2023, 55% of its sales were from Mexico; 21% of sales were from Argentina; Bolivia, Chile, Paraguay and Uruguay; 13% of sales were from Brazil; and 11% of sales were from the United States, Colombia and Central America.
Approximately 21% of the company is publicly-traded; the remainder is controlled by San Faustin S.A., which is in turn controlled by Rocca & Partners Stichting Administratiekantoor Aandelen San Faustin, a Stichting.
The company takes its name from the Latin words Ter (three) and Eternium (eternal) in reference to the integration of the three steel mills.
History
Ternium was formed in 2005 by the consolidation of three companies: Siderar of Argentina, Sidor of Venezuela and Hylsa of Mexico.
In 2006, Ternium was listed on the New York Stock Exchange.
In 2007, Ternium acquired Grupo IMSA, expanding its operations into Guatemala and the United States.
In April 2008, after a series of worker disputes over pay which led to strike actions, Sidor was nationalized by the government of Venezuela. In May 2009, compensation of US$1.65 billion was paid for Ternium's 59.7% stake in Sidor.
In August 2010, Ternium acquired a 54% interest in Ferrasa, and in April 2015, Ternium acquired the remainder of the company, which was renamed Ternium Colombia.
In 2017, Ternium acquired CSA Siderúrgica do Atlântico for €1.4 billion and renamed it Ternium Brazil.
In July 2023, Ternium increased its ownership in Usiminas to 51.5%.
References
External links
Companies based in Luxembourg City
Companies listed on the Buenos Aires Stock Exchange
Companies listed on the New York Stock Exchange
Iron and steel mills
Luxembourgian companies established in 2005
Manufacturing companies established in 2005
Manufacturing companies of Argentina
Manufacturing companies of Mexico
Steel companies of Luxembourg
Techint | Ternium | [
"Chemistry"
] | 464 | [
"Iron and steel mills",
"Metallurgical facilities"
] |
11,032,761 | https://en.wikipedia.org/wiki/Stream%20pool | A stream pool, in hydrology, is a stretch of a river or stream in which the water depth is above average and the water velocity is below average.
Formation
A stream pool may be bedded with sediment or armoured with gravel, and in some cases the pool formations may have been formed as basins in exposed bedrock formations. Plunge pools, or plunge basins, are stream pools formed by the action of waterfalls. Pools are often formed on the outside of a bend in a meandering river.
Dynamics
The depth and lack of water velocity often lead to stratification in stream pools, especially in warmer regions. In warm arid regions of the Western United States, surface waters were found to be 3–9 °C higher than those at the bottom.
Habitat
This portion of a stream often provides a specialized aquatic ecosystem habitat for organisms that have difficulty feeding or navigating in swifter reaches of the stream or in seasonally warmer water. Such pools can be important fish habitat, especially where many streams reach high summer temperatures and very low-flow dry season characteristics. In warm and arid regions, the stratification of stream pools provide cooler water for fish that prefer low water temperatures, such as the redband trout (Oncorhynchus mykiss) in the Western United States. Mosquito larvae, which prefer still and often stagnant water, can be found in stream pools due to the low water velocity.
See also
Pond
Reach (geography)
Riffle
Stream gradient
List of waterfalls by flow rate
List of waterfalls by type
Notes
External links
USGS: Stream Modeling website
Water streams
Hydrology
Bodies of water | Stream pool | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 322 | [
"Hydrology",
"Environmental engineering"
] |
11,033,300 | https://en.wikipedia.org/wiki/BASIX | BASIX or Building Sustainability Index is a scheme introduced by the government of New South Wales, Australia on 1 July 2004 to regulate the energy efficiency of residential buildings. It offers an online assessment tool for rating the expected performance of any residential development in terms of water efficiency, thermal comfort and energy usage. It aims to reduce water consumption and greenhouse gas emissions by 40% compared to pre-BASIX (2004) buildings. In order to meet these expectations, many people install rainwater tanks and environmentally friendly, water saving spouts.
Developers have complained about additional costs for apartment buildings.
Monitoring
Monitoring of occupied BASIX single dwellings conducted by Sydney Water confirmed that the actual water savings were 40.6% in 2008 and 37.6% in 2009. BASIX has won a number of awards in Australia, the most recent one being in 2007 for "Energy in Society - For contributing to policy, law and the community" at the Australian Institute of Energy. Economic analyses estimate that by 2050 the total net savings in New South Wales will be between $294 million and $1.1 billion.
See also
Green building in Australia
References
External links
BASIX - Building Sustainability Index
Energy Labels For Homes & Business Premises
A Guide to BASIX Certificates
A Detailed Guide On The BASIX Certificate: Smart Solution For Sustainable Homes
Sustainable building in Australia
Building energy rating
Urban planning in Australia
Energy in New South Wales | BASIX | [
"Engineering"
] | 278 | [
"Architecture stubs",
"Architecture"
] |
11,033,525 | https://en.wikipedia.org/wiki/Radio%20atmospheric%20signal | A radio atmospheric signal or sferic (sometimes also spelled "spheric") is a broadband electromagnetic impulse that occurs as a result of natural atmospheric lightning discharges. Sferics may propagate from their lightning source without major attenuation in the Earth–ionosphere waveguide, and can be received thousands of kilometres from their source. On a time-domain plot, a sferic may appear as a single high-amplitude spike in the time-domain data. On a spectrogram, a sferic appears as a vertical stripe (reflecting its broadband and impulsive nature) that may extend from a few kHz to several tens of kHz, depending on atmospheric conditions.
Sferics received from about distance or greater have their frequencies slightly offset in time, producing tweeks.
When the electromagnetic energy from a sferic escapes the Earth-ionosphere waveguide and enters the magnetosphere, it becomes dispersed by the near-Earth plasma, forming a whistler signal. Because the source of the whistler is an impulse (i.e., the sferic), a whistler may be interpreted as the impulse response of the magnetosphere (for the conditions at that particular instant).
Introduction
A lightning channel with all its branches and its electric currents behaves like a huge antenna system from which electromagnetic waves of all frequencies are radiated. Beyond a distance where luminosity is visible and thunder can be heard (typically about 10 km), these electromagnetic impulses are the only sources of direct information about thunderstorm activity on the ground. Transient electric currents during return strokes (R strokes) or intracloud strokes (K strokes) are the main sources for the generation of impulse-type electromagnetic radiation known as sferics (sometimes called atmospherics). While this impulsive radiation dominates at frequencies less than about 100 kHz (loosely called long waves), a continuous noise component becomes increasingly important at higher frequencies. The longwave electromagnetic propagation of sferics takes place within the Earth-ionosphere waveguide between the Earth's surface and the ionospheric D- and E- layers. Whistlers generated by lightning strokes can propagate into the magnetosphere along the geomagnetic lines of force. Finally, upper-atmospheric lightning or sprites, which occur at mesospheric altitudes, are short-lived electric breakdown phenomena, probably generated by giant lightning events on the ground.
Source properties
Basic stroke parameters
In a typical cloud-to-ground stroke (R stroke), negative electric charge (electrons) of the order of stored within the lightning channel is lowered to the ground within a typical impulse time interval of This corresponds to an average current flowing within the channel of the order of Maximum spectral energy is generated near frequencies of or at wavelengths of (where is the speed of light). In typical intracloud K-strokes, positive electric charge of the order of in the upper part of the channel and an equivalent amount of negative charge in its lower part neutralize within a typical time interval of The corresponding values for average electric current, frequency and wavelength are and The energy of K-strokes is in general two orders of magnitude weaker than the energy of R-strokes.
The typical length of lightning channels can be estimated to be of the order of for R-strokes and for K-strokes. Often, a continuing current component flows between successive R-strokes. Its "pulse" time typically varies between about its electric current is of the order of corresponding to the numbers of and Both R-strokes as well as K-strokes produce sferics seen as a coherent impulse waveform within a broadband receiver tuned between 1–100 kHz. The electric field strength of the impulse increases to a maximum value within a few microseconds and then declines like a damped oscillator. The orientation of the field strength increase depends on whether it is a negative or a positive discharge
The visible part of a lightning channel has a typical length of about 5 km. Another part of comparable length may be hidden in the cloud and may have a significant horizontal branch. Evidently, the dominant wavelength of the electromagnetic waves of R- and K-strokes is much larger than their channel lengths. The physics of electromagnetic wave propagation within the channel must thus be derived from full wave theory, because the ray concept breaks down.
Electric channel current
The channel of an R stroke can be considered as a thin isolated wire of length L and diameter d in which negative electric charge has been stored. In terms of electric circuit theory, one can adopt a simple transmission line model with a capacitor, where the charge is stored, a resistance of the channel, and an inductance simulating the electric properties of the channel. At the moment of contact with the perfectly conducting Earth surface, the charge is lowered to the ground. In order to fulfill the boundary conditions at the top of the wire (zero electric current) and at the ground (zero electric voltage), only standing resonant wave modes can exist. The fundamental mode, which transports electric charge to the ground most effectively, thus has a wavelength λ four times the channel length L. In the case of the K stroke, the lower boundary is the same as the upper boundary. Of course, this picture is valid only for wave mode 1 (λ/4 antenna)
and perhaps for mode 2 (λ/2 antenna), because these modes do not yet "feel" the contorted configuration of the real lightning channel. The higher order modes contribute to the incoherent noisy signals in the higher frequency range (> 100 kHz).
Transfer function of Earth–ionosphere waveguide
Sferics can be simulated approximately by the electromagnetic radiation field of a vertical Hertzian dipole antenna. The maximum spectral amplitude of the sferic typically is near 5 kHz. Beyond this maximum, the spectral amplitude decreases as 1/f if the Earth's surface were perfectly conducting. The effect of the real ground is to attenuate the higher frequencies more strongly than the lower frequencies (Sommerfeld's ground wave).
R strokes emit most of their energy within the ELF/VLF range (ELF = extremely low frequencies, < 3 kHz; VLF = very low frequencies, 3–30 kHz). These waves are reflected and attenuated on the ground as well as within the ionospheric D layer, near 70 km altitude during day time conditions, and near 90 km height during the night. Reflection and attenuation on the ground depends on frequency, distance, and orography. In the case of the ionospheric D-layer, it depends, in addition, on time of day, season, latitude, and the geomagnetic field in a complicated manner. VLF propagation within the Earth–ionosphere waveguide can be described by ray theory and by wave theory.
When distances are less than about 500 km (depending on frequency), then ray theory is appropriate. The ground wave and the first hop (or sky) wave reflected at the ionospheric D layer interfere with each other.
At distances greater than about 500 km, sky waves reflected several times at the ionosphere must be added. Therefore, mode theory is here more appropriate. The first mode is least attenuated within the Earth–ionosphere waveguide, and thus dominates at distances greater than about 1000 km.
The Earth–ionosphere waveguide is dispersive. Its propagation characteristics are described by a transfer function T(ρ, f) depending mainly on distance ρ and frequency f. In the VLF range, only mode one is important at distances larger than about 1000 km. Least attenuation of this mode occurs at about 15 kHz. Therefore, the Earth–ionosphere waveguide behaves like a bandpass filter, selecting this band out of a broadband signal. The 15 kHz signal dominates at distances greater than about 5000 km. For ELF waves (< 3 kHz), ray theory becomes invalid, and only mode theory is appropriate. Here, the zeroth mode begins to dominate and is responsible for the second window at greater distances.
Resonant waves of this zeroth mode can be excited in the Earth–ionosphere waveguide cavity, mainly by the continuing current components of lightning flowing between two return strokes. Their wavelengths are integral fractions of the Earth's circumference, and their resonance frequencies can thus be approximately determined by f_m ≃ mc/(2πa) ≃ 7.5·m Hz (with m = 1, 2, ...; a the Earth's radius and c the speed of light). These resonant modes with their fundamental frequency of f_1 ≃ 7.5 Hz are known as Schumann resonances.
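As a numerical illustration, inserting the Earth's radius a ≈ 6.37 × 10⁶ m and c ≈ 3 × 10⁸ m/s gives

f_1 \simeq \frac{c}{2\pi a} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2\pi \times 6.37\times 10^{6}\ \mathrm{m}} \approx 7.5\ \mathrm{Hz},

close to the observed fundamental Schumann resonance of about 7.8 Hz.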
Monitoring thunderstorm activity with sferics
About 100 lightning strokes per second are generated all over the world excited by thunderstorms located mainly in the continental areas at low and middle latitudes. In order to monitor the thunderstorm activity, sferics are the appropriate means.
Measurements of Schumann resonances at only a few stations around the world can monitor the global lightning activity fairly well. One can apply the dispersive property of the Earth–ionosphere waveguide by measuring the group velocity of a sferic signal at different frequencies together with its direction of arrival. The group time delay difference of neighbouring frequencies in the lower VLF band is directly proportional to the distance of the source. Since the attenuation of VLF waves is smaller for west to east propagation and during the night, thunderstorm activity up to distances of about 10,000 km can be observed for signals arriving from the west during night time conditions. Otherwise, the transmission range is of the order of 5,000 km.
For the regional range (< 1,000 km), the usual way is magnetic direction finding as well as time of arrival measurements of a sferic signal observed simultaneously at several stations. Such measurements presuppose concentrating on one individual impulse. If several pulses are measured simultaneously, interference takes place with a beat frequency equal to the inverse of the average time between the pulses.
Atmospheric noise
The signal-to-noise ratio determines the sensitivity of telecommunication systems (e.g., radio receivers). An analog signal must clearly exceed the noise amplitude in order to be detectable. Atmospheric noise is one of the most important factors limiting the detection of radio signals.
The steady electric discharging currents in a lightning channel cause a series of incoherent impulses in the whole frequency range, the amplitudes of which decrease approximately with the inverse frequency. In the ELF-range, technical noise from 50 to 60 Hz, natural noise from the magnetosphere, etc., dominate. In the VLF-range, there are the coherent impulses from R- and K-strokes, appearing out of the background noise. Beyond about 100 kHz, the noise amplitude becomes more and more incoherent. In addition, technical noise from electric motors, ignition systems of motor cars, etc., is superimposed. Finally, beyond the high frequency band (3–30 MHz) extraterrestrial noise (noise of galactic origin, solar noise) dominates.
The atmospheric noise depends on frequency, location and time of day and year. Worldwide measurements of that noise are documented in CCIR-reports.
See also
1955 Great Plains tornado outbreak
Cluster One, a Pink Floyd track using sferics and dawn chorus as an overture
Footnotes
References
External links
http://www.srh.noaa.gov/oun/wxevents/19550525/stormelectricity.php
Radio in Space and Time - Whistler, Sferics and Tweeks, G.Wiessala in RadioUser 1/2013, UK
Atmospheric electricity
Lightning
Electrical phenomena
Electromagnetism
Space plasmas
Severe weather and convection | Radio atmospheric signal | [
"Physics"
] | 2,400 | [
"Space plasmas",
"Physical phenomena",
"Electromagnetism",
"Atmospheric electricity",
"Astrophysics",
"Electrical phenomena",
"Fundamental interactions",
"Lightning"
] |
11,033,535 | https://en.wikipedia.org/wiki/Less-than%20sign | The less-than sign is a mathematical symbol that denotes an inequality between two values. The widely adopted form of two equal-length strokes connecting in an acute angle at the left, , has been found in documents dated as far back as the 1560s. In mathematical writing, the less-than sign is typically placed between two values being compared and signifies that the first number is less than the second number. Examples of typical usage include and .
Since the development of computer programming languages, the less-than sign and the greater-than sign have been repurposed for a range of uses and operations.
Computing
The less-than sign, <, is an original ASCII character (hex 3C, decimal 60).
Programming
In BASIC, Lisp-family languages, and C-family languages (including Java and C++), comparison operator < means "less than".
In Coldfusion, operator .lt. means "less than".
In Fortran, operator .LT. means "less than"; later versions allow <.
Shell scripts
In Bourne shell (and many other shells), operator -lt means "less than". The less-than sign is used to redirect input from a file. Less-than plus ampersand (<&) is used to redirect from a file descriptor.
Double less-than sign
The double less-than sign, <<, may be used for an approximation of the much-less-than sign (≪) or of the opening guillemet («). ASCII does not encode either of these signs, though they are both included in Unicode.
In Bash, Perl, and Ruby, the operator << followed by a delimiter word (an arbitrary string, but commonly "EOF", denoting "end of file") is used to denote the beginning of a here document.
In C and C++, operator << represents a binary left shift.
In the C++ Standard Library, operator <<, when applied on an output stream, acts as insertion operator and performs an output operation on the stream (see the example after this list).
In Ruby, operator << acts as append operator when used between an array and the value to be appended.
In XPath the << operator returns true if the left operand precedes the right operand in document order; otherwise it returns false.
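The following short C++ program, given only as an illustrative sketch (the variable names and values are arbitrary), shows the two C++ uses of << listed above: the binary left shift and the stream insertion operator.

#include <iostream>

int main() {
    unsigned int x = 1;
    unsigned int shifted = x << 4;                 // binary left shift: 1 becomes 16
    std::cout << "shifted = " << shifted << '\n';  // stream insertion: writes "shifted = 16"
    return 0;
}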
Triple less-than sign
In PHP, operator <<<OUTPUT is used to denote the beginning of a heredoc statement (where OUTPUT is an arbitrary name).
In Bash, <<< is used as a "here string", where the following word is expanded and supplied to the command on its standard input, similar to a heredoc.
Less-than sign with equals sign
The less-than sign with the equals sign, <=, may be used for an approximation of the less-than-or-equal-to sign, ≤. ASCII does not have a less-than-or-equal-to sign, but Unicode defines it at code point U+2264.
In BASIC, Lisp-family languages, and C-family languages (including Java and C++), operator <= means "less than or equal to". In Sinclair BASIC it is encoded as a single-byte code point token.
In Prolog, =< means "less than or equal to" (as distinct from the arrow <=).
In Fortran, operators .LE. and <= both mean "less than or equal to".
In Bourne shell and Windows PowerShell, the operator -le means "less than or equal to".
Less-than sign with hyphen-minus
In the R programming language, the less-than sign is used in conjunction with a hyphen-minus to create an arrow (<-), which can be used as the left assignment operator.
Spaceship operator
The less-than sign is used in the spaceship operator.
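In C++20, for example, the spaceship operator <=> performs a three-way comparison; the sketch below is illustrative only (the values are arbitrary, and in languages such as Perl, PHP and Ruby the corresponding operator returns −1, 0 or 1 rather than an ordering object).

#include <compare>
#include <iostream>

int main() {
    int a = 2, b = 3;
    auto result = a <=> b;        // yields a std::strong_ordering value
    if (result < 0)
        std::cout << "a is less than b\n";
    else if (result > 0)
        std::cout << "a is greater than b\n";
    else
        std::cout << "a equals b\n";
    return 0;
}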
HTML
In HTML (and SGML and XML), the less-than sign is used at the beginning of tags. The less-than sign may be included with &lt;. The less-than-or-equal-to sign, ≤, may be included with &le;.
Unicode
Unicode provides various less-than symbols:
The less-than sign may be used as an approximation of the opening angle bracket, ⟨. True angle bracket characters, as required in linguistics notation, are expected in formal texts.
Mathematics
In an inequality, the less-than sign and greater-than sign always "point" to the smaller number. Put another way, the "jaws" (the wider section of the symbol) always face the larger number.
The less-than sign is sometimes used to represent a total order, partial order or preorder. However, a symbol such as ≺ is often used when it would be confusing or not convenient to use <. In mathematical writing using LaTeX, the TeX command is simply <. The Unicode code point is U+003C.
See also
Inequality (mathematics)
Greater-than sign
Relational operator
Much-less-than sign
References
Typographical symbols
Mathematical symbols
Inequalities | Less-than sign | [
"Mathematics"
] | 1,003 | [
"Mathematical theorems",
"Symbols",
"Binary relations",
"Mathematical symbols",
"Mathematical relations",
"Inequalities (mathematics)",
"Typographical symbols",
"Mathematical problems"
] |
11,033,536 | https://en.wikipedia.org/wiki/Greater-than%20sign | The greater-than sign is a mathematical symbol that denotes an inequality between two values. The widely adopted form of two equal-length strokes connecting in an acute angle at the right, , has been found in documents dated as far back as 1631. In mathematical writing, the greater-than sign is typically placed between two values being compared and signifies that the first number is greater than the second number. Examples of typical usage include and . The less-than sign and greater-than sign always "point" to the smaller number. Since the development of computer programming languages, the greater-than sign and the less-than sign have been repurposed for a range of uses and operations.
History
The earliest known use of the symbols < and > is found in Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas (The Analytical Arts Applied to Solving Algebraic Equations) by Thomas Harriot, published posthumously in 1631. The text states that the sign of majority, a > b, indicates that a is greater than b, and that the sign of minority, a < b, indicates that a is less than b.
According to historian Art Johnson, while Harriot was surveying North America, he saw a Native American with a symbol that resembled the greater-than sign, in both backwards and forwards forms. Johnson says it is likely Harriot developed the two symbols from this symbol.
Usage in text markup
Angle brackets
The greater-than sign is sometimes used for an approximation of the closing angle bracket, ⟩. The proper Unicode character is the right angle bracket, U+27E9. ASCII does not have angular brackets.
HTML
In HTML (and SGML and XML), the greater-than sign is used at the end of tags. The greater-than sign may be included with &gt;, while &ge; produces the greater-than or equal to sign.
E-mail and Markdown
In some early e-mail systems, the greater-than sign was used to denote quotations.
The sign is also used to denote quotations in Markdown.
Usage in programming
The 'greater-than sign' is encoded in ASCII as character hex 3E, decimal 62. The Unicode code point is U+003E, inherited from ASCII.
For use with HTML, the mnemonics &gt; or &#62; may also be used.
Programming language
BASIC and C-family languages (including Java and C++) use the comparison operator > to mean "greater than". In Lisp-family languages, > is a function used to mean "greater than".
In Coldfusion and Fortran, operator .gt. means "greater than".
Double greater-than sign
>> is used for an approximation of the much-greater-than sign, ≫. ASCII does not have the much greater-than sign.
The double greater-than sign is also used for an approximation of the closing guillemet, ».
In Java, C, and C++, the operator >> is the right-shift operator. In C++ it is also used to get input from a stream, similar to C input functions such as scanf (see the example after this list).
In Haskell, the function >> is a monadic operator. It is used for sequentially composing two actions, discarding any value produced by the first. In that regard, it is like the statement sequencing operator in imperative languages, such as the semicolon in C.
In XPath the >> operator returns true if the left operand follows the right operand in document order; otherwise it returns false.
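A minimal C++ sketch of the right-shift and stream-extraction uses described in this list (variable names and values are arbitrary illustrations):

#include <iostream>
#include <sstream>

int main() {
    unsigned int x = 16;
    unsigned int shifted = x >> 4;   // binary right shift: 16 becomes 1
    std::istringstream input("42");
    int value = 0;
    input >> value;                  // stream extraction: reads the integer 42
    std::cout << shifted << ' ' << value << '\n';
    return 0;
}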
Triple greater-than sign
>>> is the unsigned-right-shift operator in JavaScript. Three greater-than signs form the distinctive prompt of the firmware console in MicroVAX, VAXstation, and DEC Alpha computers (known as the SRM console in the latter). This is also the default prompt of the Python interactive shell, often seen for code examples that can be executed interactively in the interpreter:
python
Python 3.9.2 (default, Feb 20 2021, 18:40:11)
[GCC 10.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> print("Hello World")
Hello World
>>>
Greater-than sign with equals sign
>= is sometimes used for an approximation of the greater than or equal to sign, ≥, which was not included in the ASCII repertoire. The sign is, however, provided in Unicode, as U+2265.
In BASIC, Lisp-family languages, and C-family languages (including Java and C++), operator >= means "greater than or equal to". In Sinclair BASIC it is encoded as a single-byte code point token.
In Fortran, operator .GE. means "greater than or equal to".
In Bourne shell and Windows PowerShell, the operator -ge means "greater than or equal to".
In Lua, operator >= means "greater than or equal to" and is used like this:
x = math.random(1,9)
y = 5
if x >= y then
print("x("..x..") is more or equal to y("..y..")")
else
print("x("..x..") is less than y("..y..")")
end
expected output:
or
Hyphen-minus with greater-than sign
-> is used in some programming languages (for example F#) to create an arrow. Arrows like these could also be used in text where other arrow symbols are unavailable. In the R programming language, this can be used as the right assignment operator. In C, C++, and PHP, this is used as a member access operator. In Swift and Python, it is used to indicate the return value type when defining a function (i.e., def f(x) -> int: in Python).
Shell scripts
In Bourne shell (and many other shells), the greater-than sign is used to redirect output to a file. Greater-than plus ampersand (>&) is used to redirect to a file descriptor.
Spaceship operator
The greater-than sign is used in the 'spaceship operator', <=>.
ECMAScript and C#
In ECMAScript and C#, the greater-than sign is used in lambda function expressions.
In ECMAScript:
const square = x => x * x;
console.log(square(5)); // 25
In C#:
Func<int, int> square = x => x * x;
Console.WriteLine(square(5)); // 25
PHP
In PHP, the greater-than sign is used in conjunction with the less-than sign as a not equal to operator. It is the same as the != operator.
$x = 5;
$y = 3;
$z = 5;
echo $x <> $y; // true
echo $x <> $z; // false
Unicode
Unicode provides various greater-than symbols:
See also
Inequality (mathematics)
Less-than sign
Relational operator
Mathematical operators and symbols in Unicode
Guillemet
Material conditional
References
Typographical symbols
Mathematical symbols
Inequalities | Greater-than sign | [
"Mathematics"
] | 1,450 | [
"Mathematical theorems",
"Symbols",
"Binary relations",
"Mathematical symbols",
"Mathematical relations",
"Inequalities (mathematics)",
"Typographical symbols",
"Mathematical problems"
] |
11,033,818 | https://en.wikipedia.org/wiki/Danskin%27s%20theorem | In convex analysis, Danskin's theorem is a theorem which provides information about the derivatives of a function of the form
The theorem has applications in optimization, where it sometimes is used to solve minimax problems. The original theorem given by J. M. Danskin in his 1967 monograph provides a formula for the directional derivative of the maximum of a (not necessarily convex) directionally differentiable function.
An extension to more general conditions was proven 1971 by Dimitri Bertsekas.
Statement
The following version is proven in "Nonlinear programming" (1991). Suppose φ(x, z) is a continuous function of two arguments,
φ : ℝⁿ × Z → ℝ,
where Z ⊂ ℝᵐ is a compact set.
Under these conditions, Danskin's theorem provides conclusions regarding the convexity and differentiability of the function
f(x) = max_{z∈Z} φ(x, z).
To state these results, we define the set of maximizing points Z₀(x) as
Z₀(x) = {z̄ : φ(x, z̄) = max_{z∈Z} φ(x, z)}.
Danskin's theorem then provides the following results.
Convexity
f(x) is convex if φ(x, z) is convex in x for every z ∈ Z.
Directional semi-differential
The semi-differential of f in the direction y, denoted ∂_y f(x), is given by ∂_y f(x) = max_{z ∈ Z₀(x)} Dφ(x, z; y), where Dφ(x, z; y) is the directional derivative of the function φ(·, z) at x in the direction y.
Derivative
f is differentiable at x if Z₀(x) consists of a single element z̄. In this case, the derivative of f (or the gradient of f if x is a vector) is given by ∂f(x)/∂x = ∂φ(x, z̄)/∂x.
Example of no directional derivative
In the statement of Danskin's theorem, it is important that only semi-differentiability of f can be concluded, and not the existence of a directional derivative, as the following simple example shows.
Set , we get which is semi-differentiable with but has not a directional derivative at .
Subdifferential
If φ(x, z) is differentiable with respect to x for all z ∈ Z, and if ∂φ/∂x is continuous with respect to z for all x, then the subdifferential of f(x) is given by ∂f(x) = conv{∂φ(x, z)/∂x : z ∈ Z₀(x)}, where conv indicates the convex hull operation.
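As a simple illustration (not taken from the sources above): take Z = {−1, 1} and φ(x, z) = zx, so that f(x) = max(x, −x) = |x|. Then

Z_0(x) = \{\operatorname{sign}(x)\}, \quad \partial f(x) = \{\operatorname{sign}(x)\} \text{ for } x \neq 0,
\qquad
Z_0(0) = \{-1, 1\}, \quad \partial f(0) = \operatorname{conv}\{-1, 1\} = [-1, 1],

recovering the well-known subdifferential of the absolute value function at the origin.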
Extension
The 1971 Ph.D. thesis by Dimitri P. Bertsekas (Proposition A.22) proves a more general result, which does not require that φ(·, z) is differentiable. Instead it assumes that φ(·, z) is an extended real-valued closed proper convex function for each z in the compact set Z, that the interior of the effective domain of f is nonempty, and that φ is continuous on the set int(dom f) × Z. Then for all x in int(dom f), the subdifferential of f at x is given by
∂f(x) = conv{∂φ(x, z) : z ∈ Z₀(x)},
where ∂φ(x, z) is the subdifferential of φ(·, z) at x for any z in Z₀(x).
See also
Maximum theorem
Envelope theorem
Hotelling's lemma
References
Convex optimization
Theorems in analysis | Danskin's theorem | [
"Mathematics"
] | 460 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical problems",
"Mathematical theorems"
] |
11,034,412 | https://en.wikipedia.org/wiki/Ferrallitisation | Ferrallitisation is the process in which rock is changed into a soil consisting of clay (kaolinite) and sesquioxides, in the form of hydrated oxides of iron and aluminium. In humid tropical areas, with consistently high temperatures and rainfall for all or most of the year, chemical weathering rapidly breaks down the rock. This at first produces clays which later also break down to form silica. The silica is removed by leaching and the sesquioxides of iron and aluminium remain, giving the characteristic red colour of many tropical soils. Ferrallitisation is the reverse of podsolisation, where silica remains and the iron and aluminum are removed. In tropical rain forests with rain throughout the year, ferrallitic soils develop. In savanna areas, with altering dry and wet climates, ferruginous soils occur.
Further reading
Biogeochemical cycle
Land management
Natural resources
Soil science | Ferrallitisation | [
"Chemistry"
] | 194 | [
"Biogeochemical cycle",
"Biogeochemistry",
"Geochemistry stubs"
] |
11,034,903 | https://en.wikipedia.org/wiki/Formula%20One%20engines | This article gives an outline of Formula One engines, also called Formula One power units since the hybrid era starting in 2014. Since its inception in 1947, Formula One has used a variety of engine regulations. Formulae limiting engine capacity had been used in Grand Prix racing on a regular basis since after World War I. The engine formulae are divided according to era.
Characteristics
Formula One currently uses 1.6 litre four-stroke turbocharged 90 degree V6 double-overhead camshaft (DOHC) reciprocating engines. They were introduced in 2014 and have been developed over the subsequent seasons. From the 2023 season, most specifications of Formula One engines, including the software used to control them and the maximum per-engine price to F1 teams of €15,000,000, have been frozen until the end of 2025, when the completely new 2026 specification will come into effect.
High revolutions
The history of F1 engines has always been a quest for more power, and the enormous power a Formula One engine produces had been generated by operating at a very high rotational speed, reaching over 20,000 revolutions per minute (rpm) during the 2004-2005 seasons. This is because an engine, theoretically, produces double the power when operated twice as fast if combustion (thermal) efficiency and energy loss remain the same. High-revving engines won races no matter how much fuel they consumed and how much waste heat they generated, as long as they produced more power than the competition. However, with the skyrocketing cost of exotic materials and production methods enabling the high-speed operation, and the realisation that such advancements in technology would likely never be applied to production vehicles (because the resultant product is very inefficient), it was decided to limit the maximum rotational speed (rev) to 19,000 rpm in 2007. The maximum rev was further limited to 18,000 rpm in 2009, and to 15,000 rpm for the 2014-2021 seasons.
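The relation behind this claim can be written explicitly for a four-stroke engine, in which each cylinder fires once every two revolutions: with brake mean effective pressure p_me, total displacement V_d and crankshaft speed N in rpm,

P = p_{me} \, V_d \, \frac{N}{120}.

As an illustrative check with round numbers (not official figures), 14 bar, 3.0 litres and 15,000 rpm give roughly 525 kW, or about 700 hp, in line with the naturally-aspirated outputs quoted later in this article; doubling N at constant p_me doubles P.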
Still, the high speed operation of F1 engines contrasts with road car engines of a similar size, which typically operate at less than 6,000 rpm.
Valve springs
Until the mid-1980s Formula One engines were limited to around 12,000 rpm due to the traditional metal springs used to close the valves. The speed required to close the valves at a higher rpm called for ever stiffer springs, which increased the power required to drive the camshaft to open the valves, to the point where the loss nearly offset the power gain through the increase in rpm. They were replaced by pneumatic valve springs introduced by Renault in 1986, which inherently have a rising rate (progressive rate) that allowed them to have an extremely high spring rate at larger valve strokes without much increasing the driving power requirements at smaller strokes, thus lowering the overall power loss. Since the 1990s, all Formula One engine manufacturers have used pneumatic valve springs with pressurised air.
Piston speed
In addition to the use of pneumatic valve springs, a Formula One engine's high rpm output has been made possible due to advances in metallurgy and design, allowing lighter pistons and connecting rods to withstand the accelerations necessary to attain such high speeds. Improved design also allows narrower connecting rod ends and so narrower main bearings. This permits higher rpm with less bearing-damaging heat build-up. For each stroke, the piston goes from a virtual stop to almost twice the mean speed (approximately 40 m/s), then back to zero. This occurs once for each of the four strokes in the cycle: one Intake (down), one Compression (up), one Power (ignition-down), one Exhaust (up). Maximum piston acceleration occurs at top dead center (TDC) and is in the region of 95,000 m/s2, about 9,700 times standard gravity (9,700 G).
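These figures follow from the standard slider-crank approximations. With stroke s, crankshaft speed N in rpm, angular speed ω = 2πN/60 and connecting-rod length l,

\bar{v}_{\mathrm{piston}} = \frac{2 s N}{60}, \qquad a_{\mathrm{TDC}} \approx \omega^{2} \, \frac{s}{2} \left( 1 + \frac{s}{2l} \right).

With an illustrative stroke of about 40 mm at 19,000 rpm (values chosen only for the estimate, not quoted specifications), these give a mean piston speed of roughly 25 m/s and a peak acceleration on the order of 10⁵ m/s², consistent with the figures above.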
To lower the maximum piston/conrod acceleration, Formula One cars use short-stroke, multi-cylinder engines, which result in a lower average piston speed for a given displacement. After 16-cylinder engines had appeared, the number of cylinders was limited to twelve in 1989, ten in 2000, eight in 2006 and six in 2014. These regulation changes made higher-speed designs more difficult and less efficient. To operate at high engine speeds under such limits, the stroke must be short to prevent catastrophic failure, usually of the connecting rod, which is under very large stresses. Having a short stroke means a relatively large bore is required to reach a given displacement. This results in less efficient combustion, due mostly to the flame front having to travel a long distance (for a given volume) across an ever thinner, disc-shaped combustion chamber (larger diameter with less height) that deviates far from the ideal spherical shape with the spark plug tip at its centre.
Efficiency
Due to the higher speed operation and the tighter restriction on the number of cylinders, the efficiency of a naturally aspirated Formula One engine did not improve much after the 1967 Ford Cosworth DFV, and the mean effective pressure stayed at around 14 bar (1.4 MPa) for a long time.
From the 2014 season, a new concept of limiting the maximum fuel flow rate was introduced, which limits the power if energy loss and air/fuel ratio are constant. While the bore and stroke figures are now fixed by the rules, this regulation promoted competition to improve powertrain efficiency. As energy loss increases nearly exponentially with engine speed, the rev limit became meaningless, so it was lifted in 2022. Currently, F1 engines rev up to about 13,000 rpm, while combustion has improved to the point that BMEP reaches about 40 bar and beyond, using lean and rapid burn techniques enabling λ > 1 (an average air/fuel ratio much leaner than 14.7:1 by mass) and very high mechanical and effective compression ratios.
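The link between the fuel-flow cap and efficiency can be made explicit. With fuel mass flow ṁ, lower heating value Q_LHV (roughly 44 MJ/kg for gasoline-type fuel, an assumed typical value) and overall thermal efficiency η, the crankshaft power is bounded by

P = \eta \, \dot{m} \, Q_{\mathrm{LHV}}.

At the 100 kg/h limit this corresponds to a fuel power of roughly 1.2 MW, so each additional percentage point of thermal efficiency is worth on the order of 12 kW.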
In addition, energy recovery systems harvesting exhaust heat (MGU-H) and braking energy (MGU-K) are allowed to further improve efficiency. The MGU-H is an electric motor/generator on the common shaft between the exhaust turbine and intake compressor of the turbocharger, while the MGU-K is an electric motor/generator driven by (or driving) the crankshaft at a fixed ratio.
Together with improvements in these energy recovery systems, F1 engines have increased power using the same amount of fuel in recent years. For example, Honda's RA621H engine of the 2021 season generated considerably more maximum power than the RA615H of the 2015 season at the same 100 kg/h fuel flow rate.
With the hugely improved efficiency of the combustion, mechanicals/software and turbocharger, F1 engines are generating much less heat and noise compared to the levels in 2014, and Stefano Domenicali said the 2026 regulation will impose intentionally louder exhaust sound to please the fans.
History
Formula One engines have come through a variety of regulations, manufacturers and configurations through the years. It is imperative to understand the distinction among the terms "Grand Prix", "World Championship" and "Formula One" to come to grips with the history.
Car racing in various forms began almost immediately after the invention of the automobile, and many of the first organised car racing events were held in Europe before 1900. There had been the tradition of calling a particular race in an event with the name of the award given to the winner in France and some other countries, as traditional racing events often had multiple races and classes, like Men, Women, 100m, 1500m, breast-stroke, etc. In the case of the car race held in Pau, France in 1900, there were no class divisions, and no prize on record was given to the winner, René de Knyff driving a Panhard et Revassor (2.1L, 4 cylinder engine called the 'Phoenix' jointly developed with Gottlieb Daimler in Germany, about 20 hp), who became the commissioner of the CSI later. In 1901, the event was named "Semaine de Pau (Week in Pau)" held at Circuit du Sud-Ouest, and the prizes awarded to the winners were "Grand Prix de Pau (Grand Prize of Pau)" for the "650 kg or heavier" class, "Grand Prix du Palais d'Hiver (Grand Prize of the Winter Palace)" for "400 - 650 kg" class, and "Second Grand Prix du Palais d'Hiver" for the "under 400 kg" class. This event is significant not only because it called the prizes Grand Prix, but also because it was one of the very first automobile race events, including the fastest class of cars, held on a closed circuit (the 1900 race was on an open road).
During and after World War I (1914 - 1918), it became obvious that the size of engines (and if they were supercharged), not the size and weight of cars, primarily determined how fast they could run. Also, wealthy people started enjoying racing the smaller and more evenly-matched Voiturette cars more than the no-limits "Voiture" 5-11L (mostly 4-cylinder) behemoths that contested the fastest class. In 1926, then-current Voiturette regulation of "up to 1,500 cc, supercharged" was adopted to the formerly-unlimited Voiture class of Grand Prix races in France, and Voiturette class was re-defined as "up to 1,100 cc, no supercharger".
Formula One was born as the first internationally unified regulation to define a class of racing cars in 1946 to be effective 1947. It was defined by Commission Sportive Internationale (CSI), the sporting branch of Fédération Internationale de l'Automobile (FIA), reflecting the Voiture regulation of "up to 1,500 cc supercharged, or 4,500 cc without supercharger". After Formula One was more or less 'ratified' or accepted by other countries, Formula Two was defined in 1947 as "up to 500 cc supercharged, or 2,000 cc without".
In contrast to the pre-existed European Drivers' Championship, Formula One events were meant to be competition among the countries. Each car, or team, represented a country in this 'international' race, with the cars painted in the "national colours", like red for Italy, green for the UK, silver for Germany, and blue for France. The World Championship for Drivers was defined by the CSI in 1949 for 1950 and onwards to honour the drivers, instead of the countries they represented. The World Championship for Constructors started in 1958, created partly to resolve the then-common dispute between a winning driver and his team on the ownership of the Grand Prix trophy. These championships had a longer-term effect of downplaying the country representation.
Over the years, Formula One added more and more regulations, not only on engines but chassis, tyres, fuel, inspections, championship points, penalties, safety measures, cost control, licensing, distribution of profits, how the qualifying and races must be governed and run, etc., etc. Today, the vast regulations on Power Unit are a very small part of what defines Formula One, which regulates even the number of Summer vacation days the constructor factories must observe.
1947–1953
This era used pre-war voiturette engine regulations, with 4.5 L atmospheric and 1.5 L supercharged engines. The Indianapolis 500 (which was a round of the World Drivers' Championship from 1950 onwards) used pre-war Grand Prix regulations, with 4.5 L atmospheric and 3.0 L supercharged engines. The power range was up to , though the BRM Type 15 of 1953 reportedly achieved with a 1.5 L supercharged engine.
In 1952 and 1953, the World Drivers' Championship was run to Formula Two regulations, but the existing Formula One regulations remained in force and a number of Formula One races were still held in those years.
1954–1960
Naturally-aspirated engine size was reduced to 2.5 L and supercharged cars were limited to 750 cc. No constructor built a supercharged engine for the World Championship. The Indianapolis 500 continued to use old pre-war regulations. The power range was up to .
1961–1965
Introduced in 1961 amidst some criticism, the new reduced engine 1.5 L formula took control of F1 just as every team and manufacturer switched from front to mid-engined cars. Although these were initially underpowered, by 1965 average power had increased by nearly 50% and lap times were faster than in 1960. The old 2.5 L formula had been retained for International Formula racing, but this did not achieve much success until the introduction of the Tasman Series in Australia and New Zealand during the winter season, leaving the 1.5 L cars as the fastest single seaters in Europe during this time. The power range was between and .
1966–1986
In 1966, with sports cars capable of outrunning Formula One cars thanks to much larger and more powerful engines, the FIA increased engine capacity to 3.0 L atmospheric and 1.5 L compressed engines. Although a few manufacturers had been aiming for larger engines, the transition was not smooth and 1966 was a transitional year, with 2.0 L versions of the BRM and Coventry-Climax V8 engines being used by several entrants. The appearance of the standard-produced Cosworth DFV in 1967 made it possible for small manufacturers to join the series with a chassis designed in-house. Compression devices were allowed for the first time since 1960, but it was not until 1977 that a company actually had the finance and interest of building one, when Renault debuted their new Gordini V6 turbocharged engine at that year's British Grand Prix at Silverstone. This engine had a considerable power advantage over the naturally-aspirated Cosworth DFV, Ferrari and Alfa Romeo engines.
By the start of the 1980s, Renault had proved that turbocharging was the way to go in order to stay competitive in Formula One, particularly at high-altitude circuits like Kyalami in South Africa and Interlagos in Brazil. Ferrari introduced their all-new V6 turbocharged engine in 1981, before Brabham owner Bernie Ecclestone managed to persuade BMW to manufacture straight-4 turbos for his team from 1982 onwards. In 1983, Alfa Romeo introduced a V8 turbo, and by the end of that year Honda and Porsche had introduced their own V6 turbos (the latter badged as TAG in deference to the company that provided the funding). Cosworth and the Italian Motori Moderni concern also manufactured V6 turbos during the 1980s, while Hart Racing Engines manufactured their own straight-4 turbo.
By mid-1985, every Formula One car was running with a turbocharged engine. In 1986, power figures were reaching unprecedented levels, with all engines reaching over during qualifying with unrestricted turbo boost pressures. This was especially seen with the BMW straight-4 turbo, the M12/13, which produced around at 5.5 bar of boost in qualifying trim, but was detuned to produce between in race spec. However, these engines and gearboxes were very unreliable because of the engine's immense power, and would only last about four laps. For the race, the turbocharger's boost was restricted to ensure engine reliability; but the engines still produced during the race.
The power range from 1966 to 1986 was between to , turbos to in race trim, and in qualifying, up to . Following their experiences at Indianapolis, in 1971 Lotus made a few unsuccessful experiments with a Pratt & Whitney turbine fitted to chassis which also had four-wheel-drive.
1987–1988
Following the turbo domination, forced induction was allowed for two seasons before its eventual ban. The FIA regulations limited boost pressure to 4 bar in qualifying in 1987 for the 1.5 L turbos and allowed a larger 3.5 L naturally-aspirated formula. Fuel tank sizes were further reduced to 150 litres for turbo cars to limit the amount of boost used in a race. These seasons were still dominated by turbocharged engines: the Honda RA167E V6 powered Nelson Piquet to the 1987 title in a Williams, which also won the constructors' championship, followed by the TAG-Porsche P01 V6 in the McLaren, then Honda again with the previous RA166E for Lotus, then Ferrari's own 033D V6.
The rest of the grid was powered by the Ford GBA V6 turbo in Benetton, with the only naturally-aspirated engine, the DFV-derived Ford-Cosworth DFZ 3.5 L V8 outputting in Tyrrell, Lola, AGS, March and Coloni. The massively powerful BMW M12/13 inline-four found in the Brabham BT55 tilted almost horizontally, and in upright position under the Megatron brand in Arrows and Ligier, producing at 3.8 bar in race trim, and an incredible at 5.5 bar of boost in qualifying spec. Zakspeed was building its own turbo inline-four, Alfa Romeo was to power the Ligiers with an inline-four but the deal fell through after initial testing had been carried out. Alfa was still represented by its old 890T V8 used by Osella, and Minardi was powered by a Motori Moderni V6.
In , six teams – McLaren, Ferrari, Lotus, Arrows, Osella and Zakspeed – continued with turbocharged engines, now limited to 2.5 bar. Honda's V6 turbo, the RA168E, which produced at 12,300 rpm in qualifying, powered the McLaren MP4/4 with which Ayrton Senna and Alain Prost won fifteen of the sixteen races between them. The Italian Grand Prix was won by Gerhard Berger in the Ferrari F1/87/88C, powered by the team's own V6 turbo, the 033E, with about at 12,000 rpm in qualifying and at 12,000 rpm in races. The Honda turbo also powered Lotus's 100T, while Arrows continued with the Megatron-badged BMW turbo, Osella continued with the Alfa Romeo V8 (now badged as an Osella) and Zakspeed continued with their own straight-4 turbo. All the other teams used naturally aspirated 3.5 L V8 engines: Benetton used the Cosworth DFR, which produced at 11,000 rpm; Williams, March and Ligier used the Judd CV, producing ; and the rest of the grid used the previous year's Cosworth DFZ.
1989–1994
Turbochargers were banned from the 1989 Formula One season, leaving only a naturally aspirated 3.5 L formula. Honda was still dominant with their RA109E 72° V10 giving @ 13,500 rpm on McLaren cars, enabling Prost to win the championship in front of his teammate Senna. Behind were the Renault RS1-powered Williams, a 67° V10 giving @ 12,500 rpm and the Ferrari with its 035/5 65° V12 giving at 13,000 rpm. Behind, the grid was powered mainly by Ford Cosworth DFR V8 giving @ 10,750 rpm except for a few Judd CV V8 in Lotus, Brabham and EuroBrun cars, and two oddballs: the Lamborghini 3512 80° V12 powering Lola, and the Yamaha OX88 75° V8 in Zakspeed cars. Ford started to try its new design, the 75° V8 HBA1 with Benetton.
The 1990 Formula One season was again dominated by Honda in McLarens with the @ 13,500 rpm RA100E powering Ayrton Senna and Gerhard Berger ahead of the @ 12,750 rpm Ferrari Tipo 036 of Alain Prost and Nigel Mansell. Behind them the Ford HBA4 for Benetton and Renault RS2 for Williams with @ 12,800 rpm were leading the pack powered by Ford DFR and Judd CV engines. The exceptions were the Lamborghini 3512 in Lola and Lotus, and the new Judd EV 76° V8 giving @ 12,500 rpm in Leyton House and Brabham cars. The two new contenders were the Life which built for themselves an F35 W12 with three four cylinders banks @ 60°, and Subaru giving Coloni a 1235 flat-12 from Motori Moderni
Honda was still leading the 1991 Formula One season in Senna's McLaren with the @ 13,500–14,500 rpm 60° V12 RA121E, just ahead of the Renault RS3 powered Williams benefiting from @ 12,500–13,000 rpm. Ferrari was behind with its Tipo 037, a new 65° V12 giving @ 13,800 rpm also powering Minardi, just ahead the Ford HBA4/5/6 in Benetton and Jordan cars. Behind, Tyrrell was using the previous Honda RA109E, Judd introduced its new GV with Dallara leaving the previous EV to Lotus, Yamaha were giving its OX99 70° V12 to Brabham, Lamborghini engines were used by Modena and Ligier. Ilmor introduced its LH10, a @ 13,000 rpm V10 which eventually became the Mercedes with Leyton House and Porsche sourced a little successful 3512 V12 to Footwork Arrows; the rest of the field was Ford DFR powered.
In 1992, the Renault engines became dominant, even more so following the departure from the sport of Honda at the end of 1992. The 3.5 L Renault V10 engines powering the Williams F1 team produced a power output between @ 13,000–14,300 rpm toward the end of the 3.5 L naturally-aspirated era, between 1992 and 1994. Renault-engined cars won the last three consecutive world constructors' championships of the 3.5 L formula era with Williams (1992–1994).
The Peugeot A4 V10, used by the McLaren Formula One team in 1994, initially developed @ 14,250 rpm. It was later further developed into the A6, which produced even more power, developing @ 14,500 rpm.
The EC Zetec-R V8, which powered the championship-winning Benetton team and Michael Schumacher in 1994, produced between @ 14,500 rpm.
By the end of the 1994 season, Ferrari's Tipo 043 V12 was putting out around @ 15,800 rpm, which is to date the most-powerful naturally-aspirated V12 engine ever used in Formula One. This was also the most powerful engine of 3.5-litre engine regulation era, before a reduction in engine capacity to 3 litres in 1995.
1995–2005
This era used a 3.0 L formula, with the power range varying (depending on engine tuning) between and , between 13,000 rpm and 20,000 rpm, and from eight to twelve cylinders. Despite engine displacement being reduced from 3.5 L, power figures and RPMs still managed to climb. Renault was the initial dominant engine supplier from 1995 until 1997, winning the first three world championships with Williams and Benetton in this era. The championship-winning 1995 Benetton B195 produced a power output of @ 15,200 rpm, and the 1996 championship-winning Williams FW18 produced @ 16,000 rpm; both from a shared Renault RS9 3.0 L V10 engine. The 1997 championship-winning FW19 produced between @ 16,000 rpm, from its Renault RS9B 3.0 L V10. Ferrari's last V12 engine, the Tipo 044/1, was used in . The engine's design was largely influenced by major regulation changes imposed by the FIA after the dreadful events during the year before: the V12 engine was reduced from 3.5 to 3.0 litres. The 3.0-litre engine produced around 700 hp (522 kW) 17,000 rpm in race trim; but was reportedly capable of producing up to 760 hp (567 kW) in its highest state of tune for qualification mode. Between 1995 and 2000, cars using this 3.0 L engine formula, imposed by the FIA, produced a constant power range (depending on engine type and tuning), varying between 600 hp and 815 hp. Most Formula One cars during the season comfortably produced a consistent power output of between , depending on whether a V8 or V10 engine configuration was used. From 1998 to 2000 it was Mercedes' power that ruled, giving Mika Häkkinen two world championships. The 1999 McLaren MP4/14 produced between 785 and 810 hp @ 17,000 rpm. Ferrari gradually improved their engine. In , they changed from their traditional V12 engine to a smaller and lighter V10 engine. They preferred reliability to power, losing out to Mercedes in terms of outright power initially. Ferrari's first V10 engine, in 1996, produced @ 15,550 rpm, down on power from their most powerful 3.5 L V12 (in 1994), which produced over @ 15,800 rpm, but up on power from their last 3.0 L V12 (in 1995), which produced @ 17,000 rpm. At the 1998 Japanese GP, Ferrari's 047D engine spec was said to produce over , and from 2000 onward, they were never short of power or reliability. To keep costs down, the 3.0 L V10 engine configuration was made fully mandatory for all teams in 2000 so that engine builders would not develop and experiment with other configurations. The V10 configuration had been the most popular since the banning of turbocharged engines in 1989, and no other configuration had been used since 1998.
BMW started supplying engines to Williams from 2000. The engine was very reliable in its first season, though slightly short of power compared to the Ferrari and Mercedes units. The BMW E41-powered Williams FW22 produced around 810 hp @ 17,500 rpm during the 2000 season. BMW pressed ahead with engine development: the P81, used during the 2001 season, was able to hit 17,810 rpm. Reliability, however, was a major issue, with several engine failures during the season.
The BMW P82, the engine used by the BMW WilliamsF1 Team in 2002, hit a peak speed of 19,050 rpm in its final evolutionary stage. It was also the first engine of the 3.0-litre V10 era to break through the 19,000 rpm barrier, during qualifying for the 2002 Italian Grand Prix. BMW's P83 engine, used in the 2003 season, managed 19,200 rpm and cleared the mark at around 940 bhp, while weighing less than . Honda's RA003E V10 also cleared the mark at the 2003 Canadian Grand Prix.
In 2005, the 3.0 L V10 engines were permitted no more than five valves per cylinder. The FIA also introduced new regulations limiting each car to one engine per two Grand Prix weekends, putting the emphasis on increased reliability. In spite of this, power outputs continued to rise. Mercedes engines produced about in this season. Cosworth, Mercedes, Renault, and Ferrari engines all produced around to @ 19,000 rpm. Honda had over . The BMW engine made over . Toyota engines had over , according to Toyota Motorsport's executive vice president, Yoshiaki Kinoshita. However, for reliability and longevity purposes, this power figure may have been detuned to around for races.
2006–2013
For 2006, the engines had to be 90° V8s of 2.4 litres maximum capacity with a circular bore of maximum, which implies a stroke at maximum bore. The engines had to have two inlet and two exhaust valves per cylinder, be naturally aspirated and have a minimum weight. The previous year's engines, fitted with a rev limiter, were permitted for 2006 and 2007 for teams that were unable to acquire a V8 engine, with Scuderia Toro Rosso using a Cosworth V10 after Red Bull's takeover of the former Minardi team did not include the new engines. The 2006 season saw the highest engine speeds in the history of Formula One, at well over 20,000 rpm, before a mandatory 19,000 rpm rev limiter was implemented for all competitors in 2007. Cosworth was able to achieve just over 20,000 rpm with their V8, and Renault around 20,500 rpm. Honda achieved similar speeds, albeit only on the dynamometer.
Pre-cooling air before it enters the cylinders, injection of any substance other than air and fuel into the cylinders, variable-geometry intake and exhaust systems, and variable valve timing were forbidden. Each cylinder could have only one fuel injector and a single spark plug. Separate starting devices were used to start engines in the pits and on the grid. The crankcase and cylinder block had to be made of cast or wrought aluminium alloys. The crankshaft and camshafts had to be made from an iron alloy, pistons from an aluminium alloy, and valves from alloys based on iron, nickel, cobalt or titanium. These restrictions were in place to reduce engine development costs.
The reduction in capacity was designed to give a power reduction of around 20% from the three-litre engines, to reduce the increasing speeds of Formula One cars. Despite this, in many cases the performance of the car improved. In 2006 Toyota F1 announced an approximate output at 18,000 rpm for its new RVX-06 engine, but real figures are of course difficult to obtain. Most cars from this period (2006–2008) produced a regular power output of approximately between 720 and 800 hp @ 19,000 rpm (over 20,000 rpm for the season).
The engine specification was frozen in 2007 to keep development costs down. The engines used in the 2006 Japanese Grand Prix were carried over for the 2007 and 2008 seasons and were limited to 19,000 rpm. In 2009 the limit was reduced to 18,000 rpm, with each driver allowed a maximum of eight engines over the season. Any driver needing an additional engine was penalised 10 places on the starting grid for the first race at which the extra engine was used. This increased the importance of reliability, although the effect was only seen towards the end of the season. Certain design changes intended to improve engine reliability could be carried out with permission from the FIA. This led some engine manufacturers, notably Ferrari and Mercedes, to exploit this allowance by making design changes that not only improved reliability but also boosted power output as a side effect. As the Mercedes engine proved to be the strongest, the FIA allowed re-equalisation of the engines so that other manufacturers could match its power.
2009 saw the exit of Honda from Formula One. The team was acquired by Ross Brawn, creating Brawn GP and the BGP 001. With the absence of the Honda engine, Brawn GP retrofitted the Mercedes engine to the BGP 001 chassis. The newly branded team won both the Constructors' Championship and the Drivers' Championship from better-known and better-established contenders Ferrari, McLaren-Mercedes, and Renault.
Cosworth, absent since the 2006 season, returned in 2010. New teams Lotus Racing, HRT, and Virgin Racing, along with the established Williams, used this engine. The season also saw the withdrawal of the BMW and Toyota engines, as the car companies withdrew from Formula One due to the Great Recession.
In 2009, constructors were allowed to use kinetic energy recovery systems (KERS), also called regenerative brakes. Energy can either be stored as mechanical energy (as in a flywheel) or as electrical energy (as in a battery or supercapacitor), with a maximum power of 81 hp (60 kW; 82 PS) deployed by an electric motor, for a little over 6 seconds per lap. Four teams used it at some point in the season: Ferrari, Renault, BMW, and McLaren.
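As a rough sanity check of those figures, the energy deployed per lap follows directly from power multiplied by deployment time. The sketch below assumes the 60 kW maximum and roughly 6.7 seconds of deployment; the variable names are illustrative only.

```python
# Back-of-the-envelope check of the 2009 KERS energy budget
# (assumed figures: 60 kW deployed for roughly 6.7 s per lap).
power_w = 60_000       # maximum KERS power in watts (60 kW)
deploy_time_s = 6.7    # approximate deployment time per lap in seconds

energy_per_lap_kj = power_w * deploy_time_s / 1000
print(f"Energy deployed per lap: {energy_per_lap_kj:.0f} kJ")  # about 400 kJ
```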
Although KERS was still legal in F1 in the 2010 season, all the teams agreed not to use it. KERS returned for the 2011 season, when only three teams elected not to use it. For the 2012 season, only Marussia and HRT raced without KERS, and in 2013 all teams on the grid had KERS. From 2010 to 2013, cars had a typical power output of 700–800 hp, averaging around 750 hp @ 18,000 rpm.
2014–2021
The FIA announced a change from the 2.4-litre V8, introducing 1.6-litre V6 hybrid engines (more than one power source) for the season. The new regulations allow kinetic and heat energy recovery systems. Forced induction was now allowed – either turbochargers, which last appeared in , or superchargers – with all constructors opting to use a turbocharger. Instead of limiting the boost level, the regulations introduced a fuel flow restriction of a maximum of 100 kg of petrol per hour. The engines sounded very different from the previous formula, due to the lower rev limit (15,000 rpm) and the turbocharger.
Under the new formula, the turbocharged engines have their efficiency improved through turbo-compounding, recovering energy from exhaust gases. The original proposal for four-cylinder turbocharged engines was not welcomed by the racing teams, in particular Ferrari. Adrian Newey stated during the 2011 European Grand Prix that the change to a V6 enables teams to carry the engine as a stressed member, whereas an inline-4 would have required a space frame. A compromise was reached, allowing V6 forced-induction engines instead. The engines rarely exceed 12,000 rpm during qualifying and the race, due to the new fuel flow restrictions.
Energy recovery systems such as KERS had a boost of and 2 megajoules per lap. KERS was renamed Motor Generator Unit–Kinetic (). Heat energy recovery systems were also allowed, under the name Motor Generator Unit–Heat ().
The 2015 season was an improvement on 2014, adding about 30–50 hp (20–40 kW) to most engines, the Mercedes engine being the most powerful with 870 hp (649 kW). In 2019, Renault's engine was claimed to have hit 1,000 hp in qualifying trim.
Of the previous manufacturers, only Mercedes, Ferrari and Renault produced engines to the new formula in 2014, whereas Cosworth stopped supplying engines. Honda returned as an engine manufacturer in 2015, with McLaren switching to Honda power after using the Mercedes engine in 2014. In 2019, Red Bull switched from using a Renault engine to Honda power. Honda supplied both Red Bull and AlphaTauri. Honda withdrew as a power unit supplier at the end of , with Red Bull taking over the project and producing the engine in-house.
2022–2025
In 2017, the FIA began negotiations with existing constructors and potential new manufacturers over the next generation of engines with a projected introduction date of but delayed to due to the effects of the COVID-19 pandemic. The initial proposal was designed to simplify engine designs, cut costs, promote new entries and address criticisms directed at the 2014 generation of engines. It called for the 1.6 L V6 configuration to be retained, but abandoned the complex Motor Generator Unit–Heat () system. The Motor Generator Unit–Kinetic () would be more powerful, with a greater emphasis on driver deployment and a more flexible introduction to allow for tactical use. The proposal also called for the introduction of standardised components and design parameters to make components produced by all manufacturers compatible with one another in a system dubbed "plug in and play". A further proposal to allow four-wheel drive cars was also made, with the front axle driven by an unit—as opposed to the traditional driveshaft—that functioned independently of the providing power to the rear axle, mirroring the system developed by Porsche for the 919 Hybrid race car.
However, mostly because no new engine supplier applied for F1 entry in 2021 and 2022, the abolition of the MGU-H, a more powerful MGU-K and a four-wheel-drive system were all shelved, with the possibility of their re-introduction for 2026. Instead, the teams and the FIA agreed to a radical change in body/chassis aerodynamics to promote closer on-track battles. They further agreed to an increase in the fuel's alcohol content from 5.75% to 10%, and to implement a freeze on power unit design for 2022–2025, with the internal combustion engine (ICE), turbocharger and MGU-H frozen on March 1 and the energy store, MGU-K and control electronics frozen on September 1 of the 2022 season. Honda, the outgoing engine supplier in 2021, was keen to keep the MGU-H, and Red Bull, who took over the engine production project, backed that opinion. The 4WD system was planned to be based on the Porsche 919 Hybrid's system, but Porsche ultimately did not become an F1 engine supplier for 2021–2022.
2026 onwards
New engine regulations will be introduced from the 2026 season. These regulations retain the turbocharged 1.6 L V6 internal combustion engine configuration used since 2014. The new power units will produce over , although the power will come from different places. The MGU-H (Motor Generator Unit – Heat) will be banned, while the MGU-K's (Motor Generator Unit – Kinetic) output will increase to – previously the MGU-H and MGU-K produced a combined power output of . The power output of the internal combustion part of the power unit will decrease to from . In addition, fuel flow rates will be measured and limited based on energy, rather than the mass or volume of the fuel itself. Further restrictions on components such as the MGU-K and exhausts are also intended to be imposed from 2027. The new power units are due to run on a fully sustainable fuel being developed by Formula One.
Audi are due to become an engine provider from 2026 onwards. Ford are due to partner with Red Bull Powertrains as Red Bull Ford Powertrains from 2026, after a 20-year absence. Honda, under its subsidiary Honda Racing Corporation, has also been registered by the FIA as a manufacturer for 2026, after officially leaving the sport in 2021. The FIA also confirmed that Ferrari, Mercedes-AMG and Alpine (Renault) were registered as power unit suppliers for 2026. However, on 30 September 2024, owing to a lack of strong results with its power unit during the V6 turbo-hybrid era that began in 2014, Renault announced it would end its engine programme at the conclusion of the 2025 championship and would not make engines for the new 2026 regulations after all.
Engine regulation progression by era
Notes:
Current engine technical specifications
Combustion, construction, operation, power and fuel
Manufacturers: Mercedes-Benz, Renault (including TAG Heuer rebadging until 2018), Ferrari and Red Bull Powertrains (Honda)
Type: Hybrid-powered 4-stroke piston. '4-stroke' may imply Otto-cycle, but it is not required. Atkinson/Miller cycle allowed.
Configuration: V6 single hybrid turbocharger engine
V-angle: 90° cylinder angle
Displacement:
Bore:
Stroke:
Compression ratio: Max 18:1
Valvetrain: DOHC, 24-valve (four valves per cylinder)
Fuel: Minimum 87 (RON+MON)/2 unleaded petroleum + at least 10% "advanced sustainable" Ethanol
Fuel delivery: Petrol direct injection
Maximum fuel injection pressure:
Number of fuel injectors: Max 1 per cylinder.
Fuel flow rate limit: (0.009 × rpm) + 5.5 kg/h, capped at 100 kg/h (see the worked sketch after this list)
Fuel use limit: 110 kg / race
Aspiration: Single-Turbocharger with in-line electric motor/generator (MGU-H)
Power output: About @ 10,500 rpm and higher
Torque: Approx.
Lubrication: Dry sump
Maximum revs: Unlimited (in practice, no engine goes much above 12,000 rpm as efficiency declines)
Engine management: FIA Standard ECU
Max. speed: Approximately (Monza, Baku and Mexico); normal tracks
Mass: Minimum complete
Cooling: Single water pump
Ignition: No more than 5 sparks during Compression and Expansion (Power) cycles
Exhaust systems: Single exhaust with central exit and extra double small exhaust
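The fuel flow limit quoted in the list above is a simple linear ramp with a cap; a minimal sketch is below. The helper name is an assumption, and the 10,500 rpm crossover follows from the formula itself, since 0.009 × 10,500 + 5.5 = 100 kg/h.

```python
def fuel_flow_limit_kg_per_h(rpm: float) -> float:
    """Fuel flow limit from the spec list above: (0.009 x rpm) + 5.5,
    capped at 100 kg/h (the cap is reached at exactly 10,500 rpm)."""
    return min(0.009 * rpm + 5.5, 100.0)

for rpm in (5_000, 10_500, 12_000):
    print(rpm, fuel_flow_limit_kg_per_h(rpm))  # 50.5, 100.0, 100.0
```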
Forced induction
Turbocharger mass: depending on the turbine housing used
Turbocharger rev limit: 125,000 rpm
Pressure charging: Single-stage compressor and exhaust turbine, common-shaft with MGU-H
Turbo boost pressure: Unlimited but typically absolute
Wastegate: Maximum of two pop-off and two wastegate valves, electronic- or pneumatic-controlled
ERS systems
MGU-K RPM: Max 50,000 rpm, fixed driven/drive ratio by/to the crankshaft
MGU-K power: Max
Energy recovered by MGU-K: Max / lap
Energy received by MGU-K: Max / lap from Energy Store, unlimited from MGU-H
MGU-H RPM: Same as the turbocharger speed. Max 125,000 rpm
Energy recovered by MGU-H: Unlimited
Energy released by MGU-H to drive the turbocharger or MGU-K: Unlimited
Notes:
Figures correct as of the 2024 Abu Dhabi Grand Prix
Bold indicates engine manufacturers that have competed in Formula One in the 2024 season.
World Championship Grand Prix wins by engine manufacturer
Most wins in a season
By number
By percentage
Most consecutive wins
See also
List of Formula One engine manufacturers
Notes
References
External links
Formula One Engines In-depth article covering facts, evolution and tech specs of F1 engines 2009
Racecar Engineering F1 Engines
Engines
Automobile engines
1947 introductions | Formula One engines | [
"Technology"
] | 8,488 | [
"Engines",
"Automobile engines"
] |
11,034,989 | https://en.wikipedia.org/wiki/Trigonometric%20moment%20problem | In mathematics, the trigonometric moment problem is formulated as follows: given a sequence $\{c_k\}_{k \ge 0}$, does there exist a distribution function $\mu$ on the interval $[0, 2\pi]$ such that:
$$c_k = \frac{1}{2\pi}\int_0^{2\pi} e^{-ik\theta}\, d\mu(\theta),$$
with $c_{-k} = \overline{c_{k}}$ for $k \ge 1$. In case the sequence is finite, i.e., $\{c_k\}_{k=0}^{n}$ for some finite $n$, it is referred to as the truncated trigonometric moment problem.
An affirmative answer to the problem means that $\{c_k\}$ are the Fourier-Stieltjes coefficients for some (consequently positive) Radon measure $\mu$ on $[0, 2\pi]$.
Characterization
The trigonometric moment problem is solvable, that is, $\{c_k\}$ is a sequence of Fourier coefficients, if and only if the $(n+1) \times (n+1)$ Hermitian Toeplitz matrix
$$T = \begin{pmatrix} c_0 & c_1 & \cdots & c_n \\ c_{-1} & c_0 & \cdots & c_{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ c_{-n} & c_{-n+1} & \cdots & c_0 \end{pmatrix},$$
with $c_{-k} = \overline{c_{k}}$ for $k \ge 1$,
is positive semi-definite.
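The characterization is straightforward to test numerically. The sketch below is a minimal illustration (the function names are invented, not from any reference): it builds the Toeplitz matrix from a finite moment sequence and checks positive semi-definiteness via its eigenvalues.

```python
import numpy as np

def toeplitz_from_moments(c):
    """Hermitian Toeplitz matrix with entries T[j, k] = c_{k-j},
    where c = [c_0, c_1, ..., c_n] and c_{-k} = conj(c_k)."""
    n = len(c) - 1
    T = np.empty((n + 1, n + 1), dtype=complex)
    for j in range(n + 1):
        for k in range(n + 1):
            m = k - j
            T[j, k] = c[m] if m >= 0 else np.conj(c[-m])
    return T

def truncated_problem_solvable(c, tol=1e-12):
    """The truncated problem is solvable iff T is positive semi-definite."""
    return bool(np.all(np.linalg.eigvalsh(toeplitz_from_moments(c)) >= -tol))

# Moments of the normalized Lebesgue measure on [0, 2*pi]: c_0 = 1, c_k = 0 for k >= 1.
print(truncated_problem_solvable([1.0, 0.0, 0.0]))  # True
# No positive measure can have |c_1| > c_0, so this sequence is not solvable.
print(truncated_problem_solvable([1.0, 2.0, 0.0]))  # False
```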
The "only if" part of the claims can be verified by a direct calculation. We sketch an argument for the converse. The positive semidefinite matrix defines a sesquilinear product on , resulting in a Hilbert space
of dimensional at most . The Toeplitz structure of means that a "truncated" shift is a partial isometry on . More specifically, let be the standard basis of . Let and be subspaces generated by the equivalence classes respectively . Define an operator
by
Since
can be extended to a partial isometry acting on all of . Take a minimal unitary extension of , on a possibly larger space (this always exists). According to the spectral theorem, there exists a Borel measure on the unit circle such that for all integer
For , the left hand side is
As such, there is a -atomic measure on , with (i.e. the set is finite), such that
which is equivalent to
for some suitable measure .
Parametrization of solutions
The above discussion shows that the trigonometric moment problem has infinitely many solutions if the Toeplitz matrix is invertible. In that case, the solutions to the problem are in bijective correspondence with minimal unitary extensions of the partial isometry .
See also
Bochner's theorem
Hamburger moment problem
Moment problem
Orthogonal polynomials on the unit circle
Spectral measure
Schur class
Szegő limit theorems
Wiener's lemma
Notes
References
Probability problems
Measure theory
Functional analysis | Trigonometric moment problem | [
"Mathematics"
] | 422 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Probability problems",
"Mathematical relations",
"Mathematical problems"
] |
14,695,473 | https://en.wikipedia.org/wiki/Tagged%20architecture | In computer science, a tagged architecture is a type of computer architecture where every word of memory constitutes a tagged union, being divided into a number of bits of data, and a tag section that describes the type of the data: how it is to be interpreted, and, if it is a reference, the type of the object that it points to.
Precursors
Some early systems use tagging of data in memory but do not have all of the characteristics now considered to be part of tagged architectures.
RCA 601
The RCA 601 has a 3-bit tag register and a 3-bit tag for every 24-bit half-word. Every instruction can request a test for equal or unequal tag, and cause a maskable interrupt if the specified match fails. There is no architectural connection between the tag and the contents of the half-word; it is strictly determined by the software.
Burroughs B5000, B5500 and B5700
The Burroughs B5000, B5500 and B5700 have 48-bit words with no appended tag field. However, while there are no tag fields for character, instruction or numeric (floating point) words, all of the control word formats include a 3-bit tag. The replacement architecture, starting with the B6500, does have a tag for every word.
Architecture
In contrast, program and data memory are indistinguishable in the von Neumann architecture, so the way a word is referenced is critical to interpreting its meaning correctly.
Notable examples of American tagged architectures were the Lisp machines, which had tagged pointer support at the hardware and opcode level, the Burroughs B6500 and successors, which have a data-driven tagged and descriptor-based architecture, and the non-commercial Rice Computer. Both the Burroughs and Lisp machine are examples of high-level language computer architectures, where the tagging is used to support types from a high-level language at the hardware level.
In addition to this, the original Xerox Smalltalk implementation used the least-significant bit of each 16-bit word as a tag bit: if it was clear then the hardware would accept it as an aligned memory address while if it was set it was treated as a (shifted) 15-bit integer. Current Intel documentation mentions that the lower bits of a memory address might be similarly used by some interpreter-based systems.
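A toy model of that one-bit tagging scheme is sketched below. The helper names and the sign handling are invented for illustration and are not taken from any actual Smalltalk implementation, but they show the basic idea of stealing the least-significant bit of a 16-bit word to distinguish a shifted small integer from an aligned pointer.

```python
# Toy model of a 16-bit tagged word: LSB set -> shifted 15-bit integer,
# LSB clear -> word-aligned (even) address. Illustrative only.

def encode_small_int(value: int) -> int:
    """Store a signed 15-bit integer with the tag bit (LSB) set."""
    assert -(1 << 14) <= value < (1 << 14)
    return ((value << 1) | 1) & 0xFFFF

def encode_pointer(address: int) -> int:
    """Store an even (word-aligned) address with the tag bit clear."""
    assert address % 2 == 0 and 0 <= address < (1 << 16)
    return address

def decode(word: int):
    if word & 1:                    # tag bit set: shifted small integer
        value = word >> 1
        if value >= (1 << 14):      # sign-extend the 15-bit field
            value -= (1 << 15)
        return ("int", value)
    return ("pointer", word)        # tag bit clear: aligned address

print(decode(encode_small_int(-3)))     # ('int', -3)
print(decode(encode_pointer(0x1234)))   # ('pointer', 4660)
```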
In the Soviet Union, the Elbrus series of supercomputers pioneered the use of tagged architectures in 1973.
See also
Executable-space protection
Harvard architecture
References
Computer architecture | Tagged architecture | [
"Technology",
"Engineering"
] | 527 | [
"Computers",
"Computer engineering",
"Computer architecture"
] |
14,695,501 | https://en.wikipedia.org/wiki/National%20Supercomputing%20Center%20for%20Energy%20and%20the%20Environment | The National Supercomputing Center for Energy and the Environment (NSCEE), is a supercomputing facility housed at UNLV in Las Vegas, Nevada. It was established in 1989 by an act of Congress, PL-101. The facility is used to address a wide variety of scientific studies and applications.
Supercomputers
Silicon Graphics
Sun Microsystems
References
External links
NSCEE
National Science Foundation
University of Nevada, Las Vegas
Supercomputer sites | National Supercomputing Center for Energy and the Environment | [
"Technology"
] | 97 | [
"Computing stubs"
] |
14,695,652 | https://en.wikipedia.org/wiki/Ceramography | Ceramography is the art and science of preparation, examination and evaluation of ceramic microstructures. Ceramography can be thought of as the metallography of ceramics. The microstructure is the structure level of approximately 0.1 to 100 μm, between the minimum wavelength of visible light and the resolution limit of the naked eye. The microstructure includes most grains, secondary phases, grain boundaries, pores, micro-cracks and hardness microindentations. Most bulk mechanical, optical, thermal, electrical and magnetic properties are significantly affected by the microstructure. The fabrication method and process conditions are generally indicated by the microstructure. The root cause of many ceramic failures is evident in the microstructure. Ceramography is part of the broader field of materialography, which includes all the microscopic techniques of material analysis, such as metallography, petrography and plastography. Ceramography is usually reserved for high-performance ceramics for industrial applications, such as 85–99.9% alumina (Al2O3) in Fig. 1, zirconia (ZrO2), silicon carbide (SiC), silicon nitride (Si3N4), and ceramic-matrix composites. It is seldom used on whiteware ceramics such as sanitaryware, wall tiles and dishware.
History
Ceramography evolved along with other branches of materialography and ceramic engineering. Alois de Widmanstätten of Austria etched a meteorite in 1808 to reveal proeutectoid ferrite bands that grew on prior austenite grain boundaries. Geologist Henry Clifton Sorby, the "father of metallography", applied petrographic techniques to the steel industry in the 1860s in Sheffield, England. French geologist Auguste Michel-Lévy devised a chart that correlated the optical properties of minerals to their transmitted color and thickness in the 1880s. Swedish metallurgist J.A. Brinell invented the first quantitative hardness scale in 1900. Smith and Sandland developed the first microindentation hardness test at Vickers Ltd. in London in 1922. Swiss-born microscopist A.I. Buehler started the first metallographic equipment manufacturer near Chicago in 1936. Frederick Knoop and colleagues at the National Bureau of Standards developed a less-penetrating (than Vickers) microindentation test in 1939. Struers A/S of Copenhagen introduced the electrolytic polisher to metallography in 1943. George Kehl of Columbia University wrote a book that was considered the bible of materialography until the 1980s. Kehl co-founded a group within the Atomic Energy Commission that became the International Metallographic Society in 1967.
Preparation of ceramographic specimens
The preparation of ceramic specimens for microstructural analysis consists of five broad steps: sawing, embedding, grinding, polishing and etching. The tools and consumables for ceramographic preparation are available worldwide from metallography equipment vendors and laboratory supply companies.
Sawing
Most ceramics are extremely hard and must be wet-sawed with a circular blade embedded with diamond particles. A metallography or lapidary saw equipped with a low-density diamond blade is usually suitable. The blade must be cooled by a continuous liquid spray.
Embedding
To facilitate further preparation, the sawed specimen is usually embedded (or mounted or encapsulated) in a plastic disc, 25, 32 or 38 mm in diameter. A thermosetting solid resin, activated by heat and compression, e.g. mineral-filled epoxy, is best for most applications. A castable (liquid) resin such as unfilled epoxy, acrylic or polyester may be used for porous refractory ceramics or microelectronic devices. The castable resins are also available with fluorescent dyes that aid in fluorescence microscopy. The left and right specimens in Fig. 3 were embedded in mineral-filled epoxy. The center refractory in Fig. 3 was embedded in castable, transparent acrylic.
Grinding
Grinding is abrasion of the surface of interest by abrasive particles, usually diamond, that are bonded to paper or a metal disc. Grinding erases saw marks, coarsely smooths the surface, and removes stock to a desired depth. A typical grinding sequence for ceramics is one minute on a 240-grit metal-bonded diamond wheel rotating at 240 rpm and lubricated by flowing water, followed by a similar treatment on a 400-grit wheel. The specimen is washed in an ultrasonic bath after each step.
Polishing
Polishing is abrasion by free abrasives that are suspended in a lubricant and can roll or slide between the specimen and paper. Polishing erases grinding marks and smooths the specimen to a mirror-like finish. Polishing on a bare metallic platen is called lapping. A typical polishing sequence for ceramics is 5–10 minutes each on 15-, 6- and 1-μm diamond paste or slurry on napless paper rotating at 240 rpm. The specimen is again washed in an ultrasonic bath after each step. The three sets of specimens in Fig. 3 have been sawed, embedded, ground and polished.
Etching
Etching reveals and delineates grain boundaries and other microstructural features that are not apparent on the as-polished surface. The two most common types of etching in ceramography are selective chemical corrosion, and a thermal treatment that causes relief. As an example, alumina can be chemically etched by immersion in boiling concentrated phosphoric acid for 30–60 s, or thermally etched in a furnace for 20–40 min at in air. The plastic encapsulation must be removed before thermal etching. The alumina in Fig. 1 was thermally etched.
Alternatively, non-cubic ceramics can be prepared as thin sections, also known as petrography, for examination by polarized transmitted light microscopy. In this technique, the specimen is sawed to ~1 mm thick, glued to a microscope slide, and ground or sawed (e.g., by microtome) to a thickness (x) approaching 30 μm. A cover slip is glued onto the exposed surface. The adhesives, such as epoxy or Canada balsam resin, must have approximately the same refractive index (η ≈ 1.54) as glass. Most ceramics have a very small absorption coefficient (α ≈ 0.5 cm−1 for alumina in Fig. 2) in the Beer–Lambert law below, and can be viewed in transmitted light. Cubic ceramics, e.g. yttria-stabilized zirconia and spinel, have the same refractive index in all crystallographic directions and appear, therefore, black when the microscope's polarizer is 90° out of phase with its analyzer.
$\frac{I}{I_0} = e^{-\alpha x}$ (Beer–Lambert eqn)
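Plugging the figures quoted above into this relation shows why such a thin section transmits almost all of the incident light; the values below are the approximate numbers from the text, used only as a worked example.

```python
import math

alpha_per_cm = 0.5   # approximate absorption coefficient of alumina (from the text)
x_cm = 30e-4         # 30 um thin-section thickness expressed in cm

print(math.exp(-alpha_per_cm * x_cm))  # ~0.9985: nearly all light is transmitted
```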
Ceramographic specimens are electrical insulators in most cases, and must be coated with a conductive ~10-nm layer of metal or carbon for electron microscopy, after polishing and etching. Gold or Au-Pd alloy from a sputter coater or evaporative coater also improves the reflection of visible light from the polished surface under a microscope, by the Fresnel formula below. Bare alumina (η ≈ 1.77, k ≈ 10−6) has a negligible extinction coefficient and reflects only 8% of the incident light from the microscope, as in Fig. 1. Gold-coated (η ≈ 0.82, k ≈ 1.59 @ λ = 500 nm) alumina reflects 44% in air, 39% in immersion oil.
$R = \frac{(\eta_2 - \eta_1)^2 + k_2^2}{(\eta_2 + \eta_1)^2 + k_2^2}$ (Fresnel eqn), for normal incidence from a medium of refractive index $\eta_1$ onto a surface of refractive index $\eta_2$ and extinction coefficient $k_2$.
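The reflectance values quoted above can be reproduced from this normal-incidence form; the snippet below is only a numerical check using the approximate η and k values given in the text.

```python
def normal_incidence_reflectance(n2: float, k2: float, n1: float = 1.0) -> float:
    """Fresnel reflectance at normal incidence from a transparent medium of
    index n1 onto a surface with refractive index n2 and extinction coefficient k2."""
    return ((n2 - n1) ** 2 + k2 ** 2) / ((n2 + n1) ** 2 + k2 ** 2)

print(normal_incidence_reflectance(1.77, 1e-6))           # bare alumina in air: ~0.08
print(normal_incidence_reflectance(0.82, 1.59))           # gold-coated, in air: ~0.44
print(normal_incidence_reflectance(0.82, 1.59, n1=1.51))  # gold-coated, in oil: ~0.38-0.39
```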
Ceramographic analysis
Ceramic microstructures are most often analyzed by reflected visible-light microscopy in brightfield. Darkfield is used in limited circumstances, e.g., to reveal cracks. Polarized transmitted light is used with thin sections, where the contrast between grains comes from birefringence. Very fine microstructures may require the higher magnification and resolution of a scanning electron microscope (SEM) or confocal laser scanning microscope (CLSM). The cathodoluminescence microscope (CLM) is useful for distinguishing phases of refractories. The transmission electron microscope (TEM) and scanning acoustic microscope (SAM) have specialty applications in ceramography.
Ceramography is often done qualitatively, for comparison of the microstructure of a component to a standard for quality control or failure analysis purposes. Three common quantitative analyses of microstructures are grain size, second-phase content and porosity. Microstructures are measured by the principles of stereology, in which three-dimensional objects are evaluated in 2-D by projections or cross-sections. Microstructures exhibiting heterogeneous grain sizes, with certain grains growing very large, occur in diverse ceramic systems and this phenomenon is known as abnormal grain growth or AGG. The occurrence of AGG has consequences, positive or negative, on mechanical and chemical properties of ceramics and its identification is often the goal of ceramographic analysis.
Grain size can be measured by the line-fraction or area-fraction methods of ASTM E112. In the line-fraction methods, a statistical grain size is calculated from the number of grains or grain boundaries intersecting a line of known length or circle of known circumference. In the area-fraction method, the grain size is calculated from the number of grains inside a known area. In each case, the measurement is affected by secondary phases, porosity, preferred orientation, exponential distribution of sizes, and non-equiaxed grains. Image analysis can measure the shape factors of individual grains by ASTM E1382.
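For the line-fraction approach, the core calculation is simply the true length of the test line divided by the number of grain-boundary intersections. The sketch below is a minimal illustration in the spirit of the Heyn intercept procedure; the argument names and example numbers are assumptions, and it omits the corrections ASTM E112 applies for second phases and non-equiaxed grains.

```python
def mean_lineal_intercept_um(line_length_mm: float, magnification: float,
                             boundary_intersections: int) -> float:
    """Mean lineal intercept: true line length divided by the number of
    grain-boundary intersections (a simplified Heyn-type estimate)."""
    true_length_mm = line_length_mm / magnification
    return 1000.0 * true_length_mm / boundary_intersections

# e.g. a 100 mm test line on a micrograph taken at 500x, crossing 40 boundaries:
print(mean_lineal_intercept_um(100, 500, 40))  # 5.0 um mean intercept
```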
Second-phase content and porosity are measured the same way in a microstructure, such as ASTM E562. Procedure E562 is a point-fraction method based on the stereological principle of point fraction = volume fraction, i.e., Pp = Vv. Second-phase content in ceramics, such as carbide whiskers in an oxide matrix, is usually expressed as a mass fraction. Volume fractions can be converted to mass fractions if the density of each phase is known. Image analysis can measure porosity, pore-size distribution and volume fractions of secondary phases by ASTM E1245. Porosity measurements do not require etching. Multi-phase microstructures do not require etching if the contrast between phases is adequate, as is usually the case.
Grain size, porosity and second-phase content have all been correlated with ceramic properties such as mechanical strength σ by the Hall–Petch equation. Hardness, toughness, dielectric constant and many other properties are microstructure-dependent.
Microindentation hardness and toughness
The hardness of a material can be measured in many ways. The Knoop hardness test, a method of microindentation hardness, is the most reproducible for dense ceramics. The Vickers hardness test and superficial Rockwell scales (e.g., 45N) can also be used, but tend to cause more surface damage than Knoop. The Brinell test is suitable for ductile metals, but not ceramics. In the Knoop test, a diamond indenter in the shape of an elongated pyramid is forced into a polished (but not etched) surface under a predetermined load, typically 500 or 1000 g. The load is held for some amount of time, say 10 s, and the indenter is retracted. The indention long diagonal (d, μm, in Fig. 4) is measured under a microscope, and the Knoop hardness (HK) is calculated from the load (P, g) and the square of the diagonal length in the equations below. The constants account for the projected area of the indenter and unit conversion factors. Most oxide ceramics have a Knoop hardness in the range of 1000–1500 kgf/mm2 (10 – 15 GPa), and many carbides are over 2000 (20 GPa). The method is specified in ASTM C849, C1326 & E384. Microindentation hardness is often simply called microhardness. The hardness of very small particles and thin films of ceramics, on the order of 100 nm, can be measured by nanoindentation methods that use a Berkovich indenter.
$HK = \frac{14229\,P}{d^2}$ (kgf/mm2) and $HK = \frac{139.5\,P}{d^2}$ (GPa), with $P$ in grams-force and $d$ in micrometres
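A small worked example of the Knoop calculation, using the constants in the equations reconstructed above; the load and diagonal values are illustrative only, but the result falls within the range quoted for dense oxide ceramics.

```python
def knoop_hardness(load_g: float, diagonal_um: float):
    """Knoop hardness from the indentation load (grams-force) and the long
    diagonal (micrometres), using the 14229 indenter-geometry constant."""
    hk_kgf_mm2 = 14229.0 * load_g / diagonal_um ** 2
    hk_gpa = hk_kgf_mm2 * 9.80665e-3   # 1 kgf/mm2 = 9.80665 MPa = 0.00980665 GPa
    return hk_kgf_mm2, hk_gpa

# A 500 g load leaving a 70 um long diagonal:
print(knoop_hardness(500, 70))  # roughly (1452 kgf/mm2, 14.2 GPa)
```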
The toughness of ceramics can be determined from a Vickers test under a load of 10 – 20 kg. Toughness is the ability of a material to resist crack propagation. Several calculations have been formulated from the load (P), elastic modulus (E), microindentation hardness (H), crack length (c in Fig. 5) and flexural strength (σ). Modulus of rupture (MOR) bars with a rectangular cross-section are indented in three places on a polished surface. The bars are loaded in 4-point bending with the polished, indented surface in tension, until fracture. The fracture normally originates at one of the indentions. The crack lengths are measured under a microscope. The toughness of most ceramics is 2–4 MPa·√m, but toughened zirconia is as much as 13, and cemented carbides are often over 20. The toughness-by-indention methods have been discredited recently and are being replaced by more rigorous methods that measure crack growth in a notched beam in bending.
initial crack length
indention strength in bending
References
Further reading and external links
Expert Guide: Materialography/Metallography, QATM Academy, ATM Qness GmbH, 2022.
Metallographic Preparation of Ceramic and Cermet Materials, Leco Met-Tips No. 19, 2008.
Sample Preparation of Ceramic Material, Buehler Ltd., 1990.
Structure, Volume 33, Struers A/S, 1998, p 3–20.
Struers Metalog Guide
S. Binkowski, R. Paul & M. Woydt, "Comparing Preparation Techniques Using Microstructural Images of Ceramic Materials," Structure, Vol 39, 2002, p 8–19.
R.E. Chinn, Ceramography, ASM International and the American Ceramic Society, 2002, .
D.J. Clinton, A Guide to Polishing and Etching of Technical and Engineering Ceramics, The Institute of Ceramics, 1987.
Digital Library of Ceramic Microstructures, University of Dayton, 2003.
G. Elssner, H. Hoven, G. Kiessler & P. Wellner, translated by R. Wert, Ceramics and Ceramic Composites: Materialographic Preparation, Elsevier Science Inc., 1999, .
R.M. Fulrath & J.A. Pask, ed., Ceramic Microstructures: Their Analysis, Significance, and Production, Robert E. Krieger Publishing Co., 1968, .
K. Geels in collaboration with D.B. Fowler, W-U Kopp & M. Rückert, Metallographic and Materialographic Specimen Preparation, Light Microscopy, Image Analysis and Hardness Testing, ASTM International, 2007, .
H. Insley & V.D. Fréchette, Microscopy of Ceramics and Cements, Academic Press Inc., 1955.
W.E. Lee and W.M. Rainforth, Ceramic Microstructures: Property Control by Processing, Chapman & Hall, 1994.
I.J. McColm, Ceramic Hardness, Plenum Press, 2000, .
Micrograph Center, ASM International, 2005.
H. Mörtel, "Microstructural Analysis," Engineered Materials Handbook, Volume 4: Ceramics and Glasses, ASM International, 1991, p 570–579, .
G. Petzow, Metallographic Etching, 2nd Edition, ASM International, 1999, .
G.D. Quinn, "Indentation Hardness Testing of Ceramics," ASM Handbook, Volume 8: Mechanical Testing and Evaluation, ASM International, 2000, p 244–251, .
A.T. Santhanam, "Metallography of Cemented Carbides," ASM Handbook Volume 9: Metallography and Microstructures, ASM International, 2004, p 1057–1066, .
U. Täffner, V. Carle & U. Schäfer, "Preparation and Microstructural Analysis of High-Performance Ceramics," ASM Handbook Volume 9: Metallography and Microstructures, ASM International, 2004, p 1057–1066, .
D.C. Zipperian, Metallographic Handbook, PACE Technologies, 2011.
Ceramic engineering
Metallurgy
Microscopy
Materials science
Materials testing | Ceramography | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,517 | [
"Applied and interdisciplinary physics",
"Metallurgy",
"Materials science",
"Materials testing",
"nan",
"Microscopy",
"Ceramic engineering"
] |