| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
23,413,137 | https://en.wikipedia.org/wiki/Saccheri%E2%80%93Legendre%20theorem | In absolute geometry, the Saccheri–Legendre theorem states that the sum of the angles in a triangle is at most 180°. Absolute geometry is the geometry obtained from assuming all the axioms that lead to Euclidean geometry with the exception of the axiom that is equivalent to the parallel postulate of Euclid.
The theorem is named after Giovanni Girolamo Saccheri and Adrien-Marie Legendre. It appeared in Saccheri's 1733 book Euclid Freed of Every Flaw, but his work fell into obscurity. For many years after the theorem's rediscovery by Legendre it was called Legendre's theorem.
The existence of at least one triangle with angle sum of 180 degrees in absolute geometry implies Euclid's parallel postulate. Similarly, the existence of at least one triangle with angle sum of less than 180 degrees implies the characteristic postulate of hyperbolic geometry.
One proof of the Saccheri–Legendre theorem uses the Archimedean axiom, in the form that repeatedly halving one of two given angles will eventually produce an angle smaller than the second of the two.
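A hedged sketch of how that halving argument is usually organised (a reconstruction for illustration, assuming only the absolute-geometry fact that any two angles of a triangle sum to less than 180°):

```latex
% Sketch, not quoted from the article.
% Suppose some triangle had angle sum 180^\circ + \varepsilon with \varepsilon > 0.
% In absolute geometry one can construct from it a triangle with the same
% angle sum in which a chosen angle is at most half as large; after n such
% steps that angle is at most \angle A / 2^{n}.  The Archimedean axiom
% supplies an n with \angle A / 2^{n} < \varepsilon, so the remaining two
% angles would sum to more than 180^\circ, contradicting the theorem that
% any two angles of a triangle sum to less than 180^\circ.  Hence for every
% triangle in absolute geometry
\alpha + \beta + \gamma \;\le\; 180^{\circ}.
```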
Max Dehn gave an example of a non-Legendrian geometry where the angle sum of a triangle is greater than 180 degrees, and a semi-Euclidean geometry where there is a triangle with an angle sum of 180 degrees but Euclid's parallel postulate fails. In these Dehn planes the Archimedean axiom does not hold.
Notes
References
Euclidean geometry
Theorems about triangles
Non-Euclidean geometry | Saccheri–Legendre theorem | Mathematics | 321 |
26,968,209 | https://en.wikipedia.org/wiki/Pisces%20Overdensity | The Pisces Overdensity is a clump of stars in the Milky Way's halo, which may be a disrupted dwarf spheroidal galaxy. It is situated in the constellation Pisces and was discovered in 2009 through analysis of the distribution of RR Lyrae stars in data obtained by the Sloan Digital Sky Survey. The galaxy is located at a distance of about 80 kpc from the Sun and moves towards it at a speed of about 75 km/s.
The Pisces Overdensity is one of the faintest satellites of the Milky Way. Its mass is estimated to be at least 10⁵ solar masses. However, it is large, spanning several degrees on the sky (around 1 kpc), and may be in a transitional phase between a gravitationally bound galaxy and a completely unbound system. The Pisces Overdensity is located near the plane in which the Magellanic Clouds lie, and there may be a connection between the Magellanic Stream and this galaxy.
References
Local Group
Dwarf spheroidal galaxies
Pisces (constellation)
Milky Way Subgroup | Pisces Overdensity | Astronomy | 222 |
2,393,371 | https://en.wikipedia.org/wiki/Quazepam | Quazepam, sold under the brand name Doral among others, is a relatively long-acting benzodiazepine derivative drug developed by the Schering Corporation in the 1970s. Quazepam is used for the treatment of insomnia, including sleep induction and sleep maintenance. Quazepam induces impairment of motor function and has relatively (and uniquely) selective hypnotic and anticonvulsant properties with considerably less overdose potential than other benzodiazepines (due to its novel receptor-subtype selectivity). Quazepam is an effective hypnotic which induces and maintains sleep without disruption of the sleep architecture.
It was patented in 1970 and came into medical use in 1985.
Medical uses
Quazepam is used for short-term treatment of insomnia related to sleep induction or sleep maintenance problems and has demonstrated superiority over other benzodiazepines, such as temazepam. It had a lower incidence of side effects than temazepam, including less sedation, amnesia, and motor impairment. Usual dosage is 7.5 to 15 mg orally at bedtime.
Quazepam is effective as a premedication prior to surgery.
Side effects
Quazepam has fewer side effects than other benzodiazepines and less potential to induce tolerance and rebound effects. There is significantly less potential for quazepam to induce respiratory depression or to adversely affect motor coordination than other benzodiazepines. The different side effect profile of quazepam may be due to its more selective binding profile to type 1 benzodiazepine receptors.
Ataxia
Daytime somnolence
Hypokinesia
Cognitive and performance impairments
In September 2020, the U.S. Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class.
Tolerance and dependence
Tolerance to quazepam may develop, but more slowly than with other benzodiazepines such as triazolam. Quazepam causes significantly less drug tolerance and fewer withdrawal symptoms, including less rebound insomnia upon discontinuation, than other benzodiazepines. Quazepam may also cause less rebound than the type 1 benzodiazepine receptor-selective nonbenzodiazepine drugs because of its longer half-life. Short-acting hypnotics often cause next-day rebound anxiety; quazepam, due to its pharmacological profile, does not cause next-day rebound withdrawal effects during treatment.
No firm conclusions can be drawn, however, about whether long-term use of quazepam does not produce tolerance, as few, if any, long-term clinical trials extending beyond 4 weeks of chronic use have been conducted. Quazepam should be withdrawn gradually if used beyond 4 weeks to avoid the risk of a severe benzodiazepine withdrawal syndrome developing. In animal studies, administration of very high doses of quazepam over prolonged periods of time, up to 52 weeks, provoked severe withdrawal symptoms upon abrupt discontinuation, including excitability, hyperactivity, convulsions, and the death of two of the monkeys due to withdrawal-related convulsions. More monkeys died, however, in the diazepam-treated group. In addition, it has now been documented in the medical literature that one of the major metabolites of quazepam, N-desalkyl-2-oxoquazepam (N-desalkylflurazepam), which is long-acting and prone to accumulation, binds unselectively to benzodiazepine receptors, so quazepam may not differ all that much pharmacologically from other benzodiazepines.
Special precautions
Benzodiazepines require special precaution if used during pregnancy, in children, in alcohol- or drug-dependent individuals, and in individuals with comorbid psychiatric disorders.
Quazepam and its active metabolites are excreted into breast milk.
Accumulation of one of the active metabolites of quazepam, N-desalkylflurazepam, may occur in the elderly. A lower dose may be required for the elderly.
Elderly
Quazepam is better tolerated by elderly patients than flurazepam because it causes fewer next-day impairments. However, another study showed marked next-day impairments after repeated administration due to the accumulation of quazepam and its long-acting metabolites; the medical literature is therefore conflicting regarding quazepam's side-effect profile. A further study showed significant balance impairments combined with an unstable posture after administration of quazepam in test subjects.
An extensive review of the medical literature regarding the management of insomnia and the elderly found that there is considerable evidence of the effectiveness and durability of non-drug treatments for insomnia in adults of all ages and that these interventions are underutilized. Compared with the benzodiazepines, including quazepam, the nonbenzodiazepine sedative/hypnotics appeared to offer few, if any, significant clinical advantages in efficacy or tolerability in elderly persons. It was found that newer agents with novel mechanisms of action and improved safety profiles, such as melatonin agonists, hold promise for the management of chronic insomnia in elderly people. Long-term use of sedative/hypnotics for insomnia lacks an evidence base and has traditionally been discouraged for reasons that include concerns about such potential adverse drug effects as cognitive impairment (anterograde amnesia), daytime sedation, motor incoordination, and increased risk of motor vehicle accidents and falls. In addition, the effectiveness and safety of long-term use of these agents remain to be determined. It was concluded that more research is needed to evaluate the long-term effects of treatment and the most appropriate management strategy for elderly people with chronic insomnia.
Interactions
The absorption rate is likely to be significantly reduced if quazepam is taken in a fasted state, which reduces its hypnotic effect. If three or more hours have passed since the last meal, some food should be eaten before taking quazepam.
Pharmacology
Quazepam is a trifluoroalkyl type of benzodiazepine. Quazepam is unique amongst benzodiazepines in that it selectively targets the GABAA receptors containing the α1 subunit, which are responsible for inducing sleep. Its pharmacology is very similar to that of zolpidem and zaleplon, and it can successfully substitute for them in animal studies.
Quazepam is selective for type 1 benzodiazepine receptors containing the α1 subunit, similar to drugs such as zaleplon and zolpidem. As a result, quazepam has little or no muscle-relaxant properties. Most other benzodiazepines are unselective and bind to both type 1 and type 2 GABAA receptors. Type 1 GABAA receptors are those containing the α1 subunit and are responsible for the hypnotic properties of the drug. Type 2 receptors contain the α2, α3 and α5 subunits and are responsible for anxiolytic action, amnesia, and muscle-relaxant properties. Thus, quazepam may have fewer side effects than other benzodiazepines, but it has a very long half-life of 25 hours, which reduces its benefits as a hypnotic due to likely next-day sedation. It also has two active metabolites with half-lives of 28 and 79 hours. Quazepam may also cause less drug tolerance than other benzodiazepines such as temazepam and triazolam, perhaps due to its subtype selectivity. The longer half-life of quazepam may have the advantage, however, of causing less rebound insomnia than shorter-acting subtype-selective nonbenzodiazepines. However, one of the major metabolites of quazepam, N-desalkyl-2-oxoquazepam (also known as N-desalkylflurazepam), binds unselectively to both type 1 and type 2 GABAA receptors. This metabolite also has a very long half-life and likely contributes to the pharmacological effects of quazepam.
Pharmacokinetics
Quazepam has an absorption half-life of 0.4 hours, with a peak in plasma levels after 1.75 hours. It is eliminated both renally and through feces. The active metabolites of quazepam are 2-oxoquazepam and N-desalkyl-2-oxoquazepam. The N-desalkyl-2-oxoquazepam metabolite has only limited pharmacological activity compared to the parent compound quazepam and the active metabolite 2-oxoquazepam. Quazepam and its major active metabolite 2-oxoquazepam both show high selectivity for type 1 GABAA receptors. The elimination half-life of quazepam ranges between 27 and 41 hours.
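As a rough illustration of what these half-lives imply, the sketch below applies a simple one-compartment, first-order elimination model (an assumption made for illustration, not a model stated in the article) to the figures quoted here and in the pharmacology section:

```python
import math

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of a dose still present after t hours of first-order elimination."""
    return math.exp(-math.log(2) * t_hours / half_life_hours)

def steady_state_accumulation(dose_interval_h: float, half_life_hours: float) -> float:
    """Accumulation ratio R = 1 / (1 - 2^(-tau / t_half)) for repeated dosing."""
    return 1.0 / (1.0 - 2 ** (-dose_interval_h / half_life_hours))

# Illustrative figures taken from the text: parent drug ~27-41 h,
# long-acting metabolite ~79 h.
for name, t_half in [("quazepam (short end)", 27), ("quazepam (long end)", 41),
                     ("long-acting metabolite", 79)]:
    left_after_24h = fraction_remaining(24, t_half)
    r = steady_state_accumulation(24, t_half)
    print(f"{name}: {left_after_24h:.0%} left after 24 h, "
          f"~{r:.1f}x accumulation with once-daily dosing")
```

Under this simplified model the 79-hour metabolite accumulates roughly five-fold with once-daily dosing, consistent with the accumulation noted above for elderly patients.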
Mechanism of action
Quazepam modulates specific GABAA receptors via the benzodiazepine site on the GABAA receptor. This modulation enhances the actions of GABA, increasing the opening frequency of the chloride ion channel and thereby the influx of chloride ions through the GABAA receptor channel. Quazepam, uniquely amongst benzodiazepine drugs, selectively targets type 1 benzodiazepine receptors, which results in reduced sleep latency and promotion of sleep. Quazepam also has some anticonvulsant properties.
EEG and sleep
Quazepam has potent sleep-inducing and sleep-maintaining properties. Studies in both animals and humans have demonstrated that EEG changes induced by quazepam resemble normal sleep patterns, whereas other benzodiazepines disrupt normal sleep. Quazepam promotes slow-wave sleep. This positive effect of quazepam on sleep architecture may be due to its high selectivity for type 1 benzodiazepine receptors, as demonstrated in animal and human studies. This makes quazepam unique in the benzodiazepine family of drugs.
Drug misuse
Quazepam is a drug with the potential for misuse. Two types of drug misuse can occur: recreational misuse, where the drug is taken to achieve a high, and continued long-term use against medical advice.
References
Benzodiazepines
Sedatives
Hypnotics
Anticonvulsants
Muscle relaxants
Chloroarenes
2-Fluorophenyl compounds
Thioamides
Trifluoromethyl compounds | Quazepam | Chemistry,Biology | 2,343 |
42,732,578 | https://en.wikipedia.org/wiki/Defensive%20expenditures | In environmental accounting, defensive expenditures are expenditures that seek to minimise potential damage to oneself. Examples include defence and insurance.
References
Risk management
Actuarial science
Environmental economics
Expenditure | Defensive expenditures | Mathematics,Environmental_science | 36 |
44,595,512 | https://en.wikipedia.org/wiki/Behavioural%20design | Behavioural design is a sub-category of design concerned with how design can shape, or be used to influence, human behaviour. All approaches to design for behaviour change acknowledge that artifacts have an important influence on human behaviour and/or behavioural decisions. They draw strongly on theories of behavioural change, including the division into personal, behavioural, and environmental characteristics as drivers for behaviour change. Areas in which design for behaviour change has been most commonly applied include health and wellbeing, sustainability, safety and social context, as well as crime prevention.
History
Design for behaviour change developed from work on design psychology (also: behavioural design) conducted by Don Norman in the 1980s. Norman’s ‘psychology of everyday things’ introduced concepts from ecological psychology and human factors research to designers, such as affordances, constraints, feedback and mapping. These have provided guiding principles with regard to user experience and the intuitive use of artefacts, although this work did not yet focus specifically on influencing behavioural change.
The models that followed Norman’s original approach became more explicit about influencing behaviour, such as emotion design and persuasive technology. Perhaps since 2005, a greater number of theories have developed that explicitly address design for behaviour change. These include a diversity of theories, guidelines and toolkits for behaviour change (discussed below) covering the different domains of health, sustainability, safety, crime prevention and social design.
With the emergence of the notion of behaviour change, a much more explicit discussion has also begun about the deliberate influence of design, although a 2012 review of the area identified a lack of common terminology, formalized research protocols and target behaviour selection as persistent problems. Key issues are the situations in which design for behaviour change could or should be applied; whether its influence should be implicit or explicit, voluntary or prescriptive; and the ethical consequences of one or the other.
Issues of behaviour change
In 1969, Herbert Simon's understanding of design as "devising courses of action to change existing situations into preferred ones" acknowledged its capacity to create change. Since then, the role of design in influencing human behaviour has become much more widely acknowledged. It is further recognised that design in its various forms, whether as objects, services, interiors, architecture or environments, can create change that is both desirable and undesirable, intentional and unintentional.
Desirable and undesirable effects are often closely intertwined: the former is usually intentionally designed, while the latter may be an unintentional side effect. For example, the impact of cars has been profound in enhancing social mobility on the one hand, while transforming cities and increasing resource demand and pollution on the other. The first is generally regarded as a positive effect. The impact of associated road building on cities, however, has largely been detrimental to the living environment. Furthermore, resource use and pollution associated with cars and their infrastructure have prompted a rethinking of human behaviour and the technology used, as part of the sustainable design movement, resulting for example in schemes promoting less travel or alternative transport such as trains and bike riding. Similar effects, sometimes desirable, sometimes undesirable, can be observed in other areas including health, safety and social spheres. For example, mobile phones and computers have transformed the speed and social code of communication, leading not only to an increased ability to communicate, but also to increased stress levels with a wide range of health impacts, and to safety issues.
Taking a lead from Simon, it could be argued that designers have always attempted to create "preferable" situations. However, recognising the important but not always benevolent role of design, Jelsma emphasises that designers need to take moral responsibility for the actions which take place with artefacts as a result of human interactions with them:
In response, design for behaviour change acknowledges this responsibility and seeks to put ethical behaviour and goals higher on the agenda. To this end, it seeks to enable consideration for the actions and services associated with any design, and the consequences of these actions, and to integrate this thinking into the design process.
Approaches
To enable the process of behavioural change through design, a range of theories, guidelines and tools has been developed to promote behaviour change that encourages pro-environmental and social actions and lifestyles from designers as well as users.
Theories
Persuasive technology: how computing technologies can be used to influence or change the performance of target behaviours or social responses.
Research at Loughborough Design School, which collectively draws on behavioural economics and persuasive technology, using mechanisms such as feedback, constraints and affordances, to promote sustainable behaviours.
Design for healthy behaviour: drawing on the trans-theoretical model, this model offers a new framework to design for healthy behaviour, which contends that designers need to consider the different stages of decision making which people go through to durably change their behaviour.
Mindful design: based on Langer's theory of mindfulness, mindful design seeks to encourage responsible user action and choice. It seeks to achieve responsible action by raising critical awareness of the different options available in any one situation.
Socially responsible design: this framework or map takes the point of view of the intended user experience and distinguishes four categories of product influence: decisive, coercive, persuasive and seductive, used to encourage desirable and discourage undesirable behaviour.
Community based social marketing with design: this model seeks to intervene in shared social practices by reducing barriers and amplifying any benefits. To facilitate change, the approach draws on psychological tools such as prompts, norms, incentives, commitments, communication and the removal of barriers. Online social marketing emerged out of traditional social marketing, with a focus on developing scalable digital behavior change interventions.
Practice orientated product design: This applies the understanding of social practice theory – that material artefacts (designed stuff) influence the trajectory of everyday practices – to design. It does so on the premise that this will ultimately shift everyday practices over time.
Modes of transitions framework: The framework draws on human-centered design methods to analyze and comprehend transitions as a way for designers to understand people that go through a process of change (a transition). It combines these with scenario-based design to provide a means of action.
Critical discussion
Design for behaviour change is an openly value-based approach that seeks to promote ethical behaviours and attitudes within social and environmental contexts.
This raises questions about whose values are promoted and to whose benefit. While intrinsically seeking to promote socially and environmentally ethical practices, there are two possible objections:
The first is that such approaches can be seen as paternalistic, manipulative and disenfranchising, where decisions about the environment are being made by one person or group for another, with or without consultation.
The second objection is that this approach can be abused, for example in that apparently positive goals of behaviour change might be made simply to serve commercial gain without regard for the envisaged ethical concerns.
The debate about the ethical considerations of design for behaviour change is still emerging, and will develop with the further development of the field.
When designing for behavior change, misapplication of behavioral design can trigger backfires, in which an intervention accidentally increases the bad behavior it was originally designed to reduce. Given the stigma of triggering bad outcomes, researchers believe that persuasive backfire effects are common but rarely published, reported, or discussed.
Artificial intelligence in behavior change
The use of third-wave AI techniques to achieve behavior change intensifies the debate over behavior change. These technologies are more effective than previous techniques but, like AI in other fields, are also more opaque to both users and designers. As the field of behavioral design continues to evolve, the role of AI is becoming increasingly prominent, offering new opportunities to create desirable behavioral outcomes across various contexts. In healthcare, methods like PROLIFERATE_AI exemplify an approach to influencing human behavior in targeted and measurable ways. These strategies leverage AI-driven and person-centered feedback mechanisms, such as participatory design, to enhance the evaluation and implementation of health innovations.
See also
Behavior modification
Person–situation debate
Structural fix
Systems thinking
Workaround
References
Design
Attitude change
Product design | Behavioural design | Engineering | 1,648 |
1,122,262 | https://en.wikipedia.org/wiki/Hauppauge%20Computer%20Works | Hauppauge Computer Works is a US manufacturer and marketer of electronic video hardware for personal computers. Although it is most widely known for its WinTV line of TV tuner cards for PCs, Hauppauge also produces personal video recorders, digital video editors, digital media players, hybrid video recorders and digital television products for both Windows and Mac. The company is named after the hamlet of Hauppauge, New York, in which it is based.
In addition to its headquarters in New York, Hauppauge also has sales and technical support offices in France, Germany, the Netherlands, Sweden, Italy, Poland, Australia, Japan, Singapore, Indonesia, Taiwan, Spain and the UK.
Company history
Hauppauge was co-founded by Kenneth Plotkin and Kenneth Aupperle, and became incorporated in 1982.
Starting in 1983, the company followed Microway, which a year earlier had provided the software needed by scientists and engineers to modify the IBM PC Fortran compiler so that it could transparently employ Intel 8087 coprocessors. The 80-bit Intel 8087 math coprocessor ran a factor of 50 faster than the 8/16-bit 8088 CPU that came with the IBM PC. In 1982, however, the speed-up in floating-point-intensive applications was only a factor of 10, as the initial software developed by Microway and Hauppauge continued to call floating-point libraries to do computations instead of placing x87 instructions inline with the 8088's instructions, which would have allowed the 8088 to drive the 8087 directly. By 1984, inline compilers made their way into the market, providing larger speed-ups. Hauppauge provided similar software products in competition with Microway, bundling them with math coprocessors, and remained in the Intel math coprocessor business until 1993, when the Intel Pentium came out with a built-in math coprocessor. Like other companies that entered the math coprocessor business, however, Hauppauge went on to produce other products that contributed to a field that is today called HPC (high-performance computing).
The math coprocessor business rapidly expanded starting in 1984 with software products that accelerated applications like Lotus 1-2-3. At the same time, the advent of the 80286-based IBM PC/AT with its 80287 math coprocessor provided new opportunities for companies that had grown up selling 8087s and supporting software. This included products like Hauppauge's 287 Fast/5, which took advantage of the 80287's asynchronous clock design to drive its FPU at 5 MHz instead of the 4 MHz clocking provided by IBM, making it possible for the 80287s that came with the AT to be overclocked to 12 MHz.
By 1987, math coprocessors had become Intel's most profitable product line, drawing in competition from vendors like Cyrix, whose first product was a math coprocessor faster than the new Intel 80387 but whose speed was stalled by the 80386, which acted as a governor. This is when Andy Grove decided it was time for Intel to recapture its channel to market, opening a division to compete with its math coprocessor customers, which by this time included 47th Street Camera. The new Intel division, PCEO (the PC Enhancement Operation), came out with a product called "Genuine Intel Math Coprocessors". After playing around in the accelerator board business, PCEO settled into the 80386 motherboard business, originally selling a motherboard designed by one of its engineers as a home project; this eventually grew into a division that today sells 40% of the motherboards used in high-end PCs, which find their way into products including supercomputers and medical equipment.
Companies like Hauppauge and Microway, which made their living accelerating floating-point applications run on PCs and were affected by their new competitor, followed suit by venturing into the Intel i860 vector coprocessor business: Hauppauge came out with an Intel 80486 motherboard that included an Intel i860 vector processor, while Microway came out with add-in cards carrying one or more i860s. These products, along with transputer-based add-in cards, would eventually lead into what became known as HPC (high-performance computing). HPC was actually initiated in 1986 by an English company, Inmos, which designed a CPU competitive with an Intel 80386/387 that also included four twisted-pair high-speed interconnects; these could communicate with other transputers and be linked to a PC motherboard, making it possible to create distributed-memory computers employing 32 processors with the same throughput as 32 Intel 386/387s operating in a single PC. The add-in card parallel processing business moved from the transputer to the Intel i860 around 1989, when Inmos was purchased by STMicroelectronics, which cut R&D funding and eventually forced companies that had entered the parallel processing business to shift to the Intel i860. The i860 was a vector processor with graphics extensions that initially provided 50 megaflops of throughput, in an era when an 80486 with an Intel 80487 peaked at half a megaflop; it would eventually top out at 100 megaflops, making it as fast as 100 Inmos T414 transputers. Intel i860 add-in cards made it possible for as many as 20 i860s to run in parallel, programmed using a software library similar to today's MPI libraries, which support the distributed-memory parallel processing in which servers sitting in 1U rack-mount chassis (essentially PCs) provide the horsepower behind the majority of the world's supercomputers.
This same approach could be employed using Hauppauge's motherboards connected by Gigabit Ethernet, although it was first demonstrated using a wall of IBM RS/6000 PCs at the 1991 Supercomputing Conference. IBM's lead was quickly followed by academic users who realized they could do the same thing with much less expensive hardware by adapting their x86 PCs to run in parallel, at first using a software library adapted from similar transputer libraries called PVM (parallel virtual machine), which would eventually morph into today's MPI. Products like the Intel i860, which could be employed both as a vector and a graphics processor, were discontinued around 1993, at the same time that Intel introduced the Pentium P5: a CISC processor whose instructions were pipelined into hard-coded, lower-level RISC-like primitives, giving the Pentium a superscalar architecture. The Pentium could also execute the x87 FPU instruction set using a built-in FPU essentially implemented with the scalar instructions of the i860, and had a memory bus, also borrowed from the i860, that provided a 400 MB/s interface to memory. This high-speed bus played a crucial role in speeding up the most common floating-point-intensive applications, which at this point in time used Gaussian elimination to solve simultaneous linear equations but which today are solved using blocking and LU decomposition. The Intel Pentium, while good, did not provide enough floating-point performance to compete with a 300 MHz DEC Alpha 21164, which provided 600 megaflops in 1995.
At the same point in time, Intel supercomputing had moved from the 50 MHz Intel i860XP, which was six times slower than the Alpha 21164, to a special version of the Pentium that, at 200 megaflops, was only three times slower than the 21164. However, the impending speed upgrade of the Alpha to 600 MHz ultimately doomed the future of Intel supercomputing.
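For readers who want a concrete feel for the dense linear-algebra workload mentioned above (Gaussian elimination, today performed as blocked LU decomposition), the sketch below uses modern NumPy/SciPy purely as an illustration; the libraries and the 1000-equation problem size are assumptions for the example, not anything Hauppauge or Intel shipped:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve  # LAPACK-backed blocked LU

# Solve the dense system A x = b, the kind of floating-point workload
# whose speed depended on both FPU throughput and memory bandwidth.
n = 1000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

lu, piv = lu_factor(A)      # factorisation costs roughly (2/3) * n**3 flops
x = lu_solve((lu, piv), b)  # triangular solves add roughly 2 * n**2 flops

print(f"approximate flops for the factorisation: {(2 / 3) * n ** 3:.2e}")
print("residual norm:", np.linalg.norm(A @ x - b))
```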
Motherboards
During the late 1980s and early 1990s, Hauppauge produced motherboards for Intel 486 processors. A number of these motherboards were standard ISA boards built to fairly competitive price points. Some, however, were workstation- and server-oriented, including EISA support, optional cache memory modules, and support for the Weitek 4167 FPU.
Hauppauge also sold a unique motherboard, the Hauppauge 4860. This was the only standard PC/AT motherboard ever made with both an Intel 80486 and an (optional) Intel i860 processor. While the 80486 was always required, the i860 could either run an independent lightweight operating system or serve as a more conventional co-processor.
Hauppauge no longer produces motherboards, focusing instead on the TV card market.
Product lines
Digital Terrestrial/Satellite
Hauppauge digital terrestrial and satellite products capture DVB-T and DVB-S broadcasts respectively without the need to re-encode the streams. There are several benefits from this approach:
the cost of the TV card can be lower because there is no need to supply an MPEG-2 encoder
the quality of captures can be higher because there is no need to re-encode streams
ratio of file size to quality is higher due to the broadcasters' high-efficiency encoders
Until August 2004 all of Hauppauge's DVB products were badge-engineered TechnoTrend products. The first of the new Hauppauge-designed cards was the Nova-t PCI 90002, and the silent replacement of the TechnoTrend model caused confusion and anger among Hauppauge's customers, who found that the new card did not support TechnoTrend's proprietary interfaces. This rendered any existing third-party software unusable with the new cards. The new cards also came with a software package called WinTV2000, which lacked features that TechnoTrend's software had, including a seven-day EPG, digital teletext and LCN-based channel ordering. The new cards supported Microsoft's BDA standard, but at the time this was in its infancy and very few third-party applications supported it.
By 2005, all of the TechnoTrend products had been removed from the Hauppauge lineup, with the exception of the DEC2000-t and DEC3000-s which haven't seen a replacement.
Hybrid Video Recorders
The Hybrid Video Recorder (HVR) range captures a combination of different broadcast types. The majority of Hauppauge HVR models capture analogue PAL and DVB-T, but there have been some more recent models which capture analogue NTSC and ATSC, as well as a tri-mode card which supports analogue PAL, DVB-S and DVB-T.
HVR-9xx devices are bus-powered USB 2.0 sticks, not much larger than a USB flash drive. They have support for analogue and digital terrestrial TV. The HVR-9xx sticks are produced in Taiwan by Deltron, and are also sold for Apple computers by Elgato under the EyeTV brand.
HVR-1xxx devices are PCI-based products that receive analogue and digital terrestrial TV. They are similar to the HVR-9xx but have support for NICAM or dbx Stereo for analogue terrestrial on all models.
HVR-3xxx and 4xxx devices are tri-mode and quad-mode devices respectively. Tri-mode means support for analogue terrestrial/cable, digital terrestrial and DVB-S digital satellite. Quad-mode devices additionally support DVB-S2 HD digital satellite. The HVR-4000 marks a change in bundled applications in that instead of using Hauppauge's WinTV2000 package, it ships with Cyberlink PowerCinema.
Personal Video Recorders
The Personal Video Recorder (PVR) range uses an on-board MPEG/MPEG-2 encoder to compress the incoming analogue TV signals. The benefits of using a hardware encoder include lower CPU usage when encoding live TV.
The first WinTV-PVR product was the WinTV PVR-PCI, launched in late 2000 and not receiving any driver updates since February 2002. It was joined by the WinTV PVR-USB, which has two variants. The first variant supported MPEG-2 streams up to 6 Mbit/s and supported Half-D1 resolutions (320 × 480). This was replaced by an updated model supporting up to 12 Mbit/s streams and Full-D1 resolution (720 × 480).
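To put the quoted bitrates in perspective, the short sketch below converts a constant MPEG-2 bitrate into an approximate recording size per hour; the arithmetic is elementary, and the specific figures are simply the 6 and 12 Mbit/s limits mentioned above:

```python
GIB = 1024 ** 3  # bytes per gibibyte

def gib_per_hour(mbit_per_s: float) -> float:
    """Approximate recording size, in GiB per hour, at a constant bitrate."""
    bytes_per_hour = mbit_per_s * 1_000_000 / 8 * 3600
    return bytes_per_hour / GIB

for label, rate in [("first PVR-USB variant (6 Mbit/s)", 6),
                    ("updated PVR-USB model (12 Mbit/s)", 12)]:
    print(f"{label}: about {gib_per_hour(rate):.1f} GiB per hour")
```

At those maximum settings this works out to roughly 2.5 and 5 GiB per hour of recording, respectively.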
The first WinTV-PVR to gain popularity was the PVR-250. The original version of the PVR-250 was a variant of the Sag Harbor (PVR-350), which used the ivac15 chipset. Although the chipset was able to do hardware decoding, the video-out components were not included on the card. In later versions of the PVR-250 the ivac15 was replaced with the ivac16 to reduce cost and to relieve heat issues. The PVR-250 and PVR-350 were joined by the USB 2.0 PVR-USB2 to complete their generation of devices.
Their successors, the PVR-150 and PVR-500, were released alongside the PVR-250/350/USB2 and, while popular with both OEMs and the general public, have suffered numerous driver issues as well as video-quality complaints. The PVR-500 was released as a Media Center card and was not supplied with Hauppauge's WinTV2000 software. It was effectively two PVR-150s on a single board, connected via a PCI-PCI bridge chip. The PVR-USB2 was silently replaced with the PVR-USB2+, which is identical both visually and in terms of features, but uses a Conexant chipset rather than the Philips chipset of the old model.
From its name and time of release, the PVR-160 appears to be newer than the PVR-150 but it is not. The PVR-160 is a repackaging of the WinTV Roslyn. The Roslyn is based on the Conexant Blackbird design and uses the CX2388x video decoder. This board was originally available only to OEMs and third-party software vendors such as Frey Technologies (SageTV) and Snapstream (BeyondTV). The board was sold under many names including the PVR-250BTV (Snapstream). This card is known to have color and brightness issues that can be corrected somewhat using registry hacks. Hauppauge received a large surplus amount of these cards from OEM and third-party vendors. The cards were repackaged with an MCE remote and receiver and rebranded the PVR-160. The PVR-160 was often mistakenly referred to as the PVR-250MCE but is not related to the PVR-250.
High-Definition Personal Video Recorder
In May 2008, Hauppauge released the HD-PVR, a USB 2.0 device with an on-board H.264 hardware encoder for recording from high-definition sources through component inputs. It was the world's first USB device that could capture in high definition. The HD-PVR has proved to be a very popular device, and Hauppauge has updated its drivers and software continually since its release. In addition to capturing from any component video source in 480p, 720p, or 1080i, the HD-PVR comes with an IR blaster that communicates with a cable or satellite set-top box for automated program recordings and channel changing. In 2012, Hauppauge released the HD-PVR Gaming Edition 2, which features a much smaller design than its predecessor along with 1080p HDMI support. The HD-PVR is not officially supported on Macintosh systems, but a variety of third-party programs allow it to function on OS X, including EyeTV by Elgato and HDPVRCapture. In 2013, Hauppauge released an upgrade for the existing HD-PVR 2 with the HD-PVR 2 Gaming Edition Plus, which supports Macintosh systems.
WinTV Analogue
The standard analogue range of products uses software encoding for recording analogue TV. The more recent Hauppauge cards use SoftPVR, which allows MPEG and MPEG-2 encoding in software provided that a sufficiently fast CPU is installed in the system.
MediaMVP
The MediaMVP is a thin client device that displays music, video and pictures (hence "MVP") on a television. It is based on an IBM PowerPC RISC processor specialized for multimedia decoding. The operating system is a form of Linux, and everything (including the menus) is served to the device via Ethernet or, on newer devices, 802.11g wireless LAN from the server PC.
Various open source software products can use the device as a front-end. An example is MVPMC, which allows the MediaMVP to be used as a front-end for MythTV or ReplayTV.
Table of products
WinTV software
Hauppauge's principal software offering is WinTV, a TV tuning, viewing, and recording application supplied on a CD-ROM included with tuner hardware. A previous version was called WinTV2000 (WinTV32 without skins). It had companion applications, including WinTV Scheduler, which performs timed recordings, and WinTV Radio, which receives FM radio. It was modified towards a service-based software package, with card management and recordings taken care of by the "TV Server" service and EPG data collection by the "EPG Service", allowing WinTV2000 to work with multiple Hauppauge tuners in the same PC.
In 2007 Hauppauge launched WinTV Version 6, followed in 2009 by WinTV7; WinTV8 was the current version at the time of writing. WinTV updates are available without charge to Hauppauge tuner users (major updates require access to a qualifying earlier WinTV installation CD, e.g. WinTV8 requires a CD not earlier than WinTV7). An option available at extra cost, WinTV Extend, allows TV to be streamed over the Internet to portable devices such as smartphones, and to PCs.
Wing
"Wing", a supplemental software application from Hauppauge, allows the company's PVR products to convert MPEG recordings into formats suitable for playback on the Apple iPod, Sony PSP or a DivX player; it converts MPEG-2 videos into H.264, MPEG-4 and DivX.
Third-party software
Third-party programs which support Hauppauge tuners include: GB-PVR, InterVideo WinDVR, Snapstream's Beyond TV, SageTV, Windows Media Center and the Linux-based MythTV.
Linux
Hauppauge offers limited support for Linux, with Ubuntu repositories and firmware downloads available on its website. There are drivers available from non-Hauppauge sources for most of the company's cards (in IVTV and LinuxTV). It appears that some of these drivers (Nova and HVR) are written by a Hauppauge engineer.
The PVR-150 captures video on Linux, but there are reportedly difficulties getting the remote control and IR blaster to work. Also, a January 2007 product substitution of the HVR-1600 in PVR-150 retail boxes forced many Linux users to exchange their purchases, because the Linux driver had not been updated for the HVR-1600.
SageTV Media Center for Linux supports PVR-150, PVR-250, PVR-350, PVR-500 and MediaMVP.
For ATSC and DVB applications, a list of Linux supported Hauppauge and other makes of TV cards can be found on the LinuxTVWiki page (see "Supported Hardware" section).
External links
Hauppauge UK
Hauppauge UK Support Forum
PCTV Systems
SageTV (a vendor of products based on Hauppauge hardware)
SHS-PVR Unofficial WinTV-PVR & MediaMVP forums
usbvision (partially functional Linux driver for WinTV-USB)
The Hauppauge 4860 Motherboard in Detail
WinTV-PVR Family Identification
References
1983 establishments in New York (state)
American companies established in 1983
Digital video recorders
Islip (town), New York
Smithtown, New York
Computer companies of the United States | Hauppauge Computer Works | Technology | 4,229 |
15,732,918 | https://en.wikipedia.org/wiki/The%20eclipse%20of%20Darwinism | Julian Huxley used the phrase "the eclipse of Darwinism" to describe the state of affairs prior to what he called the "modern synthesis". During the "eclipse", evolution was widely accepted in scientific circles but relatively few biologists believed that natural selection was its primary mechanism. Historians of science such as Peter J. Bowler have used the same phrase as a label for the period within the history of evolutionary thought from the 1880s to around 1920, when alternatives to natural selection were developed and explored—as many biologists considered natural selection to have been a wrong guess on Charles Darwin's part, or at least to be of relatively minor importance.
Four major alternatives to natural selection were in play in the 19th century:
Theistic evolution, the belief that God directly guided evolution
Neo-Lamarckism, the idea that evolution was driven by the inheritance of characteristics acquired during the life of the organism
Orthogenesis, the belief that organisms were affected by internal forces or laws of development that drove evolution in particular directions
Mutationism, the idea that evolution was largely the product of mutations that created new forms or species in a single step.
Theistic evolution had largely disappeared from the scientific literature by the end of the 19th century as direct appeals to supernatural causes came to be seen as unscientific. The other alternatives had significant followings well into the 20th century; mainstream biology largely abandoned them only when developments in genetics made them seem increasingly untenable, and when the development of population genetics and the modern synthesis demonstrated the explanatory power of natural selection. Ernst Mayr wrote that as late as 1930 most textbooks still emphasized such non-Darwinian mechanisms.
Context
Evolution was widely accepted in scientific circles within a few years after the publication of On the Origin of Species, but there was much less acceptance of natural selection as its driving mechanism. Six objections were raised to the theory in the 19th century:
The fossil record was discontinuous, suggesting gaps in evolution.
The physicist Lord Kelvin calculated in 1862 that the Earth would have cooled in 100 million years or less from its formation, too little time for evolution.
It was argued that many structures were nonadaptive (functionless), so they could not have evolved under natural selection.
Some structures seemed to have evolved on a regular pattern, like the eyes of unrelated animals such as the squid and mammals.
Natural selection was argued not to be creative, while variation was admitted to be mostly not of value.
The engineer Fleeming Jenkin correctly noted in 1868, reviewing The Origin of Species, that the blending inheritance favoured by Charles Darwin would oppose the action of natural selection.
Both Darwin and his close supporter Thomas Henry Huxley freely admitted, too, that selection might not be the whole explanation; Darwin was prepared to accept a measure of Lamarckism, while Huxley was comfortable with both sudden (mutational) change and directed (orthogenetic) evolution.
By the end of the 19th century, criticism of natural selection had reached the point that in 1903 the German botanist Eberhard Dennert edited a series of articles intended to show that "Darwinism will soon be a thing of the past, a matter of history; that we even now stand at its death-bed, while its friends are solicitous only to secure for it a decent burial." In 1907, the Stanford University entomologist Vernon Lyman Kellogg, who supported natural selection, asserted that "... the fair truth is that the Darwinian selection theory, considered with regard to its claimed capacity to be an independently sufficient mechanical explanation of descent, stands today seriously discredited in the biological world." He added, however, that there were problems preventing the widespread acceptance of any of the alternatives, as large mutations seemed too uncommon, and there was no experimental evidence of mechanisms that could support either Lamarckism or orthogenesis. Ernst Mayr wrote that a survey of evolutionary literature and biology textbooks showed that as late as 1930 the belief that natural selection was the most important factor in evolution was a minority viewpoint, with only a few population geneticists being strict selectionists.
Motivation for alternatives
A variety of different factors motivated people to propose other evolutionary mechanisms as alternatives to natural selection, some of them dating back before Darwin's Origin of Species. Natural selection, with its emphasis on death and competition, did not appeal to some naturalists because they felt it was immoral, and left little room for teleology or the concept of progress in the development of life. Some of these scientists and philosophers, like St. George Jackson Mivart and Charles Lyell, who came to accept evolution but disliked natural selection, raised religious objections. Others, such as Herbert Spencer, the botanist George Henslow (son of Darwin's mentor John Stevens Henslow, also a botanist), and Samuel Butler, felt that evolution was an inherently progressive process that natural selection alone was insufficient to explain. Still others, including the American paleontologists Edward Drinker Cope and Alpheus Hyatt, had an idealist perspective and felt that nature, including the development of life, followed orderly patterns that natural selection could not explain.
Another factor was the rise of a new faction of biologists at the end of the 19th century, typified by the geneticists Hugo de Vries and Thomas Hunt Morgan, who wanted to recast biology as an experimental laboratory science. They distrusted the work of naturalists like Darwin and Alfred Russel Wallace, which depended on field observations of variation, adaptation, and biogeography, considering it overly anecdotal. Instead they focused on topics like physiology and genetics that could be investigated with controlled experiments in the laboratory, and discounted natural selection and the degree to which organisms were adapted to their environment, which could not easily be tested experimentally.
Anti-Darwinist theories during the eclipse
Theistic evolution
British science developed in the early 19th century on a basis of natural theology which saw the adaptation of fixed species as evidence that they had been specially created to a purposeful divine design. The philosophical concepts of German idealism inspired concepts of an ordered plan of harmonious creation, which Richard Owen reconciled with natural theology as a pattern of homology showing evidence of design. Similarly, Louis Agassiz saw a recapitulation theory, which held that the embryological development of an organism repeats its evolutionary history, as symbolising a pattern of the sequence of creations in which humanity was the goal of a divine plan. In 1844 Vestiges adapted Agassiz's concept into theistic evolutionism. Its anonymous author Robert Chambers proposed a "law" of divinely ordered progressive development, with transmutation of species as an extension of recapitulation theory. This popularised the idea, but it was strongly condemned by the scientific establishment. Agassiz remained forcefully opposed to evolution, and after he moved to America in 1846 his idealist argument from design of orderly development became very influential. In 1858 Owen cautiously proposed that this development could be a real expression of a continuing creative law, but distanced himself from transmutationists. Two years later, in his review of On the Origin of Species, Owen attacked Darwin while at the same time openly supporting evolution, expressing belief in a pattern of transmutation by law-like means. This idealist argument from design was taken up by other naturalists such as George Jackson Mivart and the Duke of Argyll, who rejected natural selection altogether in favor of laws of development that guided evolution down preordained paths.
Many of Darwin's supporters accepted evolution on the basis that it could be reconciled with design. In particular, Asa Gray considered natural selection to be the main mechanism of evolution and sought to reconcile it with natural theology. He proposed that natural selection could be a mechanism by which the evil of suffering produced the greater good of adaptation, but conceded that this had difficulties and suggested that God might influence the variations on which natural selection acted in order to guide evolution. For Darwin and Thomas Henry Huxley such pervasive supernatural influence was beyond scientific investigation, and George Frederick Wright, an ordained minister who was Gray's colleague in developing theistic evolution, emphasised the need to look for secondary or known causes rather than invoking supernatural explanations: "If we cease to observe this rule there is an end to all science and all sound science."
A secular version of this methodological naturalism was welcomed by a younger generation of scientists who sought to investigate natural causes of organic change, and rejected theistic evolution in science. By 1872 Darwinism in its broader sense of the fact of evolution was accepted as a starting point. Around 1890 only a few older men held onto the idea of design in science, and it had completely disappeared from mainstream scientific discussions by 1900. There was still unease about the implications of natural selection, and those seeking a purpose or direction in evolution turned to neo-Lamarckism or orthogenesis as providing natural explanations.
Neo-Lamarckism
Jean-Baptiste Lamarck had originally proposed a theory on the transmutation of species that was largely based on a progressive drive toward greater complexity. Lamarck also believed, as did many others in the 19th century, that characteristics acquired during the course of an organism's life could be inherited by the next generation, and he saw this as a secondary evolutionary mechanism that produced adaptation to the environment. Typically, such characteristics included changes caused by the use or disuse of a particular organ. It was this mechanism of evolutionary adaptation through the inheritance of acquired characteristics that much later came to be known as Lamarckism. Although Alfred Russel Wallace completely rejected the concept in favor of natural selection, Darwin always included what he called "Effects of the increased Use and Disuse of Parts, as controlled by Natural Selection" in On the Origin of Species, giving examples such as large ground-feeding birds getting stronger legs through exercise and weaker wings from not flying until, like the ostrich, they could not fly at all.
In the late 19th century the term neo-Lamarckism came to be associated with the position of naturalists who viewed the inheritance of acquired characteristics as the most important evolutionary mechanism. Advocates of this position included the British writer and Darwin critic Samuel Butler, the German biologist Ernst Haeckel, the American paleontologists Edward Drinker Cope and Alpheus Hyatt, and the American entomologist Alpheus Packard. They considered Lamarckism to be more progressive and thus philosophically superior to Darwin's idea of natural selection acting on random variation. Butler and Cope both believed that this allowed organisms to effectively drive their own evolution, since organisms that developed new behaviors would change the patterns of use of their organs and thus kick-start the evolutionary process. In addition, Cope and Haeckel both believed that evolution was a progressive process. The idea of linear progress was an important part of Haeckel's recapitulation theory. Cope and Hyatt looked for, and thought they found, patterns of linear progression in the fossil record. Packard argued that the loss of vision in the blind cave insects he studied was best explained through a Lamarckian process of atrophy through disuse combined with inheritance of acquired characteristics.
Many American proponents of neo-Lamarckism were strongly influenced by Louis Agassiz, and a number of them, including Hyatt and Packard, were his students. Agassiz had an idealistic view of nature, connected with natural theology, that emphasized the importance of order and pattern. Agassiz never accepted evolution; his followers did, but they continued his program of searching for orderly patterns in nature, which they considered to be consistent with divine providence, and preferred evolutionary mechanisms like neo-Lamarckism and orthogenesis that would be likely to produce them.
In Britain the botanist George Henslow, the son of Darwin's mentor John Stevens Henslow, was an important advocate of neo-Lamarckism. He studied how environmental stress affected the development of plants, and he wrote that the variations induced by such environmental factors could largely explain evolution. The historian of science Peter J. Bowler writes that, as was typical of many 19th century Lamarckians, Henslow did not appear to understand the need to demonstrate that such environmentally induced variations would be inherited by descendants that developed in the absence of the environmental factors that produced them, but merely assumed that they would be.
Polarising the argument: Weismann's germ plasm
Critics of neo-Lamarckism pointed out that no one had ever produced solid evidence for the inheritance of acquired characteristics. The experimental work of the German biologist August Weismann resulted in the germ plasm theory of inheritance. This led him to declare that inheritance of acquired characteristics was impossible, since the Weismann barrier would prevent any changes that occurred to the body after birth from being inherited by the next generation. This effectively polarised the argument between the Darwinians and the neo-Lamarckians, as it forced people to choose whether to agree or disagree with Weismann and hence with evolution by natural selection. Despite Weismann's criticism, neo-Lamarckism remained the most popular alternative to natural selection at the end of the 19th century, and would remain the position of some naturalists well into the 20th century.
Baldwin effect
As a consequence of the debate over the viability of neo-Lamarckism, in 1896 James Mark Baldwin, Henry Fairfield Osborn and C. Lloyd Morgan all independently proposed a mechanism whereby new learned behaviors could cause the evolution of new instincts and physical traits through natural selection without resorting to the inheritance of acquired characteristics. They proposed that if individuals in a species benefited from learning a particular new behavior, the ability to learn that behavior could be favored by natural selection, and the result would be the evolution of new instincts and eventually new physical adaptations. This became known as the Baldwin effect and it has remained a topic of debate and research in evolutionary biology ever since.
Orthogenesis
Orthogenesis was the theory that life has an innate tendency to change, in a unilinear fashion in a particular direction. The term was popularized by Theodor Eimer, a German zoologist, in his 1898 book On Orthogenesis: And the Impotence of Natural Selection in Species Formation. He had studied the coloration of butterflies, and believed he had discovered non-adaptive features which could not be explained by natural selection. Eimer also believed in Lamarckian inheritance of acquired characteristics, but he felt that internal laws of growth determined which characteristics would be acquired and guided the long term direction of evolution down certain paths.
Orthogenesis had a significant following in the 19th century, its proponents including the Russian biologist Leo S. Berg, and the American paleontologist Henry Fairfield Osborn. Orthogenesis was particularly popular among some paleontologists, who believed that the fossil record showed patterns of gradual and constant unidirectional change. Those who accepted this idea, however, did not necessarily accept that the mechanism driving orthogenesis was teleological (goal-directed). They did believe that orthogenetic trends were non-adaptive; in fact they felt that in some cases they led to developments that were detrimental to the organism, such as the large antlers of the Irish elk that they believed led to the animal's extinction.
Support for orthogenesis began to decline during the modern synthesis in the 1940s, when it became apparent that orthogenesis could not explain the complex branching patterns of evolution revealed by statistical analysis of the fossil record by paleontologists. A few biologists however hung on to the idea of orthogenesis as late as the 1950s, claiming that the processes of macroevolution, the long term trends in evolution, were distinct from the processes of microevolution.
Mutationism
Mutationism was the idea that new forms and species arose in a single step as a result of large mutations. It was seen as a much faster alternative to the Darwinian concept of a gradual process of small random variations being acted on by natural selection. It was popular with early geneticists such as Hugo de Vries, who along with Carl Correns helped rediscover Gregor Mendel's laws of inheritance in 1900; William Bateson, a British zoologist who switched to genetics; and, early in his career, Thomas Hunt Morgan.
The 1901 mutation theory of evolution held that species went through periods of rapid mutation, possibly as a result of environmental stress, that could produce multiple mutations, and in some cases completely new species, in a single generation. Its originator was the Dutch botanist Hugo de Vries. De Vries looked for evidence of mutation extensive enough to produce a new species in a single generation and thought he found it with his work breeding the evening primrose of the genus Oenothera, which he started in 1886. The plants that de Vries worked with seemed to be constantly producing new varieties with striking variations in form and color, some of which appeared to be new species because plants of the new generation could only be crossed with one another, not with their parents. DeVries himself allowed a role for natural selection in determining which new species would survive, but some geneticists influenced by his work, including Morgan, felt that natural selection was not necessary at all. De Vries's ideas were influential in the first two decades of the 20th century, as some biologists felt that mutation theory could explain the sudden emergence of new forms in the fossil record; research on Oenothera spread across the world. However, critics including many field naturalists wondered why no other organism seemed to show the same kind of rapid mutation.
Morgan was a supporter of de Vries's mutation theory and was hoping to gather evidence in favor of it when he started working with the fruit fly Drosophila melanogaster in his lab in 1907. However, it was a researcher in that lab, Hermann Joseph Muller, who determined in 1918 that the new varieties de Vries had observed while breeding Oenothera were the result of polyploid hybrids rather than rapid genetic mutation. While they were doubtful of the importance of natural selection, the work of geneticists like Morgan, Bateson, de Vries and others from 1900 to 1915 established Mendelian genetics linked to chromosomal inheritance, which validated August Weismann's criticism of neo-Lamarckian evolution by discounting the inheritance of acquired characteristics. The work in Morgan's lab with Drosophila also undermined the concept of orthogenesis by demonstrating the random nature of mutation.
End of the eclipse
During the period 1916–1932, the discipline of population genetics developed largely through the work of the geneticists Ronald Fisher, J.B.S. Haldane, and Sewall Wright. Their work recognized that the vast majority of mutations produced small effects that served to increase the genetic variability of a population rather than creating new species in a single step as the mutationists assumed. They were able to produce statistical models of population genetics that included Darwin's concept of natural selection as the driving force of evolution.
Developments in genetics persuaded field naturalists such as Bernhard Rensch and Ernst Mayr to abandon neo-Lamarckian ideas about evolution in the early 1930s. By the late 1930s, Mayr and Theodosius Dobzhansky had synthesized the ideas of population genetics with the knowledge of field naturalists about the amount of genetic diversity in wild populations, and the importance of genetically distinct subpopulations (especially when isolated from one another by geographical barriers) to create the early 20th century modern synthesis. In 1944 George Gaylord Simpson integrated paleontology into the synthesis by statistically analyzing the fossil record to show that it was consistent with the branching non-directional form of evolution predicted by the synthesis, and in particular that the linear trends cited by earlier paleontologists in support of Lamarckism and orthogenesis did not stand up to careful analysis. Mayr wrote that by the end of the synthesis natural selection together with chance mechanisms like genetic drift had become the universal explanation for evolutionary change.
Historiography
The concept of eclipse suggests that Darwinian research paused, implying in turn that there had been a preceding period of vigorously Darwinian activity among biologists. However, historians of science such as Mark Largent have argued that while biologists broadly accepted the extensive evidence for evolution presented in The Origin of Species, there was less enthusiasm for natural selection as a mechanism. Biologists instead looked for alternative explanations more in keeping with their worldviews, which included the beliefs that evolution must be directed and that it constituted a form of progress. Further, the idea of a dark eclipse period was convenient to scientists such as Julian Huxley, who wished to paint the modern synthesis as a bright new achievement, and accordingly to depict the preceding period as dark and confused. Huxley's 1942 book Evolution: The Modern Synthesis therefore suggested, argued Largent, that the so-called modern synthesis began after a long period of eclipse lasting until the 1930s, in which Mendelians, neo-Lamarckians, mutationists, and Weismannians, not to mention experimental embryologists and Haeckelian recapitulationists, fought running battles with each other. The idea of an eclipse also allowed Huxley to step aside from what was to him the inconvenient association of evolution with aspects such as social Darwinism, eugenics, imperialism, and militarism. Accounts such as Michael Ruse's very large book Monad to Man ignored, claimed Largent, almost all the early 20th century American evolutionary biologists. Largent has suggested as an alternative to eclipse a biological metaphor, the interphase of Darwinism, interphase being an apparently quiet period in the cycle of cell division and growth.
See also
Coloration evidence for natural selection
Objections to evolution
Notes
References
Sources
Evolutionary biology
History of science
Non-Darwinian evolution | The eclipse of Darwinism | Technology,Biology | 4,441 |
1,670,763 | https://en.wikipedia.org/wiki/Inrush%20current | Inrush current, input surge current, or switch-on surge is the maximal instantaneous input current drawn by an electrical device when first turned on. Alternating-current electric motors and transformers may draw several times their normal full-load current when first energized, for a few cycles of the input waveform. Power converters also often have inrush currents much higher than their steady-state currents, due to the charging current of the input capacitance. The selection of over-current-protection devices such as fuses and circuit breakers is made more complicated when high inrush currents must be tolerated. The over-current protection must react quickly to overload or short-circuit faults but must not interrupt the circuit when the (usually harmless) inrush current flows.
Capacitors
A discharged or partially charged capacitor appears as a short circuit to the source when the source voltage is higher than the potential of the capacitor. A fully discharged capacitor will take approximately 5 RC time periods to fully charge; during the charging period, instantaneous current can exceed steady-state current by a substantial multiple. Instantaneous current declines to steady-state current as the capacitor reaches full charge. In the case of open circuit, the capacitor will be charged to the peak AC voltage (one cannot actually charge a capacitor with AC line power, so this refers to a varying but unidirectional voltage; e.g., the voltage output from a rectifier).
In the case of charging a capacitor from a linear DC voltage, like that from a battery, the capacitor will still appear as a short circuit; it will draw current from the source limited only by the internal resistance of the source and the equivalent series resistance (ESR) of the capacitor. In this case, the charging current will be continuous and decline exponentially to the load current. For an open circuit, the capacitor will be charged to the DC voltage.
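As a rough numerical illustration of this charging transient, the short script below computes the exponential decay of the charging current for assumed values (24 V supply, 10 Ω total series resistance, 4,700 µF capacitor); the figures are arbitrary examples, not taken from any source.

```python
import math

V = 24.0       # supply voltage in volts (assumed value)
R = 10.0       # total series resistance: source resistance + capacitor ESR, in ohms (assumed)
C = 4700e-6    # filter capacitance in farads (assumed)

tau = R * C            # RC time constant
i_peak = V / R         # at t = 0 the discharged capacitor looks like a short circuit

# Charging current decays exponentially: i(t) = (V/R) * exp(-t / RC)
for n in range(6):     # 0 to 5 time constants; ~5 RC is conventionally "fully charged"
    t = n * tau
    i = i_peak * math.exp(-t / tau)
    print(f"t = {t * 1000:6.1f} ms   i = {i:6.3f} A")
```

With these assumed values the initial inrush is 2.4 A, falling below about 20 mA after five time constants (roughly a quarter of a second), which matches the "5 RC" rule of thumb above.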
Limiting the inrush current drawn while the filter capacitor charges is important for the reliable operation of the device. Temporarily introducing a high resistance between the input power and the rectifier raises the effective source resistance at power-up and so reduces the inrush current. An inrush current limiter is often used for this purpose, since it provides the necessary initial resistance.
Transformers
When a transformer is first energized, a transient current up to 10 to 15 times larger than the rated transformer current can flow for several cycles. Toroidal transformers, which use less copper for the same power handling, can have inrush currents of up to 60 times the running current.
Worst-case inrush happens when the primary winding is connected at an instant around the zero crossing of the primary voltage (which for a pure inductance would be the current maximum in the AC cycle) and when the voltage half-cycle has the same polarity as the remanence left in the iron core (the magnetic remanence having been left high by a preceding half cycle). Unless the windings and core are sized so that the flux normally never exceeds 50% of saturation (and in an efficient transformer they never are, since such a construction would be overly heavy and inefficient), the core will saturate during such a start-up. Put another way, the remanent magnetism in normal operation is nearly as high as the saturation magnetism at the "knee" of the hysteresis loop. Once the core saturates, the winding inductance appears greatly reduced, and only the resistance of the primary-side windings and the impedance of the power line limit the current. Because saturation occurs for only part of each half-cycle, harmonic-rich waveforms can be generated and can cause problems for other equipment.
For large transformers with low winding resistance and high inductance, these inrush currents can last for several seconds until the transient has died away (the decay time is proportional to XL/R) and the regular AC equilibrium is established. Magnetic inrush can be avoided by synchronously connecting the transformer near a supply voltage peak, but only for transformers with an air gap in the core; this is in contrast with zero-voltage switching, which is desirable for minimizing sharp-edged current transients with resistive loads such as high-power heaters. For toroidal transformers, only a premagnetising procedure carried out before switch-on allows them to be started without any inrush-current peak.
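The flux build-up behind this worst case can be sketched with a textbook approximation (not specific to any particular transformer); here N is the number of primary turns, φmax the normal peak flux and φr the remanent flux.

```latex
% Starting the sinusoidal voltage v(t) at a zero crossing, the flux over the first half cycle is
\phi(t) = \phi_r + \frac{1}{N}\int_0^{t} v(\tau)\,\mathrm{d}\tau
\quad\Longrightarrow\quad
\phi_{\mathrm{peak}} \approx \phi_r + 2\,\phi_{\mathrm{max}}
```

A demand approaching 2φmax + φr, on a core normally sized for little more than φmax, is what drives the core into saturation and produces the current peaks described above.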
Inrush current can be divided into three categories:
Energization inrush current results from energizing or re-energizing a transformer; the residual flux in this case may be zero or non-zero, depending on the energization timing.
Recovery inrush current flows when the transformer voltage is restored after having been reduced by a system disturbance.
Sympathetic inrush current flows when several transformers are connected to the same line and one of them is energized.
Motors
When an electric motor, AC or DC, is first energized, the rotor is not moving, and a current equivalent to the stalled current will flow, reducing as the motor picks up speed and develops a back EMF to oppose the supply. AC induction motors behave as transformers with a shorted secondary until the rotor begins to move, while brushed motors present essentially the winding resistance. The duration of the starting transient is less if the mechanical load on the motor is relieved until it has picked up speed.
For high-power motors, the winding configuration may be changed (wye at start and then delta) during start-up to reduce the current drawn.
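The effect of the wye (star) start can be quantified with the standard textbook relations, given here only as a worked illustration.

```latex
% In the wye connection each winding sees the line voltage reduced by \sqrt{3}:
V_{\mathrm{winding}} = \frac{V_{\mathrm{line}}}{\sqrt{3}}
% so the starting line current (and the starting torque) falls to one third
% of the direct-on-line delta value:
I_{\mathrm{line,\,wye}} = \tfrac{1}{3}\, I_{\mathrm{line,\,delta}}
```

Because the starting torque falls by the same factor of three, the method suits motors that start lightly loaded, consistent with the remark above about relieving the mechanical load during start-up.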
Heaters and filament lamps
Metals have a positive temperature coefficient of resistance; they have lower resistance when cold. Any electrical load that contains a substantial component of metallic resistive heating elements, such as an electric kiln or a bank of tungsten-filament incandescent bulbs, will draw a high current until the metallic element reaches operating temperature. For example, wall switches intended to control incandescent lamps will have a "T" rating, indicating that they can safely control circuits with the large inrush currents of incandescent lamps. The inrush may be as much as 14 times the steady-state current and may persist for a few milliseconds for smaller lamps, and for up to several seconds for lamps of 500 watts or more. (Non-graphitized) carbon-filament lamps, rarely used now, have a negative temperature coefficient and draw more current as they warm up; an "inrush" current is not found with these types.
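As a worked example with an assumed lamp rating (a 100 W, 230 V tungsten lamp, using the 14:1 ratio quoted above):

```latex
R_{\mathrm{hot}} = \frac{V^2}{P} = \frac{230^2}{100} \approx 529\ \Omega,
\qquad
I_{\mathrm{steady}} = \frac{230}{529} \approx 0.43\ \mathrm{A}
\\
R_{\mathrm{cold}} \approx \frac{R_{\mathrm{hot}}}{14} \approx 38\ \Omega,
\qquad
I_{\mathrm{inrush}} \approx \frac{230}{38} \approx 6\ \mathrm{A}
```

So a lamp that draws well under half an ampere in service can briefly demand several amperes at switch-on, which is why the "T" rating matters.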
Protection
A resistor in series with the line can be used to limit the current charging input capacitors. However, this approach is not very efficient, especially in high-power devices, since the resistor will have a voltage drop and dissipate some power.
Inrush current can also be reduced by inrush current limiters. Negative-temperature-coefficient (NTC) thermistors are commonly used in switching power supplies, motor drives and audio equipment to prevent damage caused by inrush current. A thermistor is a thermally-sensitive resistor with a resistance that changes significantly and predictably as a result of temperature changes. The resistance of an NTC thermistor decreases as its temperature increases.
At switch-on the inrush current limiter is cold and its resistance is high, so only a relatively small current flows to charge the input capacitors. That current warms the limiter and its resistance begins to drop. After the capacitors in the power supply have charged, the self-heated inrush current limiter offers little resistance in the circuit, with a low voltage drop relative to the total voltage drop of the circuit. A disadvantage is that immediately after the device is switched off, the NTC resistor is still hot and has a low resistance; it cannot limit the inrush current again until it has cooled for a minute or more and regained a higher resistance. Another disadvantage is that the NTC thermistor is not short-circuit-proof.
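The temperature dependence that makes this work is usually described by the B-parameter model. The sketch below uses assumed part values (10 Ω at 25 °C, B = 3000 K) chosen only for illustration; it is not the datasheet of any particular device.

```python
import math

R25 = 10.0          # resistance at 25 degC in ohms (assumed rating)
B = 3000.0          # B (beta) constant in kelvin (assumed rating)
T0 = 25.0 + 273.15  # reference temperature in kelvin

def ntc_resistance(temp_c: float) -> float:
    """B-parameter model: R(T) = R25 * exp(B * (1/T - 1/T0)), with T in kelvin."""
    T = temp_c + 273.15
    return R25 * math.exp(B * (1.0 / T - 1.0 / T0))

for temp in (25, 60, 100, 150):
    print(f"{temp:3d} degC -> {ntc_resistance(temp):5.2f} ohm")
```

Cold, the part presents its full 10 Ω and limits the inrush; once self-heated well above 100 °C it contributes only a fraction of an ohm, which is the steady-state behaviour described above.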
Another way to avoid transformer inrush current is a "transformer switching relay". This needs no cool-down time, can also cope with power-line half-wave voltage dips, and is short-circuit-proof. This technique is important for IEC 61000-4-11 tests.
Another option, particularly for high-voltage circuits, is to use a pre-charge circuit. The circuit would support a current-limited precharge mode during the charging of capacitors and then switch to an unlimited mode for normal operation when the voltage on the load is 90% of full charge.
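A minimal sketch of the changeover logic for such a pre-charge circuit is shown below. The 90% threshold comes from the paragraph above; the function name, path labels and voltages are invented for the example.

```python
PRECHARGE_THRESHOLD = 0.90   # switch over when the load reaches 90% of the supply voltage

def select_path(v_supply: float, v_load: float) -> str:
    """Return which path should carry current: the resistor-limited
    pre-charge path, the main (unlimited) contactor, or neither."""
    if v_supply <= 0.0:
        return "open"          # no supply present, keep everything open
    if v_load < PRECHARGE_THRESHOLD * v_supply:
        return "precharge"     # current-limited path charges the capacitors
    return "main"              # capacitors charged; close the main contactor

# Example: charging a 400 V DC bus (values assumed)
for v_load in (0.0, 150.0, 300.0, 370.0):
    print(f"{v_load:5.0f} V -> {select_path(400.0, v_load)}")
```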
Switch-off spike
When a transformer, electric motor, electromagnet, or other inductive load is switched off, the inductance drives up the voltage across the opening switch or breaker and can cause extended arcing. When a transformer is switched off on its primary side, the inductive kick produces a voltage spike on the secondary that can damage insulation and connected loads.
See also
Ripple (electrical)
References
External links
IEC 61000–4–30, Electromagnetic Compatibility (EMC) – Testing and measurement techniques – Power quality measurement methods, Published by The International Electrotechnical Commission, 2003.
Electrical parameters | Inrush current | Engineering | 1,915 |
21,080,132 | https://en.wikipedia.org/wiki/C8H6O | {{DISPLAYTITLE:C8H6O}}
The molecular formula C8H6O (molar mass: 118.13 g/mol, exact mass: 118.0419 u) may refer to:
Benzofuran
Isobenzofuran, or 2-Benzofuran
Molecular formulas | C8H6O | Physics,Chemistry | 68 |
4,576,485 | https://en.wikipedia.org/wiki/Brauer%27s%20theorem%20on%20forms | There also is Brauer's theorem on induced characters.
In mathematics, Brauer's theorem, named for Richard Brauer, is a result on the representability of 0 by forms over certain fields in sufficiently many variables.
Statement of Brauer's theorem
Let K be a field such that for every integer r > 0 there exists an integer ψ(r) such that for n ≥ ψ(r) every diagonal equation

a1x1^r + a2x2^r + ... + anxn^r = 0, with coefficients a1, ..., an in K,    (*)

has a non-trivial (i.e. not all xi are equal to 0) solution in K.
Then, for every set of positive integers r1, ..., rk and every non-negative integer l, there exists a number ω(r1, ..., rk, l) such that for n ≥ ω(r1, ..., rk, l) and for any homogeneous polynomials f1, ..., fk of degrees r1, ..., rk respectively, in n variables and with coefficients in K, there exists an l-dimensional affine subspace M of Kn (regarded as a vector space over K) on which all the forms vanish simultaneously, i.e. satisfying

f1(x) = f2(x) = ... = fk(x) = 0 for every x in M.
An application to the field of p-adic numbers
Letting K be the field of p-adic numbers in the theorem, the hypothesis concerning the equation (*) is satisfied, since the quotient group Qp*/(Qp*)^b, b a natural number, is finite. Choosing k = 1, one obtains the following corollary:
A homogeneous equation f(x1,...,xn) = 0 of degree r in the field of p-adic numbers has a non-trivial solution if n is sufficiently large.
One can show that if n is sufficiently large according to the above corollary, then n is greater than r^2. Indeed, Emil Artin conjectured that every homogeneous polynomial of degree r over Qp in more than r^2 variables represents 0. This is obviously true for r = 1, and it is well known that the conjecture is true for r = 2 (see, for example, J.-P. Serre, A Course in Arithmetic, Chapter IV, Theorem 6). See quasi-algebraic closure for further context.
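For the case r = 2 the bound in Artin's conjecture is sharp, a standard fact (covered by the Serre reference just cited) included here only as an illustration:

```latex
% Over \mathbb{Q}_2 the four-variable quadratic form
x_1^2 + x_2^2 + x_3^2 + x_4^2
% has no non-trivial zero (it is the norm form of the quaternion division algebra),
% while every quadratic form in n \ge 5 variables over any \mathbb{Q}_p has one,
% i.e. "more than r^2 = 4 variables" is exactly what is needed when r = 2.
```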
In 1950 Demyanov verified the conjecture for r = 3 and p ≠ 3, and in 1952 D. J. Lewis independently proved the case r = 3 for all primes p. But in 1966 Guy Terjanian constructed a homogeneous polynomial of degree 4 over Q2 in 18 variables that has no non-trivial zero. On the other hand, the Ax–Kochen theorem shows that for any fixed degree Artin's conjecture is true for all but finitely many Qp.
Notes
References
Diophantine equations
Theorems in number theory
P-adic numbers | Brauer's theorem on forms | Mathematics | 545 |
40,832,361 | https://en.wikipedia.org/wiki/Your%20Baby%27s%20Best%20Shot | Your Baby's Best Shot: Why Vaccines are Safe and Save Lives is a 2012 pro-vaccine book, published by Rowman and Littlefield, and written by E. Allison Hagood, a psychology professor, and Stacy Mintzer Herlihy, a freelance writer from Roseland, New Jersey. The foreword was written by Paul Offit.
Summary
The book's introduction states: "I hope that anyone reading this book will read it and gain an understanding why vaccines are so vitally important to the health and well being of all of us."
Chapter 1 describes who the authors are, their own experiences with vaccines, and what motivated them to write this book.
Chapter 2 focuses on the story of Edward Jenner and the development of the first vaccine.
Chapter 3 discusses the biological mechanisms by which vaccines work, as well as the concept of herd immunity.
Chapter 4 discusses the anti-vaccine claims about how vaccines contain "dangerous ingredients" such as formaldehyde and polymyxin B.
Reviews
Kristen Kemp of Parents called the book a "great resource", and Publishers Weekly wrote that it was "extensively researched and forceful." Another positive review came from Booklist, where Karen Springen wrote that "This thoroughly researched book should convince even ardent vaccine skeptics that the benefits of giving kids shots to prevent illnesses far outweigh any negatives," and David Gorski wrote that the book was "essential reading for all new parents with any doubts at all about vaccines."
References
2012 non-fiction books
History books about medicine
Rowman & Littlefield books
Vaccine hesitancy
Vaccine controversies | Your Baby's Best Shot | Chemistry,Biology | 324 |
1,432,190 | https://en.wikipedia.org/wiki/Paul%20Benacerraf | Paul Joseph Salomon Benacerraf (; born March 26, 1931) is a French-born American philosopher working in the field of the philosophy of mathematics who taught at Princeton University his entire career, from 1960 until his retirement in 2007. He was appointed Stuart Professor of Philosophy in 1974, and retired as the James S. McDonnell Distinguished University Professor of Philosophy.
Life and career
Benacerraf was born in Paris to a Moroccan-Venezuelan Sephardic Jewish father, Abraham Benacerraf, and Algerian Jewish mother, Henrietta Lasry. In 1939 the family moved to Caracas and then to New York City.
When the family returned to Caracas, Benacerraf remained in the United States, boarding at the Peddie School in Hightstown, New Jersey. He attended Princeton University for both his undergraduate and graduate studies.
He was elected a fellow of the American Academy of Arts and Sciences in 1998.
His brother was the Venezuelan Nobel Prize-winning immunologist Baruj Benacerraf.
Philosophical work
Benacerraf is perhaps best known for his two papers "What Numbers Could Not Be" (1965) and "Mathematical Truth" (1973), and for his anthology on the philosophy of mathematics, co-edited with Hilary Putnam.
In "What Numbers Could Not Be" (1965), Benacerraf argues against a Platonist view of mathematics, and for structuralism, on the ground that what is important about numbers is the abstract structures they represent rather than the objects that number words ostensibly refer to. In particular, this argument is based on the point that Ernst Zermelo and John von Neumann give distinct, and completely adequate, identifications of natural numbers with sets (see Zermelo ordinals and von Neumann ordinals). This argument is called Benacerraf's identification problem.
In "Mathematical Truth" (1973), he argues that no interpretation of mathematics offers a satisfactory package of epistemology and semantics; it is possible to explain mathematical truth in a way that is consistent with our syntactico-semantical treatment of truth in non-mathematical language, and it is possible to explain our knowledge of mathematics in terms consistent with a causal account of epistemology, but it is in general not possible to accomplish both of these objectives simultaneously (this argument is called Benacerraf's epistemological problem). He argues for this on the grounds that an adequate account of truth in mathematics implies the existence of abstract mathematical objects, but that such objects are epistemologically inaccessible because they are causally inert and beyond the reach of sense perception. On the other hand, an adequate epistemology of mathematics, say one that ties truth-conditions to proof in some way, precludes understanding how and why the truth-conditions have any bearing on truth.
Sexual harassment allegation
Elisabeth Lloyd has alleged that while she was a PhD student at Princeton, Benacerraf "petted and touched" her every day. She said, "It was just an extra price I had to pay, that the men did not have to pay, in order to get my Ph.D." Benacerraf has denied the allegations, stating in an email to The Chronicle that he was "genuinely puzzled" by the accusations and does not know what prompted them. "I am not the sort of person that she describes in her interview", he said. "Yet I do not doubt her sincerity or the depth of the feelings that she reports", he added.
Publications
Benacerraf, Paul (1960) Logicism, Some Considerations, Princeton, Ph.D. Dissertation, University Microfilms.
———— (1965) "What Numbers Could Not Be", The Philosophical Review, 74:47–73.
———— (1967) "God, the Devil, and Gödel" , The Monist, 51: 9–33.
———— (1973) "Mathematical Truth", The Journal of Philosophy, 70: 661–679.
———— (1981) "Frege: The Last Logicist", The Foundations of Analytic Philosophy, Midwest Studies in Philosophy, 6: 17–35.
———— (1985) "Skolem and the Skeptic", Proceedings of the Aristotelian Society, Supplementary Volume 56: 85–115.
———— and Putnam, Hilary (eds.) (1983) Philosophy of Mathematics : Selected Readings 2nd edition, Cambridge University Press: New York.
———— (1996) "Recantation or Any old ω-sequence would do after all", Philosophia Mathematica, 4: 184–189.
———— (1996) What Mathematical Truth Could Not Be – I, in Benacerraf and His Critics, A. Morton and S. P. Stich, eds., Blackwell's, Oxford and Cambridge, pp 9–59.
———— (1999) What Mathematical Truth Could Not Be – II, in Sets and Proofs, S. B. Cooper and J. K. Truss, eds., Cambridge University Press, pp. 27–51.
See also
American philosophy
List of American philosophers
References
Further reading
Books about Benacerraf
Zimmermann, Manfred (1995) Wahrheit und Wissen in der Mathematik. Das Benacerrafsche Dilemma, 1. Auflage, Transparent Verlag, Berlin.
Gupta, Anoop K. (2002) Benacerraf's Dilemma and Natural Realism for Mathematics. Ph.D. Dissertation, Ottawa University.
Papers about Benacerraf
Hilton, P. "What 'What Numbers Could Not Be', by Paul Benacerraf', is."
Lucas, J. R. (1968) "Satan stultified: a rejoinder to Paul Benacerraf", The Monist, vol.52, No.1, pp. 145–158.
Articles on Benacerraf
"Benacerraf Interview" by The Dualist and the Stanford Philosophy Department
"Whatever I am now, it happened here" by Caroline Moseley
External links
Paul Benacerraf's homepage at Princeton
The Benacerraf epistemological problem, Internet Encyclopedia of Philosophy
1931 births
Living people
American Mizrahi Jews
20th-century American Sephardic Jews
21st-century French Sephardi Jews
20th-century American philosophers
21st-century American philosophers
American people of Moroccan-Jewish descent
American people of Algerian-Jewish descent
American logicians
Analytic philosophers
Fellows of the American Academy of Arts and Sciences
Jewish philosophers
Mathematical logicians
American metaphysicians
French expatriates in Venezuela
Peddie School alumni
American philosophers of mathematics
Princeton University faculty
Structuralism (philosophy of mathematics)
Writers from Paris
American metaphysics writers
French emigrants to the United States
20th-century French Sephardi Jews
21st-century American Sephardic Jews | Paul Benacerraf | Mathematics | 1,419 |
43,806,125 | https://en.wikipedia.org/wiki/Bytemark | Bytemark is a UK-based server hosting and datacentre provider, headquartered in York, United Kingdom. It was founded in 2002, and was the first provider of virtual machines and cloud hosting through User-mode Linux in 2003.
In 2012, the company launched BigV, a public cloud platform designed in-house using open source software. In 2013, it moved into a £1.2 million datacentre and headquarters in York. In 2017, their BigV platform was renamed Bytemark Cloud. In September 2018, the company was acquired by iomart Group plc.
On 2 February 2023 Bytemark announced that it would cease to support BigV on 30 April 2023.
Environmental and ethical policies
Bytemark's datacentre uses fresh-air cooling, not common in the UK, and was shortlisted for Innovation in Medium Data Center at the DatacenterDynamics Awards EMEA 2013. Each of its servers is built using efficient power supplies as certified by the 80 Plus scheme, which requires power supplies to be at least 80% efficient at up to 100% rated load. To reduce the bias found in traditional recruitment processes, the company developed their own anonymous recruitment process in 2015.
Awards
In 2014, Bytemark was named one of the Top 50 Fastest Growing Tech Companies in the North at the Northern Tech Awards, with revenue growth of 44%. Financially, the company turned over £2.5 million in 2013. In 2014, this grew to £3 million. In 2015, the company was awarded the Fair Tax Mark.
Support of open-source projects
Bytemark has a history of contributing to and supporting free software.
They support LibreOffice through the provision of a build server. In 2009, the company became a supporter of XBMC in the same way. In 2012, they started supporting CyanogenMod with build servers.
In 2013, the company contributed hosting services worth £150,000 to the Debian project, having used Debian since the company was founded. The company also supported OpenStreetMap with DNS services and servers to support version control, mailing lists and help pages. The company also supports projects for social good, including sponsoring servers for mySociety, which operates FixMyStreet, TheyWorkForYou and WhatDoTheyKnow.
References
External links
Bytemark
Companies based in York
Companies established in 2002
Cloud computing providers
Cloud platforms
Data centers
Web hosting | Bytemark | Technology | 484 |
8,563,981 | https://en.wikipedia.org/wiki/OMDoc | OMDoc (Open Mathematical Documents) is a semantic markup format for mathematical documents. While MathML only covers mathematical formulae and the related OpenMath standard only supports formulae and “content dictionaries” containing definitions of the symbols used in formulae, OMDoc covers the whole range of written mathematics.
Coverage
OMDoc allows for mathematical expressions on three levels:
Object level: Formulae, written in Content MathML (the non-presentational subset of MathML), OpenMath or languages for mathematical logic.
Statement level: Definitions, theorems, proofs, examples and the relations between them (e.g. "this proof proves that theorem").
Theory level: A theory is a set of contextually related statements. Theories may import each other, thereby forming a graph. Seen as collections of symbol definitions, OMDoc theories are compatible with OpenMath content dictionaries.
On each level, formal syntax and informal natural language can be used, depending on the application.
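The three levels can be pictured with a small skeleton document. The sketch below builds one with Python's standard xml.etree.ElementTree; the element names (omdoc, theory, imports, symbol, definition, assertion, CMP) are intended to follow the OMDoc 1.2 specification, but the exact attributes and the example content are illustrative assumptions rather than a normative sample.

```python
import xml.etree.ElementTree as ET

# Theory level: a small theory that imports another theory and declares a symbol.
omdoc = ET.Element("omdoc")
theory = ET.SubElement(omdoc, "theory", {"xml:id": "monoid"})
ET.SubElement(theory, "imports", {"from": "magma.omdoc#magma"})

# Object level would normally carry OpenMath or Content MathML formulae;
# here only informal (CMP) text is used to keep the sketch short.
ET.SubElement(theory, "symbol", {"name": "unit"})
definition = ET.SubElement(theory, "definition", {"for": "unit"})
ET.SubElement(definition, "CMP").text = "The unit element e satisfies e*x = x*e = x."

# Statement level: an assertion about the declared symbol.
assertion = ET.SubElement(theory, "assertion", {"type": "theorem"})
ET.SubElement(assertion, "CMP").text = "The unit element of a monoid is unique."

print(ET.tostring(omdoc, encoding="unicode"))
```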
Semantics and Presentation
OMDoc is a semantic markup language that allows writing down the meaning of texts about mathematics. In contrast to LaTeX, for example, it is not primarily presentation-oriented. An OMDoc document need not specify what its contents should look like. A conversion to LaTeX and XHTML (with Presentation MathML for the formulae) is possible, though. To this end, the presentation of each symbol can be defined.
Applications
Today, OMDoc is used in the following settings:
E-learning: Creation of customized textbooks.
Data exchange: OMDoc import and export modules are available for many automated theorem provers and computer algebra systems. OMDoc is intended to be used for communication between mathematical web services.
Document preparation: Documents about mathematics can be prepared in OMDoc and later exported to a presentation-oriented format like LaTeX or XHTML+MathML.
History
OMDoc has been developed by the German mathematician and computer scientist Michael Kohlhase since 1998. So far, there have been the following releases:
1.0 (November 2000)
1.1 (December 2001)
1.2 (July 2006)
Future developments
It is planned to create the infrastructure for a “semantic web for technology and science” based on OMDoc. To this end, OMDoc is being extended towards sciences other than mathematics. The first result is PhysML, an OMDoc variant extended towards physics.
For a better integration with other Semantic Web applications, an OWL ontology of OMDoc is under development, as well as an export facility to RDF.
See also
Mathematical knowledge management
References
Michael Kohlhase (2006): An Open Markup Format for Mathematical Documents (Version 1.2). Lecture Notes in Artificial Intelligence, no. 4180. Springer Verlag, Heidelberg.
External links
Wiki for OMDoc and related projects
Markup languages
Mathematical markup languages
Semantic Web
XML-based standards | OMDoc | Mathematics,Technology | 608 |
7,995,881 | https://en.wikipedia.org/wiki/Tarari%2C%20Inc. | Tarari is a company that spun out of Intel in 2002. It has created a range of re-programmable silicon based on the Xilinx Virtex-4 FPGA (Field Programmable Gate Array) and ASICs that offload and accelerate computationally intensive algorithms such as XML parsing, scanning for computer viruses, email spam and intruders in intrusion-prevention systems and unified threat management appliances. As well as inspecting content, its Content Processors can also transform it; they are used for XML transformation (XSLT), compression and encryption, as well as HD video encoding for the WMV and VC-1 formats.
In June 2006, Tarari announced that its next-generation chips would support the AMD Torrenza initiative and incorporate HyperTransport interfaces. HyperTransport-based systems offer dramatically reduced latency and increased throughput, because a HyperTransport-connected system gives a co-processor direct access to the system's HyperTransport bus, and thus as much access to system resources as the conventional CPUs have.
PCI-Express and HyperTransport buses both allow systems to communicate at 20-25 Gbit/s, versus 4-8 Gbit/s for Peripheral Component Interconnect (PCI/PCI-X) based systems. Just as the latest desktop machines use PCI-Express for their high-performance graphics cards, servers will now be able to use these high-speed interconnects to add other hardware-based co-processors.
PCI-Express and HyperTransport buses both operate serially using multiple lanes; PCI-Express supports 1, 2, 4 or 8 lane connectivity at 2.5 Gbit/s per lane. Whereas PCI/PCI-X works using parallel transfers and is most efficient in the 2k-4k byte per transfer range, PCI-Express and HyperTransport are very efficient at transfers as small as 64 bytes. Therefore, applications such as intrusion-prevention systems (IPS) and VoIP security, which have to examine a large volume of small packets, will benefit from such high-speed and highly efficient transfer capabilities.
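The headline figures above follow directly from the per-lane rate; the arithmetic below is a simple illustration that ignores protocol and encoding overhead.

```latex
8\ \text{lanes} \times 2.5\ \text{Gbit/s per lane} = 20\ \text{Gbit/s (raw, per direction)}
```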
On 5 September 2007, LSI Corporation announced a definitive agreement to acquire Tarari.
External links
Tarari section of LSI's website
LSI Announces Agreement to Acquire Tarari, Inc. 2007-09-05
Fabless semiconductor companies
Companies established in 2002
Intel
Defunct semiconductor companies of the United States
Defunct computer companies of the United States
Defunct computer hardware companies
pl:Torrenza | Tarari, Inc. | Technology | 521 |
7,449,035 | https://en.wikipedia.org/wiki/Vibrator%20%28mechanical%29 | A vibrator is a mechanical device to generate vibrations. The vibration is often generated by an electric motor with an unbalanced mass on its driveshaft.
There are many different types of vibrator. Typically, they are components of larger products such as smartphones, pagers, or video game controllers with a "rumble" feature.
Vibrators as components
When smartphones and pagers vibrate, the vibrating alert is produced by a small component that is built into the phone or pager. Many older, non-electronic buzzers and doorbells contain a component that vibrates for the purpose of producing a sound. Tattoo machines and some types of electric engraving tools contain a mechanism that vibrates a needle or cutting tool. Aircraft stick shakers use a vibrating mechanism attached to the pilots' control yokes to provide a tactile warning of an impending aerodynamic stall. Eccentric rotating mass (ERM) vibrators work by rotating a deliberately unbalanced weight, so the weight is eccentric. Linear resonant actuators (LRAs) work by repeatedly moving a weight from one side of the actuator to another, using a coil acting as an electromagnet. Coin vibration motors have the shape of a coin and are often of the ERM type.
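The force an ERM vibrator produces is the centripetal force of its offset mass. The worked figures below use assumed values typical of a small phone motor (0.5 g mass, 1 mm offset, roughly 10,000 rpm); they are illustrative, not a datasheet.

```latex
% Centripetal force of an eccentric mass m at offset e rotating at angular speed \omega:
F = m\, e\, \omega^2
% e.g. m = 0.5\ \mathrm{g},\ e = 1\ \mathrm{mm},\ \omega = 2\pi \times 167\ \mathrm{Hz} \approx 1050\ \mathrm{rad/s}
F \approx (5\times 10^{-4})(1\times 10^{-3})(1050)^2 \approx 0.55\ \mathrm{N}
```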
Industrial vibrators
Vibrators are used in many different industrial applications both as components and as individual pieces of equipment.
Bowl feeders, vibratory feeders and vibrating hoppers are used extensively in the food, pharmaceutical, and chemical industries to move and position bulk material or small component parts. The application of vibration working with the force of gravity can often move materials through a process more effectively than other methods. Vibration is often used to position small components so that they can be gripped mechanically by automated equipment as required for assembly etc.
Vibrating screens are used to separate bulk materials in a mixture of different sized particles. For example, sand, gravel, river rock and crushed rock, and other aggregates are often separated by size using vibrating screens.
Vibrating compactors are used for soil compaction especially in foundations for roads, railways, and buildings.
Concrete vibrators consolidate freshly poured concrete so that trapped air and excess water are released and the concrete settles firmly in place in the formwork. Improper consolidation of concrete can cause product defects, compromise the concrete strength, and produce surface blemishes such as bug holes and honeycombing. An internal concrete vibrator is a steel cylinder about the size of the handle of a baseball bat, with a hose or electrical cord attached to one end. The vibrator head is immersed in the wet concrete.
External concrete vibrators attach, via a bracket or clamp system, to the concrete forms. There are a wide variety of external concrete vibrators available and some vibrator manufacturers have bracket or clamp systems designed to fit the major brands of concrete forms. External concrete vibrators are available in hydraulic, pneumatic or electric power.
Vibrating tables or shake tables are sometimes used to test products to determine or demonstrate their ability to withstand vibration. Testing of this type is commonly done in the automotive, aerospace, and defense industries. These machines are capable of producing three different types of vibration profile: sine sweep, random vibration, and synthesized shock. In all three of these applications, the part under test will typically be instrumented with one or more accelerometers to measure component response to the vibration input.

A sine sweep vibration profile typically starts vibrating at low frequency and increases in frequency at a set rate (measured in hertz); the vibratory amplitude, measured in g, may increase or decrease as well. A sine sweep will find resonant frequencies in the part. A random vibration profile will excite different frequencies along a spectrum at different times, and significant calculation goes into making sure that all frequencies get excited to within an acceptable tolerance band. A random vibration test suite may range anywhere from 30 seconds up to several hours; it is intended to synthesize the effect of, for example, a car driving over rough terrain or a rocket taking off. A synthesized shock pulse is a short-duration, high-level vibration calculated as a sum of many half-sine waves covering a range of frequencies. It is intended to simulate the effects of an impact or explosion, and a shock pulse test typically lasts less than a second.

Vibrating tables can also be used in the packaging process in material handling industries to shake or settle a container so it can hold more product.
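A logarithmic sine sweep of the kind described above can be generated in a few lines; the sketch below is a generic signal-generation example (the sample rate, frequency range and duration are arbitrary assumptions), not the procedure of any particular test standard.

```python
import math

fs = 8000.0             # sample rate in Hz (assumed)
f0, f1 = 10.0, 2000.0   # sweep from 10 Hz to 2 kHz (assumed)
T = 5.0                 # sweep duration in seconds (assumed)

k = math.log(f1 / f0) / T   # exponential rate chosen so that f(T) = f1
samples = []
for n in range(int(fs * T)):
    t = n / fs
    # Phase is the integral of the instantaneous frequency f(t) = f0 * exp(k * t)
    phase = 2.0 * math.pi * f0 * (math.exp(k * t) - 1.0) / k
    samples.append(math.sin(phase))

print(len(samples), "samples; final instantaneous frequency:", round(f0 * math.exp(k * T)), "Hz")
```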
References
Electric motors
Mechanical vibrations | Vibrator (mechanical) | Physics,Technology,Engineering | 918 |
35,501,694 | https://en.wikipedia.org/wiki/Would%20It%20Kill%20You%20to%20Stop%20Doing%20That%3F | Would It Kill You To Stop Doing That? is a 2012 non-fiction book by the American humorist Henry Alford that details manners from around the world.
After becoming interested in a quotation by Edmund Burke about the importance of manners, Alford traveled around the world researching manners. He also interviewed people such as etiquette authority Judith Martin.
Reception
Jincy Willet of The New York Times said that the book "amuses as much as it informs". Sarah Halzack of The Washington Post called the book "a bit haphazard" although "his self-effacing tone and dry sense of humor help to unify the pieces". Philip Marchand of National Post said that "sometimes his humour is a bit strained" although his advice can be "very practical".
References
2012 non-fiction books
Comedy books
Etiquette
Twelve (publisher) books | Would It Kill You to Stop Doing That? | Biology | 178 |
30,254,322 | https://en.wikipedia.org/wiki/Ebbw%20Vale%20Steelworks | Ebbw Vale Steelworks was an integrated steel mill located in Ebbw Vale, South Wales. Developed from 1790, by the late 1930s it had become the largest steel mill in Europe. It was nationalised after World War II. As the steel industry changed to bulk handling, iron and steel making was ceased in the 1970s, and the site was redeveloped as a specialised tinplate works. It was closed by Corus in 2002, but is being redeveloped in a joint partnership between Blaenau Gwent Council and the Welsh Government.
Development
By the mid to late 1700s, the steep-sided wooded valley of the Ebbw Fawr river was home to a population of around 120, who worked the valley as farmers.
In 1789, Walter Watkins was the owner of a forge in Glangrwney, near Crickhowell, which lacked an adequate supply of pig iron from the Clydach Ironworks. In agreement with two business partners, his son-in-law Charles Cracroft and iron master Jeremiah Homfray of the Penydarren Ironworks at Merthyr Tydfil, Watkins leased land at Pen y Cae farm in the parish of Aberystruth from John Miles. Situated on the northern tip of the South Wales coalfield and located next to the River Ebbw, they had easy access to the basic iron making materials: coal and iron ore obtained by 'patch' working and local drifts and levels, plus water and power from the river. Limestone was to be transported by mule train from Llanelly Quarries, about four miles away.
The partnership erected a blast furnace and casting shop against the hillside, which produced 25 tons of pig iron per week. Although locals called the works "Pen y cae" after the farming hamlet, the partners adopted the river's name to form the Ebbw Vale Furnace Company Ltd (EVC), hence naming both the works and the developing township.
In 1793 Homfray bought out his partners with help from the Bristol-based Quaker family the Harfords, who in 1796 bought out Homfray himself to take complete ownership.
Early 19th century
The plant was developed as a specialist forge. Needing additional supplies of iron, the company, now owned by the Harfords family trust, bought and integrated the Sirhowy Ironworks and colliery. The company then built four new cupola furnaces and added steam engine power.
This allowed the company to produce the world's first rolled-steel rail tracks in 1857; orders later followed from the pioneering Liverpool & Manchester and Stockton & Darlington railways.
Transport
The new railway line contracts required additional integration across the production facilities. By the end of the 18th century, both the company and the Tredegar Iron Company needed to transport raw materials to, and products from, the various ironworks in the upper Ebbw Valley and onward to Newport Docks.
Developments included:
Rassa Railroad: tramway built to connect the Sirhowy Ironworks to the Beaufort Ironworks in Ebbw Vale, and connecting them both to several limestone quarries at Trevil.
Llanhiledd Tramroad: from Crumlin (low level) north to Ebbw Vale.
Sirhowy Tramroad: Newport to Crumlin (low level).
By 1805, a stretch of tramline had been laid to transport coal and iron ore to Newport Docks, laid jointly by Tredegar Iron Company and the Monmouthshire Canal Company. Pulled by teams of horses, in 1829 Chief Engineer Thomas Ellis was authorised to purchase a steam locomotive from the Stephenson Company. Built at Tredegar Works, it made its maiden trip on 17 December 1829.
On grouping in 1923, all of these railway lines became part of the Great Western Railway's Ebbw Vale Line, now operated as a passenger-only service by Transport for Wales.
New owners and expansion
After some commercial failures in the United States, in 1844 the Harford family trust sold the works to partners Abraham Darby, Henry Dickenson, Joseph Robinson and J Tothill of Coalbrookdale, with partner Thomas Brown designated managing director. This change started a period of expansion via acquisition, including:
Three blast furnaces of the Victoria Ironworks from Lord Llanover, originally built for the Monmouthshire Iron & Coal Company
Abersychan Ironworks, consisting of six blast furnaces
Production facilities in Pontypool consisting of four furnaces, a forge, tinplate works and coal collieries
Iron ore fields in the Brendon Hills, Somerset, Bilbao, Spain and the Forest of Dean, Gloucestershire
In 1850, the company's chemist George Parry achieved a great economy in blast furnace practice, becoming the first to adopt the cup and cone successfully on blast furnaces. He then conducted experiments in converting iron into steel, but the company was eventually forced to adopt the patented process of Henry Bessemer. By 1863, the company was producing 100,000 tons of rail and merchant bars per annum, from 19 blast furnaces, 192 puddling furnaces, and 99 heating furnaces located at Ebbw Vale, Sirhowy, Victoria, Abersychan, Pontypool and Abercarn. It also had six wharfs at Newport Docks, the hematite mine in the Forest of Dean, and spathic iron ore mines in the Brendon Hills and Spain.
Ebbw Vale Steel, Iron and Coal Company
In June 1868, Darby converted the partnership into a limited company, the Ebbw Vale Steel, Iron and Coal Company (EVSICC), headquartered in Manchester. The capital injection allowed investment in the most powerful blowing engine in the world to serve four of the Ebbw Vale furnaces, new rolling mills and a Bessemer converter shop which produced the first steel ingots, including high carbon spiegel-eisen (mirror iron).
1930s redevelopment
By 1929, a lack of investment had led to a low number of new orders. The oncoming economic depression led to the works being shut down; this resulted in huge redundancies, with minimal maintenance applied to the residual infrastructure. By 1934, unemployment in Ebbw Vale stood at 54% out of a population of 31,000.
In 1935, the UK Government forced the shareholders of EVSICC to sell the site to tin plate manufacturer Richard Beaumont Thomas. He chose to import the UK's first continuous hot rolling mill from the United States, and totally redeveloped the site into a modern steelworks using this technology. Due to the quality of steel produced by the mill, Thomas effectively started the redevelopment of the entire UK steel industry, with the mill producing hot rolled coils instead of bars, billets and plates.
Two and a half years later, production at the site restarted. This drew former steelworkers back to the valley, and by 1948 the plant was the largest in Europe, producing 600,000 tons of rolled steel annually. A lack of manpower drew in migrant workers from all over devastated post-war Europe and the British Empire.
World War II
Most occupations inside the steel works were considered reserved trades, so employees were able to opt out of the compulsory call-up for World War II military service. However, a number of men did decide to enlist, which resulted in some trades being worked throughout the war by women for the first time. The plant drew specific attention from German Luftwaffe bombers on more than one occasion, but the deep valley proved difficult to bomb and the plant survived.
Richard Thomas & Baldwins
In 1948, two of the country's largest steel companies – Richard Thomas, which had plants in Ebbw Vale, Gloucester and the Forest of Dean, and Baldwins, with plants in Stourport and South Wales – agreed to a merger. The new company, Richard Thomas and Baldwins (RTB), became the UK's largest steel maker by volume.
In 1948, RTB introduced the first continuous tinning line at its Ebbw Vale tinplate works.
In 1951, RTB was nationalised and placed under the Iron and Steel Corporation of Great Britain. Under Conservative rule in 1953, it passed to the Iron and Steel Holding and Realisation Agency in readiness for privatisation. However, its size – it was the UK's largest steel company – inhibited its sale. It was still in public ownership when the industry was re-nationalised under British Steel Corporation in 1967.
British Steel
The steelworks was nationalised as part of British Steel in 1967, becoming part of the South Wales group alongside Llanwern and Port Talbot Steelworks. By this time, 14,500 people were employed in the works in and around Ebbw Vale.
The original choice for the site was due to its co-location with both iron ore and coal. However, by the 1970s the industry had changed to one of sheer volume, with supplies drawn from vast mines and pits. If plants were remote from these, they required access to bulk material handling transport facilities, such as deep water ports. Ebbw Vale was neither located near such vast pits, nor bulk shipping facilities. When British Steel announced its 10-year integrated production plan for South Wales, it therefore proposed to stop iron and steel-making operations at Ebbw Vale, and to redevelop the site as a specialist tinplate manufacturer.
The closure of the coke ovens in March 1972 allowed work to commence on removing the 19th century "drill ground" tip, which contained 500,000 tons of waste material. Once the waste removal was complete, the site was back-filled, allowing the cold rolling mill to be extended. This was now able to supply sufficient capacity of rolled steel to a new tinplate complex, the development of which started in 1974 with the commissioning of a newly built hydrochloric acid pickle line.
With staff redeployed to the developing tinplate plant, on 17 July 1975 both the converter shop and all remaining blast furnaces closed, having produced 16,916,523 tons of iron. The continuous hot strip mill ceased operation on 29 September 1977, having rolled 23 million tons of steel since it was commissioned in 1937. Having slabbed 24 million tons of steel, the final cast was made in the open hearth department in May 1978.
Demolition and clearance of these plants allowed the second phase of the tinplate works to begin. This included the construction of an effluent plant, a single stack annealing line, two electrolytic tinning lines (ETL), a cleaning line, and a Hallden Shears plant. Having cost £57 million, the plant was officially opened in June 1978 by Derek Hornby, the President of the Food Manufacturing Federation. It was envisaged in the original plan that a third phase would be constructed to double production again, but the government did not authorise these plans.
Ebbw Vale Garden Festival
By 1981, demolition and clearance of the former iron and steel plants was completed, and the southern boundary of the residual tinplate works was moved inwards. It was on this part of the site that Blaenau Gwent Borough Council approved a bid for the 1992 National Garden Festival, awarded to the council and site in November 1988. It was billed as Garden Festival Wales and attracted over 2 million visitors to South Wales.
Closure
On 6 October 1999, a merger was announced between the Dutch steel company Koninklijke Hoogovens and British Steel to form a new company, Corus.
Although investment had continued at the Ebbw Vale site over the preceding two decades, the No.2 ETL (Electrolytic Tinning Line) was shut down in 1995 and, rather than being redeveloped as planned, had become a source of spares for the No.1 ETL. Steel production capacity exceeded European market demand, hence the need for the merger, which would result in the closure of capacity across the newly integrated company. With much tinplate consumption moving to the newly expanding Asian market, on 1 February 2001 Corus announced the complete closure of the Ebbw Vale site and the resultant loss of 780 jobs.
The plant began a shut-down procedure, with many of the lines within the plant packaged up and transported to other sites in the Corus company (Trostre near Llanelli, and IJmuiden in the Netherlands), while other plants were sold as a package to an Indian-based company.
In July 2002, the Ebbw Vale steel works site closed; a skeleton staff deconstructed the remaining sold plants and handled shipping of residual finished product until December 2002.
Redevelopment
In 2002, Scottish site clearance and demolition contractors Morton assessed the site's land needs for future development. Demolition commenced in August, and the land was remediated over a period of approximately five years.
In 2005, Corus sold the site to Blaenau Gwent Council.
In 2007, a £350 million regeneration project was jointly announced by the council and the Welsh Government. Outline planning permission was granted for a mixed use redevelopment, including housing, retail, offices, wetlands and a learning campus.
The council proposed the development of a £15 million urban village scheme close to the town, which would house a new railway station and elevated access to the main town. The first part of the scheme, Ysbyty Aneurin Bevan, opened in 2010; it was Wales' first all-individual-bed hospital, named after National Health Service founder Aneurin Bevan.
Steelworks General Offices
In October 2011, the Grade II listed former Steelworks General Offices were reopened after a £12 million refit. Originally constructed in 1915–1916, they were redeveloped as a visitor centre and archive. The original building now houses the Ebbw Vale Steelworks Archive Trust, a voluntary organisation which holds an historical record of steel making in Ebbw Vale, and a "4D" immersive cinema. A newly built wing houses the Gwent Archives, which were moved from Cwmbran, providing shelving to house thousands of documents which date back to the 12th century. HM Queen Elizabeth II officially opened the General Offices as part of her Diamond Jubilee Tour on 3 May 2012, accompanied by the Duke of Edinburgh.
References
External links
TheWorksEbbwVale.co.uk
History at Blaenau Gwent Council
Ebbw Vale steelsworks @ Graces Guide
Ironworks and steelworks in Wales
Buildings and structures in Blaenau Gwent
History of Monmouthshire
1789 establishments in Wales
2002 disestablishments in Wales
Steelworks
Demolished buildings and structures in Wales
Grade II listed buildings in Blaenau Gwent
Buildings and structures demolished in 2002
Industrial buildings completed in 1790 | Ebbw Vale Steelworks | Chemistry | 2,990 |
33,193,835 | https://en.wikipedia.org/wiki/Westwallbunker%20%28Pachten%29 | Westwallbunker is a bunker and museum in Saarland, Germany, that was part of the Siegfried Line. The bunker was built in 1939.
See also
Regelbau
List of surviving elements of the Siegfried Line
External links
www.bunker20.de
Museums in Saarland
Siegfried Line
Buildings and structures in Saarlouis (district) | Westwallbunker (Pachten) | Engineering | 73 |
58,911,580 | https://en.wikipedia.org/wiki/Erika%20Mar%C3%ADn-Spiotta | Erika Marín-Spiotta is a biogeochemist and ecosystem ecologist. She is currently Professor of Geography at the University of Wisconsin-Madison. She is best-known for her research of the terrestrial carbon cycle and is an advocate for underrepresented groups in the sciences, specifically women.
Early life and education
Marín-Spiotta grew up in Spain. She became interested in her area of study while spending time outdoors with her family and visiting archeological sites. In 1997, she graduated from Stanford University with a B.S. in Biology and a minor in Political Science. Nine years later, Marín-Spiotta completed her Ph.D. in Ecosystem Science at the University of California, Berkeley. Her thesis, “Controls on above and belowground carbon storage during tropical reforestation,” contributed to her field as it discussed how changes in land use affect carbon sequestration in soil and how the establishment of secondary forests can contribute to biodiversity conservation. During her time at UC-Berkeley, she was a Graduate Research Environmental Fellow for the Department of Energy.
Career
After working as an NSF Postdoctoral Research Fellow in the Department of Geography at the University of California, Santa Barbara, Marín-Spiotta joined the faculty at UW-Madison in 2009 as an Assistant Professor in Geography. In the following years, she also became an affiliate of the Latin American, Caribbean, and Iberian Studies Program and Center for Culture, History, and the Environment, the Nelson Institute for Environmental Studies, and the Departments of Soil Science and Forest and Wildlife Ecology. Marín-Spiotta was promoted to Associate Professor in 2015 and to Professor in 2019 in the Department of Geography. In 2019, Marín-Spiotta was awarded the Presidential Early Career Award for Scientists & Engineers (PECASE).
Research
Marín-Spiotta focuses on the ways in which both human-caused land use and climate changes affect biodiversity, biomass, and the biogeochemistry of the atmosphere, water, and soil, in particular as related to terrestrial carbon cycling. Her research offers insight into a variety of fields and the intersections between them, including soil science, biogeochemistry, ecosystem ecology, and geography. Specifically, much of her research looks at these impacts in tropical ecosystems and the strength and ability of forests in different stages of succession to store and sequester carbon. More recently, Marín-Spiotta has conducted research about paleosols and how the carbon stored in these soils could be a potential driver of climate change in the future. Following this research, Marín-Spiotta and her colleague were awarded a National Science Foundation grant to continue investigating the role of deep soil carbon in the carbon cycle. Currently, Marín-Spiotta’s lab researches a variety of projects across a range of questions focused on how global change is altering ecosystems and critical global elemental cycles.
Selected publications
Marín-Spiotta, E. 2018. Harassment should count as scientific misconduct. Nature 557: 141.
Berhe, A.A., R. Barnes, J. Six and E. Marín-Spiotta. 2018. Role of erosional mass movement on the biogeochemical cycling of essential elements: carbon, nitrogen, and phosphorus. Annual Review of Earth and Planetary Sciences 46: 521-548.
Powers, J.S. and E. Marín-Spiotta. 2017. Ecosystem processes and biogeochemical cycles during secondary tropical forest succession. Annual Review of Ecology, Evolution and Systematics 48:497-519.
Marín-Spiotta, E., N.T. Chaopricha, A. F. Plante, A.F. Diefendorf, C.W. Müller, S. Grandy and J.A. Mason. (2014). "Long-term stabilization of deep soil carbon by fire and burial during early Holocene climate change." Nature Geoscience Vol. 7.
Marín-Spiotta, E., K.E. Gruley, J. Crawford, E.A. Atkinson, J.R. Miesel, S. Greene, C. Cardona-Correa and R.G.M. Spencer. (2014). "Paradigm shifts in soil organic matter research affect aquatic carbon turnover interpretations: Transcending disciplinary and ecosystem boundaries." Biogeochemistry Vol. 117.
Marín-Spiotta, E. and S. Sharma. (2013). "Carbon storage in successional and plantation forest soils: a tropical analysis." Global Ecology and Biogeography Vol. 22.
Marín-Spiotta, E., W.L. Silver, C.W. Swanston, and R. Ostertag. (2009). "Soil organic matter dynamics during 80 years of reforestation of tropical pastures." Global Change Biology Vol. 15.
Marín-Spiotta, E., C.W. Swanston, M.S. Torn, W.L. Silver, and S.D. Burton. (2008). "Chemical and mineral control of soil carbon turnover in abandoned tropical pastures." Geoderma, Vol. 143.
Marín-Spiotta, E., R. Ostertag, and W.L. Silver. (2007). "Long-term patterns in tropical reforestation: plant community composition and aboveground biomass accumulation." Ecological Applications Vol. 17.
Awards and leadership
Marín-Spiotta has received many awards for her contributions in the sciences, mentorship, and inclusion. She was the Secretary of the Biogeosciences section at the American Geophysical Union in 2015 and 2016 and has held various other leadership and volunteer positions for the AGU.
Marín-Spiotta is an advocate for underrepresented groups in the sciences and is committed to increasing awareness about sexual harassment in the field. She is a board member of the Earth Science Women's Network (ESWN) which works to mentor and support women in the geosciences. The National Science Foundation ADVANCE Program awarded her and a team a $1.1 million grant to investigate these issues and establish ways to further the advancement of women in STEM, specifically focusing on how bystander intervention can lead to positive results.
Presidential Early Career Award for Scientists and Engineers (2019)
National Science Foundation’s CAREER Award (2014)
Sulzman Award for Excellence in Education and Mentoring from the American Geophysical Union (2016)
President’s Award from the Association for Women Geoscientists (2016)
Vilas Associate Award from UW-Madison.
Ambassador Award, American Geophysical Union (2020)
References
Spanish ecologists
Biogeochemists
Stanford University alumni
University of California, Berkeley alumni
University of Wisconsin–Madison faculty
Spanish expatriates in the United States
Year of birth missing (living people)
Living people | Erika Marín-Spiotta | Chemistry | 1,395 |
41,740,379 | https://en.wikipedia.org/wiki/Paolo%20Giubellino | Paolo Giubellino (born 9 November 1960) is an experimental particle physicist working on High-Energy Nuclear Collisions. Currently he is the joint Scientific Managing Director of the Facility for Antiproton and Ion Research (FAIR) and the GSI Helmholtz Centre for Heavy Ion Research (GSI) and Professor at the Institute of Nuclear Physics of the Technische Universität Darmstadt.
Until 31 December 2016, Giubellino was Spokesperson of the ALICE: A Large Ion Collider Experiment, an international collaboration of more than 1300 people from 163 scientific institutions from 40 countries.
He has held several positions of responsibility in the ALICE Collaboration since its creation in the early nineties, and was eventually elected Deputy Spokesperson from 2004 to 2010 and Spokesperson from 1 January 2011. In 2011, at the international symposium on subnuclear physics held in Vatican City, he gave a talk titled The Little Bang in the Laboratory: Heavy Ions @ LHC with ALICE. On 17 July 2013, he was elected for a second term as Spokesperson of ALICE.
Giubellino has dedicated most of his scientific life to the physics of high-energy heavy-ion collisions, in which the quark–gluon plasma, a state of ultra-dense and hot matter like that which prevailed in the first microseconds of the life of our universe, is recreated. Moreover, he has participated in numerous experimental projects, first at the CERN Super Proton Synchrotron and, since the beginning of the program, at the Large Hadron Collider.
Awards
2000 Honorary title of "Profesor Invitado" of the Istituto de Ciencias y Tecnologias of the University of La Havana, Cuba
2010 Medal of the Division of Particles and Fields from the Mexican Physical Society
2012 Visitante Distinguido of the City of Puebla, Mexico
2012 "commendatore" for scientific merits from Italian President Giorgio Napolitano
2013 Honorable Mention of the Ministry of Education, Science, Research and Sport of the Slovak Republic
2013 Enrico Fermi Prize from the Italian Physical Society.
2014 Lise Meitner Prize from the European Physical Society jointly with Johanna Stachel (Physikalisches Institut der Universität Heidelberg, Germany), Peter Braun-Munzinger (GSI, Germany) and Jürgen Schukraft (CERN).
2016 Member of the Academia Europaea
2016 Doctor Honoris Causa, Suranaree University of Technology, Thailand
2016 Doctor Honoris Causa, National Academy of Sciences of Ukraine, Kyiv, Ukraine
2019 Corresponding Member, Accademia delle Scienze, Torino
Early life and education
Paolo Giubellino, Italian, born in 1960, graduated in physics at the University of Torino in 1983 with 110/110 cum laude and special honorable mention, and continued his studies as a Fulbright fellow at the University of California, Santa Cruz. In 2000 he was awarded the title of Doctor in Physics and Mathematics (Habilitation) by the Dubna Academic Council (Russia). He is married and has one son.
Research career
Paolo Giubellino has dedicated most of his scientific life to the Physics of High-Energy Heavy-Ion collisions, first in HELIOS, then in NA50, in the ALICE experiment and finally at GSI and FAIR.
He joined the Torino branch of the Italian National Institute for Nuclear Physics (INFN) in 1985. In 2006 he was promoted to "research director", the highest in the three-level INFN career. Giubellino has been responsible for several scientific programs within INFN and for NATO, INTAS and EU grants. From 1990 to 1996 he was coordinator of the Group II (one of the five sections in which INFN research is organized) of the Torino branch of the INFN. From 1995 to 2000 and since 2007 he was responsible for the involvement of the Torino group in the ALICE Inner Tracking System project.
Giubellino has participated in CERN heavy-ion programme from the early days of his career. He was in charge of the design, construction and operation of the SCI-PAD detector for the NA34/1 experiment as well as for the NA34/2 silicon pad detectors for the Ring Counters.
In 1988 he joined the NA50 collaboration and worked in one of the fixed-target experiments. He was responsible in NA50 for the design, construction and commissioning of the silicon multiplicity detectors (MD).
During his entire career, Giubellino has participated in several R&D projects directed to the development of silicon detectors and radiation tolerant electronics. He is also one of the founding fathers of the microelectronics group at INFN Torino
Giubellino has been involved in ALICE from the very first feasibility studies, and has later carried a number of responsibilities in the experiment, including Project Leader for the Inner Tracking System, Chair of Conference Committee, Upgrade Coordinator and, for six years, Deputy Spokesperson (July 2000 – September 2002 and August 2006 – December 2010). He was elected Spokesperson of the ALICE Collaboration for the first time in March 2010 and re-elected in July 2013. During this period he led the ALICE Collaboration to the preparation of an Upgrade proposal for experiment, spanning the years 2018 to 2025. The upgrade project, which will involve 163 Institutions from 40 countries, has been approved by the Large Hadron Collider Committee in September 2012.
On 1 January 2017, Paolo Giubellino became the first joint scientific managing director of Facility for Antiproton and Ion Research in Europe GmbH (FAIR GmbH) and GSI Helmholtzzentrum für Schwerionenforschung GmbH in Darmstadt. In addition, he has taken over the position of spokesperson of the management of FAIR and GSI. In September 2016, the FAIR Council and the GSI Supervisory Board announced their decision to appoint Giubellino. Since 1 January 2017 Giubellino is Professor at the Institute of Nuclear Physics of the Technische Universität Darmstadt.
Science Management and Review Committees
Paolo Giubellino serves in many scientific committees and panels in France, Germany, Russia, the United States, Mexico, Spain, the Czech Republic, the Republic of Korea and South Africa. He has been active in International collaboration, and has promoted and had key roles in several programs funded by the European Union, NATO and numerous bilateral agreements.
Member since January 2017 of the EMMI Steering Committee of the Extreme Matter Institute (EMMI), Darmstadt, Germany
Chair, 2003–2011, of the scrutiny group charged with assessing and monitoring the running and maintenance expenses for the CDF International Finance Committee at Fermilab, United States.
Member, 2003 – 2010, of the Conseil Scientifique of the SUBATECH Laboratory, Nantes, France.
Member in 2006 of the 4-yearly CNRS/IN2P3 Evaluation Committee of the SUBATECH Laboratory, Nantes, France.
For the Agence d'Evaluation de la Recherche (AERES) of the French Government: member in 2008 of the Evaluation Committee of the IPN Laboratory in Orsay, Member in 2008 of the Evaluation Committee of the LPSC Laboratory in Grenoble, President in 2010 of the Evaluation Committee of the Subatech Laboratory in Nantes.
Member, 2010 – end of 2015, of the Scientific Council of the IN2P3 (National Institute of Nuclear and Particle Physics) of France.
For the GSI Laboratory in Germany (largest German Nuclear Physics Laboratory): Member of the General Physics Advisory Committee (G-PAC) April 2007– March 2010, Chair of the G-PAC from March 2010 until end of 2016 and as such member of the Laboratory Scientific Council.
Member from Jan 2008 to Dec 2010 of the SPS and PS experiments Committee (SPSC) at CERN.
Member from August 2009 until September 2016 of the EMMI Program Advisory Committee of the Extreme Matter Institute (EMMI), Darmstadt, Germany
Member of the "Phases of Nuclear Matter" working group for the 2004 NUPECC (Nuclear Physics European Collaboration Committee) Long Range Plan.
Convener of the "Phases of nuclear matter" working group for the 2010 NUPECC Long Range Plan.
Member since August 2000 of the Instrumentation Panel of the ICFA, and therefore member of the International Advisory Committee of the ICFA instrumentation schools.
Chair of the Scientific Advisory Committee of the HELEN project (2005/2009), the largest among the ALFA programs of scientific cooperation between Europe and Latin America.
Coordinator Work package 1 of the EPLANET project of scientific cooperation between Europe and Latin America (about 4 M euros, four-year EU program), member of the Scientific Advisory Committee of EPLANET.
Paolo Giubellino is also member of the International Advisory Committee of numerous International Conferences, including the International Conference on High-Energy Physics, ICHEP, and all major conferences in High-Energy Nuclear Physics (Quark Matter, Hard Probes, Strange Quark Matter, ICPAQGP). He has served as referee for several major international Physics Journals, among which Physical Review Letters, Physical Review, Nuclear Physics, Physics Letters and Nuclear Instruments and Methods.
Finally he has been referee for the selection and evaluation of projects for, among others, INTAS, Several European Programs, the Italian Ministry of Education and Research, The Russian Ministry of Education, the Government of the Czech Republic, the Ministry of Economy and Innovation of Spain, the National Research Foundation of the Republic of South Korea and the National Research Foundation of South Africa.
Invited lectures and outreach
Paolo Giubellino is frequently invited to give public lectures on experimental particle physics at the LHC. He has delivered about 50 talks at international conferences and many invited seminars and colloquia about the results of his scientific work, including the closing plenary talk at the 2002 Quark Matter Conference and the plenary talk dedicated to Heavy Ion Physics at the 25th International Nuclear Physics Conference (INPC 2013) in June 2013, and chaired sessions in numerous international conferences. In May 2015, he delivered a talk about the work done at ALICE at the first Italian Conference of Physics Students.
Giubellino has played a significant role in developing collaboration between Europe and Latin American institutes. His support led to Mexico's involvement in ALICE, particularly in the successful construction of the V0 detector and the Cosmic Ray detector. As a recognition of these efforts he has been the first European to be awarded the medal of the Mexican Physical Society.
Giubellino has also taught short courses at various international schools, among which the instrumentation schools of the ICFA and the International school "Enrico Fermi" and for PhD and Master students at Torino University.
References
External links
Scientific publications of Paolo Giubellino on INSPIRE-HEP
21st-century Italian physicists
People associated with CERN
Living people
1960 births
Particle physicists
University of Turin alumni
University of California, Santa Cruz alumni
Academic staff of Technische Universität Darmstadt | Paolo Giubellino | Physics | 2,215 |
8,286,185 | https://en.wikipedia.org/wiki/Superabsorbent%20polymer | A superabsorbent polymer (SAP) (also called slush powder) is a water-absorbing hydrophilic homopolymer or copolymer that can absorb and retain extremely large amounts of a liquid relative to its own mass.
Water-absorbing polymers, which are classified as hydrogels when mixed, absorb aqueous solutions through hydrogen bonding with water molecules. A SAP's ability to absorb water depends on the ionic concentration of the aqueous solution. In deionized and distilled water, a SAP may absorb 300 times its weight (from 30 to 60 times its own volume) and can become up to 99.9% liquid, and when put into a 0.9% saline solution the absorbency drops to approximately 50 times its weight. The presence of multivalent cations in the solution impedes the polymer's ability to bond with water molecules.
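As a worked illustration of these ratios (the 5 g sample mass below is an arbitrary assumption, not a figure from any particular product):

```latex
\[
m_{\text{absorbed}} \approx 300 \times 5\ \text{g} = 1.5\ \text{kg} \;\;(\text{deionized water}),
\qquad
m_{\text{absorbed}} \approx 50 \times 5\ \text{g} = 250\ \text{g} \;\;(0.9\%\ \text{saline}).
\]
```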
The SAP's total absorbency and swelling capacity are controlled by the type and degree of cross-linkers used to make the gel. Low-density cross-linked SAPs generally have a higher absorbent capacity and swell to a larger degree. These types of SAPs also have a softer and stickier gel formation. High cross-link density polymers exhibit lower absorbent capacity and swell, and the gel strength is firmer and can maintain particle shape even under modest pressure.
Superabsorbent polymers are crosslinked in order to avoid dissolution. There are three main classes of SAPs:
1. Cross‐linked polyacrylates and polyacrylamides
2. Cellulose‐ or starch‐acrylonitrile graft copolymers
3. Cross‐linked maleic anhydride copolymers
The largest use of SAPs is found in personal disposable hygiene products, such as baby diapers, adult diapers and sanitary napkins. SAPs are also used for blocking water penetration in underground power or communications cables, in self-healing concrete, as horticultural water-retention agents, for control of spills and aqueous waste fluids, and as artificial snow for motion picture and stage production. The first commercial use was in 1978 for use in feminine napkins in Japan and disposable bed liners for nursing home patients in the United States. Early applications in the US market were with small regional diaper manufacturers as well as Kimberly Clark.
History
Until the 1920s, water-absorbing materials were fiber-based products. Choices were tissue paper, cotton, sponge, and fluff pulp. The water-absorbing capacity of these types of materials is only up to eleven times their weight and most of it is lost under moderate pressure.
In the early 1960s, the United States Department of Agriculture (USDA) was conducting work on materials to improve water conservation in soils. They developed a resin based on the grafting of acrylonitrile polymer onto the backbone of starch molecules (i.e. starch-grafting). The hydrolyzed product of this starch-acrylonitrile co-polymer gave water absorption greater than 400 times its weight. Also, the gel did not release liquid water the way that fiber-based absorbents do.
The polymer came to be known as “Super Slurper”. The USDA gave the technical know-how to several US companies for further development of the basic technology. A wide range of grafting combinations were attempted including work with acrylic acid, acrylamide and polyvinyl alcohol (PVA).
More recent research has demonstrated the ability of natural materials, e.g. polysaccharides and proteins, to exhibit superabsorbent properties in pure water and saline solution (0.9 wt%) within the same range as the synthetic polyacrylates used in current applications. Soy protein/poly(acrylic acid) superabsorbent polymers with good mechanical strength have been prepared. Polyacrylate/polyacrylamide copolymers were originally designed for use in conditions with high electrolyte/mineral content and a need for long-term stability, including numerous wet/dry cycles; uses include agricultural and horticultural applications and, with the added strength of the acrylamide monomer, medical spill control and wire and cable water blocking.
Copolymer chemistry
Superabsorbent polymers are now commonly made from the polymerization of acrylic acid blended with sodium hydroxide in the presence of an initiator to form a poly-acrylic acid sodium salt (sometimes referred to as sodium polyacrylate). This polymer is the most common type of SAP made in the world today. Sodium polyacrylate appears on the U.S. Food and Drug Administration's Food Additive Status List, subject to strict limitations on its use.
Other materials are also used to make a superabsorbent polymer, such as polyacrylamide copolymer, ethylene maleic anhydride copolymer, cross-linked carboxymethylcellulose, polyvinyl alcohol copolymers, cross-linked polyethylene oxide, and starch grafted copolymer of polyacrylonitrile to name a few. The latter is one of the oldest SAP forms created.
Today superabsorbent polymers are made using one of three primary methods: gel polymerization, suspension polymerization or solution polymerization. Each of the processes have their respective advantages but all yield a consistent quality of product.
Gel polymerization
A mixture of acrylic acid, water, cross-linking agents and UV initiator chemicals are blended and placed either on a moving belt or in large tubs. The liquid mixture then goes into a "reactor" which is a long chamber with a series of strong UV lights. The UV radiation drives the polymerization and cross-linking reactions. The resulting "logs" are sticky gels containing 60 to 70% water. The logs are shredded or ground and placed in various types of driers. Additional cross-linking agents may be sprayed on the particles' surface; this "surface cross-linking" increases the product's ability to swell under pressure—a property measured as Absorbency Under Load (AUL) or Absorbency Against Pressure (AAP). The dried polymer particles are then screened for proper particle size distribution and packaging. The gel polymerization (GP) method is currently the most popular method for making the sodium polyacrylate superabsorbent polymers now used in baby diapers and other disposable hygienic articles.
Solution polymerization
Solution polymers offer the absorbency of a granular polymer supplied in solution form. Solutions can be diluted with water prior to application, and can coat or saturate most substrates. After drying at a specific temperature for a specific time, the result is a coated substrate with superabsorbency. For example, this chemistry can be applied directly onto wires and cables, though it is especially optimized for use on components such as rolled goods or sheeted substrates.
Solution-based polymerization is commonly used today for SAP manufacture of co-polymers, particularly those with the toxic acrylamide monomer. This process is efficient and generally has a lower capital cost base. The solution process uses a water-based monomer solution to produce a mass of reactant polymerized gel. The polymerization's own exothermic reaction energy is used to drive much of the process, helping reduce manufacturing cost. The reactant polymer gel is then chopped, dried and ground to its final granule size. Any treatments to enhance performance characteristics of the SAP are usually accomplished after the final granule size is created.
Suspension polymerization
The suspension process is practiced by only a few companies because it requires a higher degree of production control and product engineering during the polymerization step. This process suspends the water-based reactant in a hydrocarbon-based solvent. The net result is that the suspension polymerization creates the primary polymer particle in the reactor rather than mechanically in post-reaction stages. Performance enhancements can also be made during, or just after, the reaction stage.
Aviation
On 13 April 2010, Cathay Pacific flight 780 from Surabaya to Hong Kong suffered a malfunction of both engines while descending into Hong Kong International Airport; the aircraft landed safely with no fatalities. The investigation concluded that superabsorbent polymer (SAP) spheres, a component of a fuel filter monitor installed in a fueling dispenser at Juanda International Airport, caused the main metering valves in the fuel metering unit to seize. It was discovered that salt water had contaminated the fuel supply at Juanda International Airport, which led to damage of the filter monitors and release of SAP spheres into the aircraft's fuel, eventually entering the main fuel supply lines.
See also
Sodium polyacrylate
Potassium polyacrylate
Citations
References
Polymers
History of hygiene | Superabsorbent polymer | Chemistry,Materials_science | 1,828 |
4,276,393 | https://en.wikipedia.org/wiki/Group%20with%20operators | In abstract algebra, a branch of mathematics, a group with operators or Ω-group is an algebraic structure that can be viewed as a group together with a set Ω that operates on the elements of the group in a special way.
Groups with operators were extensively studied by Emmy Noether and her school in the 1920s. She employed the concept in her original formulation of the three Noether isomorphism theorems.
Definition
A group with operators can be defined as a group $G = (G, \cdot)$ together with an action of a set $\Omega$ on $G$:
$\Omega \times G \rightarrow G, \quad (\omega, g) \mapsto g^{\omega},$
that is distributive relative to the group law:
$(g \cdot h)^{\omega} = g^{\omega} \cdot h^{\omega}.$
For each $\omega \in \Omega$, the application $g \mapsto g^{\omega}$ is then an endomorphism of G. From this, it results that a Ω-group can also be viewed as a group G with an indexed family $(u_{\omega})_{\omega \in \Omega}$ of endomorphisms of G.
$\Omega$ is called the operator domain. The associated endomorphisms are called the homotheties of G.
Given two groups G, H with the same operator domain $\Omega$, a homomorphism of groups with operators from $(G, \Omega)$ to $(H, \Omega)$ is a group homomorphism $\phi : G \rightarrow H$ satisfying
$\phi(g^{\omega}) = (\phi(g))^{\omega}$ for all $\omega \in \Omega$ and $g \in G$.
A subgroup S of G is called a stable subgroup, $\Omega$-subgroup or $\Omega$-invariant subgroup if it respects the homotheties, that is
$s^{\omega} \in S$ for all $\omega \in \Omega$ and $s \in S$.
Category-theoretic remarks
In category theory, a group with operators can be defined as an object of a functor category Grp$^{M}$, where $M$ is a monoid (i.e. a category with one object) and Grp denotes the category of groups. This definition is equivalent to the previous one, provided $\Omega$ is a monoid (if not, we may expand it to include the identity and all compositions).
A morphism in this category is a natural transformation between two functors (i.e., two groups with operators sharing same operator domain M ). Again we recover the definition above of a homomorphism of groups with operators (with f the component of the natural transformation).
A group with operators is also a mapping $\Omega \rightarrow \operatorname{End}_{\mathrm{Grp}}(G)$, where $\operatorname{End}_{\mathrm{Grp}}(G)$ is the set of group endomorphisms of G.
Examples
Given any group G, (G, ∅) is trivially a group with operators
Given a module M over a ring R, R acts by scalar multiplication on the underlying abelian group of M, so (M, R) is a group with operators.
As a special case of the above, every vector space over a field K is a group with operators (V, K).
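As a worked check of the distributivity condition in the module example above, written in the notation of the definition (a routine verification, included here for illustration):

```latex
% G = (M, +) is the additive group of an R-module and \Omega = R acts by
% scalar multiplication, m^{r} := r \cdot m.  Distributivity of scalar
% multiplication over addition is exactly the operator condition:
\[
(m + n)^{r} \;=\; r \cdot (m + n) \;=\; r \cdot m + r \cdot n \;=\; m^{r} + n^{r}
\qquad \text{for all } r \in R,\; m, n \in M,
\]
% so each homothety m \mapsto r \cdot m is an endomorphism of (M, +), and every
% R-submodule of M is a stable (\Omega-invariant) subgroup.
```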
Applications
The Jordan–Hölder theorem also holds in the context of groups with operators. The requirement that a group have a composition series is analogous to that of compactness in topology, and can sometimes be too strong a requirement. It is natural to talk about "compactness relative to a set", i.e. talk about composition series where each (normal) subgroup is an operator-subgroup relative to the operator set X, of the group in question.
See also
Group action
Notes
References
Group actions (mathematics)
Universal algebra | Group with operators | Physics,Mathematics | 596 |
23,982,868 | https://en.wikipedia.org/wiki/Simputer%20General%20Public%20License | The Simputer General Public License, or the SGPL is a hardware distribution public copyright license drafted specifically for the purpose of distributing Simputers. As a license it has been loosely modeled on the GPL but in substance it is very different.
The Simputer specifications are released under the terms and conditions of the SGPL. This license permits the user to build a Simputer based upon the specifications and to use the Simputer for non-commercial purposes. Any modifications made to the Simputer specifications may be used exclusively by the person making those modifications with no obligation to release the same to the public domain. However, within 12 months from the date of the first public sale of the Simputer based on these modified specifications, the person who created these modified specifications is bound to disclose the specifications to the Simputer Trust.
The Simputers manufactured under the SGPL are required to be certified by the Simputer Trust before they are allowed to be sold under the Simputer trademark. In order to be so certified they must fulfill the Core Simputer Specifications as disclosed on the simputer website.
All Simputers developed under the specifications must be distributed under the same terms as the SGPL.
External links
Simputer(TM): License: SGPL V1.3
Open hardware licenses
Public copyright licenses | Simputer General Public License | Technology | 274 |
3,487,107 | https://en.wikipedia.org/wiki/Real-time%20polymerase%20chain%20reaction | A real-time polymerase chain reaction (real-time PCR, or qPCR when used quantitatively) is a laboratory technique of molecular biology based on the polymerase chain reaction (PCR). It monitors the amplification of a targeted DNA molecule during the PCR (i.e., in real time), not at its end, as in conventional PCR. Real-time PCR can be used quantitatively and semi-quantitatively (i.e., above/below a certain amount of DNA molecules).
Two common methods for the detection of PCR products in real-time PCR are (1) non-specific fluorescent dyes that intercalate with any double-stranded DNA and (2) sequence-specific DNA probes consisting of oligonucleotides that are labelled with a fluorescent reporter, which permits detection only after hybridization of the probe with its complementary sequence.
The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines propose that the abbreviation qPCR be used for quantitative real-time PCR and that RT-qPCR be used for reverse transcription–qPCR. The acronym "RT-PCR" commonly denotes reverse transcription polymerase chain reaction and not real-time PCR, but not all authors adhere to this convention.
Background
Cells in all organisms regulate gene expression by turnover of gene transcripts (single stranded RNA): The amount of an expressed gene in a cell can be measured by the number of copies of an RNA transcript of that gene present in a sample. In order to robustly detect and quantify gene expression from small amounts of RNA, amplification of the gene transcript is necessary. The polymerase chain reaction (PCR) is a common method for amplifying DNA; for RNA-based PCR the RNA sample is first reverse-transcribed to complementary DNA (cDNA) with reverse transcriptase.
In order to amplify small amounts of DNA, the same methodology is used as in conventional PCR using a DNA template, at least one pair of specific primers, deoxyribonucleotide triphosphates, a suitable buffer solution and a thermo-stable DNA polymerase. A substance marked with a fluorophore is added to this mixture in a thermal cycler that contains sensors for measuring the fluorescence of the fluorophore after it has been excited at the required wavelength allowing the generation rate to be measured for one or more specific products.
This allows the rate of generation of the amplified product to be measured at each PCR cycle. The data thus generated can be analysed by computer software to calculate relative gene expression (or mRNA copy number) in several samples. Quantitative PCR can also be applied to the detection and quantification of DNA in samples to determine the presence and abundance of a particular DNA sequence in these samples. This measurement is made after each amplification cycle, and this is the reason why this method is called real time PCR (that is, immediate or simultaneous PCR).
Quantitative PCR and DNA microarray are modern methodologies for studying gene expression. Older methods were used to measure mRNA abundance: differential display, RNase protection assay and northern blot. Northern blotting is often used to estimate the expression level of a gene by visualizing the abundance of its mRNA transcript in a sample. In this method, purified RNA is separated by agarose gel electrophoresis, transferred to a solid matrix (such as a nylon membrane), and probed with a specific DNA or RNA probe that is complementary to the gene of interest. Although this technique is still used to assess gene expression, it requires relatively large amounts of RNA and provides only qualitative or semi quantitative information of mRNA levels. Estimation errors arising from variations in the quantification method can be the result of DNA integrity, enzyme efficiency and many other factors. For this reason a number of standardization systems (often called normalization methods) have been developed. Some have been developed for quantifying total gene expression, but the most common are aimed at quantifying the specific gene being studied in relation to another gene called a normalizing gene, which is selected for its almost constant level of expression. These genes are often selected from housekeeping genes as their functions related to basic cellular survival normally imply constitutive gene expression. This enables researchers to report a ratio for the expression of the genes of interest divided by the expression of the selected normalizer, thereby allowing comparison of the former without actually knowing its absolute level of expression.
The most commonly used normalizing genes are those that code for the following molecules: tubulin, glyceraldehyde-3-phosphate dehydrogenase, albumin, cyclophilin, and ribosomal RNAs.
Basic principles
Real-time PCR is carried out in a thermal cycler with the capacity to illuminate each sample with a beam of light of at least one specified wavelength and detect the fluorescence emitted by the excited fluorophore. The thermal cycler is also able to rapidly heat and chill samples, thereby taking advantage of the physicochemical properties of the nucleic acids and DNA polymerase.
The PCR process generally consists of a series of temperature changes that are repeated 25–50 times. These cycles normally consist of three stages: the first, at around 95 °C, allows the separation of the nucleic acid's double chain; the second, at a temperature of around 50–60 °C, allows the binding of the primers with the DNA template; the third, at between 68 and 72 °C, facilitates the polymerization carried out by the DNA polymerase. Due to the small size of the fragments the last step is usually omitted in this type of PCR as the enzyme is able to replicate the DNA amplicon during the change between the alignment stage and the denaturing stage. In addition, in four-step PCR the fluorescence is measured during short temperature phases lasting only a few seconds in each cycle, with a temperature of, for example, 80 °C, in order to reduce the signal caused by the presence of primer dimers when a non-specific dye is used. The temperatures and the timings used for each cycle depend on a wide variety of parameters, such as: the enzyme used to synthesize the DNA, the concentration of divalent ions and deoxyribonucleotide triphosphates (dNTPs) in the reaction and the bonding temperature of the primers.
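The cycling scheme just described can be written down compactly as a data structure; the sketch below uses typical illustrative temperatures and hold times (not a validated protocol) and simply assumes that fluorescence is read once per cycle.

```python
# A schematic three-step qPCR cycling programme; all values are illustrative only.
STEPS = [
    ("denaturation", 95, 15),  # (step name, temperature in °C, hold time in seconds)
    ("annealing",    58, 30),
    ("extension",    72, 30),  # fluorescence is typically acquired at the end of this step
]

def cycling_programme(n_cycles=40):
    """Yield (cycle number, step name, temperature, hold time) for a full run."""
    for cycle in range(1, n_cycles + 1):
        for name, temp_c, seconds in STEPS:
            yield cycle, name, temp_c, seconds

# Print the first two cycles of the programme
for cycle, name, temp_c, seconds in cycling_programme(2):
    print(f"cycle {cycle:2d}: {name:12s} {temp_c} °C for {seconds} s")
```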
Chemical classification
Real-time PCR technique can be classified by the chemistry used to detect the PCR product, specific or non-specific fluorochromes.
Non-specific detection: real-time PCR with double-stranded DNA-binding dyes as reporters
A DNA-binding dye binds to all double-stranded (ds) DNA in PCR, increasing the fluorescence quantum yield of the dye. An increase in DNA product during PCR therefore leads to an increase in fluorescence intensity measured at each cycle. However, dsDNA dyes such as SYBR Green will bind to all dsDNA PCR products, including nonspecific PCR products (such as primer dimer). This can potentially interfere with, or prevent, accurate monitoring of the intended target sequence.
In real-time PCR with dsDNA dyes the reaction is prepared as usual, with the addition of fluorescent dsDNA dye. Then the reaction is run in a real-time PCR instrument, and after each cycle, the intensity of fluorescence is measured with a detector; the dye only fluoresces when bound to the dsDNA (i.e., the PCR product).
This method has the advantage of only needing a pair of primers to carry out the amplification, which keeps costs down; multiple target sequences can be monitored in a tube by using different types of dyes.
Specific detection: fluorescent reporter probe method
Fluorescent reporter probes detect only the DNA containing the sequence complementary to the probe; therefore, use of the reporter probe significantly increases specificity, and enables performing the technique even in the presence of other dsDNA. Using different-coloured labels, fluorescent probes can be used in multiplex assays for monitoring several target sequences in the same tube. The specificity of fluorescent reporter probes also prevents interference of measurements caused by primer dimers, which are undesirable potential by-products in PCR. However, fluorescent reporter probes do not prevent the inhibitory effect of the primer dimers, which may depress accumulation of the desired products in the reaction.
The method relies on a DNA-based probe with a fluorescent reporter at one end and a quencher of fluorescence at the opposite end of the probe. The close proximity of the reporter to the quencher prevents detection of its fluorescence; breakdown of the probe by the 5' to 3' exonuclease activity of the Taq polymerase breaks the reporter-quencher proximity and thus allows unquenched emission of fluorescence, which can be detected after excitation with a laser. An increase in the product targeted by the reporter probe at each PCR cycle therefore causes a proportional increase in fluorescence due to the breakdown of the probe and release of the reporter.
The PCR is prepared as usual (see PCR), and the reporter probe is added.
As the reaction commences, during the annealing stage of the PCR both probe and primers anneal to the DNA target.
Polymerisation of a new DNA strand is initiated from the primers, and once the polymerase reaches the probe, its 5'-3'-exonuclease degrades the probe, physically separating the fluorescent reporter from the quencher, resulting in an increase in fluorescence.
Fluorescence is detected and measured in a real-time PCR machine, and its geometric increase corresponding to exponential increase of the product is used to determine the quantification cycle (Cq) in each reaction.
Melting temperature analysis
Real-time PCR permits the identification of specific, amplified DNA fragments by analysis of their melting temperature (also called the Tm value). The method used is usually PCR with double-stranded DNA-binding dyes as reporters, and the dye used is usually SYBR Green. The DNA melting temperature is specific to the amplified fragment. The results of this technique are obtained by comparing the dissociation curves of the analysed DNA samples.
Unlike conventional PCR, this method avoids the previous use of electrophoresis techniques to demonstrate the results of all the samples. This is because, despite being a kinetic technique, quantitative PCR is usually evaluated at a distinct end point. The technique therefore usually provides more rapid results and/or uses fewer reactants than electrophoresis. If subsequent electrophoresis is required it is only necessary to test those samples that real time PCR has shown to be doubtful and/or to ratify the results for samples that have tested positive for a specific determinant.
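A minimal sketch of how a melting temperature can be read from a dissociation curve: Tm is taken as the peak of the negative derivative of fluorescence with respect to temperature (-dF/dT). The data below are synthetic; real instruments apply their own smoothing and peak-calling.

```python
import numpy as np

def melting_temperature(temps_c, fluorescence):
    """Return Tm (°C) as the temperature at the peak of -dF/dT."""
    temps_c = np.asarray(temps_c, dtype=float)
    fluor = np.asarray(fluorescence, dtype=float)
    neg_dfdt = -np.gradient(fluor, temps_c)   # melt peak: fluorescence drops fastest at Tm
    return temps_c[np.argmax(neg_dfdt)]

# Synthetic dissociation curve: dye signal falls sigmoidally near ~84 °C as the dsDNA denatures
temps = np.linspace(65.0, 95.0, 301)
fluor = 1.0 / (1.0 + np.exp((temps - 84.0) / 0.8))
print(f"Tm ≈ {melting_temperature(temps, fluor):.1f} °C")
```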
Modeling
Unlike end point PCR (conventional PCR), real time PCR allows monitoring of the desired product at any point in the amplification process by measuring fluorescence (in real time, its level is measured against a given threshold). A commonly employed method of DNA quantification by real-time PCR relies on plotting fluorescence against the number of cycles on a logarithmic scale. A threshold for detection of DNA-based fluorescence is set at 3–5 times the standard deviation of the signal noise above background. The number of cycles at which the fluorescence exceeds the threshold is called the threshold cycle (Ct) or, according to the MIQE guidelines, quantification cycle (Cq).
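A sketch of how the quantification cycle can be estimated from an amplification curve along these lines: the threshold is placed a few standard deviations above the baseline noise and the crossing cycle is interpolated. The synthetic curve and the choice of five standard deviations are illustrative assumptions; commercial instruments use their own baselining algorithms.

```python
import numpy as np

def quantification_cycle(fluorescence, baseline_cycles=10, k=5.0):
    """Estimate Cq: threshold = baseline mean + k * baseline standard deviation,
    with linear interpolation between the two cycles bracketing the crossing."""
    f = np.asarray(fluorescence, dtype=float)
    threshold = f[:baseline_cycles].mean() + k * f[:baseline_cycles].std()
    above = np.flatnonzero(f > threshold)
    if above.size == 0 or above[0] == 0:
        return None                      # no clean threshold crossing detected
    i = above[0]                         # f[i-1] is cycle i, f[i] is cycle i+1
    frac = (threshold - f[i - 1]) / (f[i] - f[i - 1])
    return i + frac                      # fractional cycle at which the threshold is crossed

# Synthetic 40-cycle amplification curve with a small amount of baseline noise
cycles = np.arange(1, 41)
rng = np.random.default_rng(0)
signal = 1.0 / (1.0 + np.exp(-(cycles - 25) / 1.5)) + rng.normal(0.0, 0.002, cycles.size)
print(f"estimated Cq ≈ {quantification_cycle(signal):.1f}")
```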
During the exponential amplification phase, the quantity of the target DNA template (amplicon) doubles every cycle. For example, a DNA sample whose Cq precedes that of another sample by 3 cycles contained 2³ = 8 times more template. However, the efficiency of amplification is often variable among primers and templates. Therefore, the efficiency of a primer-template combination is assessed in a titration experiment with serial dilutions of DNA template to create a standard curve of the change in (Cq) with each dilution. The slope of the linear regression is then used to determine the efficiency of amplification, which is 100% if a dilution of 1:2 results in a (Cq) difference of 1. The cycle threshold method makes several assumptions of reaction mechanism and has a reliance on data from low signal-to-noise regions of the amplification profile that can introduce substantial variance during the data analysis.
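A sketch of the standard-curve calculation described above: Cq is regressed against the logarithm of the template dilution and the amplification efficiency is derived from the slope. The dilution series and Cq values are invented for illustration; E = 10^(-1/slope) - 1 is the conventional formula for a log10 dilution axis.

```python
import numpy as np

# Hypothetical 10-fold serial dilution series and the measured Cq values
dilutions = np.array([1.0, 0.1, 0.01, 0.001, 0.0001])
cq        = np.array([15.1, 18.5, 21.8, 25.2, 28.6])

slope, intercept = np.polyfit(np.log10(dilutions), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0      # 1.0 means perfect doubling every cycle

print(f"slope ≈ {slope:.2f} cycles per 10-fold dilution")
print(f"amplification efficiency ≈ {efficiency:.0%}")
```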
To quantify gene expression, the (Cq) for an RNA or DNA from the gene of interest is subtracted from the (Cq) of RNA/DNA from a housekeeping gene in the same sample to normalize for variation in the amount and quality of RNA between different samples. This normalization procedure is commonly called the ΔCt-method and permits comparison of expression of a gene of interest among different samples. However, for such comparison, expression of the normalizing reference gene needs to be very similar across all the samples. Choosing a reference gene fulfilling this criterion is therefore of high importance, and often challenging, because only very few genes show equal levels of expression across a range of different conditions or tissues. Although cycle threshold analysis is integrated with many commercial software systems, there are more accurate and reliable methods of analysing amplification profile data that should be considered in cases where reproducibility is a concern.
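The ΔCt normalization just described is often extended to a ΔΔCt (fold-change) comparison between a treated sample and a control; a minimal sketch, assuming roughly 100% amplification efficiency for both the target and the reference gene (all Cq values below are invented):

```python
def fold_change(cq_target_sample, cq_ref_sample, cq_target_control, cq_ref_control):
    """Relative expression by the 2^(-ΔΔCt) method, assuming ~100 % efficiency."""
    delta_ct_sample  = cq_target_sample - cq_ref_sample     # normalise to the reference gene
    delta_ct_control = cq_target_control - cq_ref_control
    delta_delta_ct   = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical treated sample vs untreated control, with a housekeeping reference gene
print(fold_change(cq_target_sample=22.0, cq_ref_sample=18.0,
                  cq_target_control=25.0, cq_ref_control=18.2))  # ≈ 7.0-fold up-regulation
```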
Mechanism-based qPCR quantification methods have also been suggested, and have the advantage that they do not require a standard curve for quantification. Methods such as MAK2 have been shown to have equal or better quantitative performance to standard curve methods. These mechanism-based methods use knowledge about the polymerase amplification process to generate estimates of the original sample concentration. An extension of this approach includes an accurate model of the entire PCR reaction profile, which allows for the use of high signal-to-noise data and the ability to validate data quality prior to analysis.
According to research of Ruijter et al. MAK2 assumes constant amplification efficiency during the PCR reaction. However, theoretical analysis of polymerase chain reaction, from which MAK2 was derived, has revealed that amplification efficiency is not constant throughout PCR. While MAK2 quantification provides reliable estimates of target DNA concentration in a sample under normal qPCR conditions, MAK2 does not reliably quantify target concentration for qPCR assays with competimeters.
Applications
There are numerous applications for quantitative polymerase chain reaction in the laboratory. It is commonly used for both diagnostic and basic research. Uses of the technique in industry include the quantification of microbial load in foods or on vegetable matter, the detection of GMOs (genetically modified organisms) and the quantification and genotyping of human viral pathogens.
Quantification of gene expression
Quantifying gene expression by traditional DNA detection methods is unreliable. Detection of mRNA on a northern blot or PCR products on a gel or Southern blot does not allow precise quantification. For example, over the 20–40 cycles of a typical PCR, the amount of DNA product reaches a plateau that is not directly correlated with the amount of target DNA in the initial PCR.
Real-time PCR can be used to quantify nucleic acids by two common methods: relative quantification and absolute quantification. Absolute quantification gives the exact number of target DNA molecules by comparison with DNA standards using a calibration curve. It is therefore essential that the PCR of the sample and the standard have the same amplification efficiency.
Relative quantification is based on internal reference genes to determine fold-differences in expression of the target gene. The quantification is expressed as the change in expression levels of mRNA interpreted as complementary DNA (cDNA, generated by reverse transcription of mRNA). Relative quantification is easier to carry out as it does not require a calibration curve as the amount of the studied gene is compared to the amount of a control reference gene.
As the units used to express the results of relative quantification are unimportant the results can be compared across a number of different RTqPCR. The reason for using one or more housekeeping genes is to correct non-specific variation, such as the differences in the quantity and quality of RNA used, which can affect the efficiency of reverse transcription and therefore that of the whole PCR process. However, the most crucial aspect of the process is that the reference gene must be stable.
The selection of these reference genes was traditionally carried out in molecular biology using qualitative or semi-quantitative studies such as the visual examination of RNA gels, northern blot densitometry or semi-quantitative PCR (PCR mimics). Now, in the genome era, it is possible to carry out a more detailed estimate for many organisms using transcriptomic technologies. However, research has shown that amplification of the majority of reference genes used in quantifying the expression of mRNA varies according to experimental conditions. It is therefore necessary to carry out an initial statistically sound methodological study in order to select the most suitable reference gene.
A number of statistical algorithms have been developed that can detect which gene or genes are most suitable for use under given conditions. Those like geNORM or BestKeeper can compare pairs or geometric means for a matrix of different reference genes and tissues.
Diagnostic uses
Diagnostic qualitative PCR is applied to rapidly detect nucleic acids that are diagnostic of, for example, infectious diseases, cancer and genetic abnormalities. The introduction of qualitative PCR assays to the clinical microbiology laboratory has significantly improved the diagnosis of infectious diseases, and is deployed as a tool to detect newly emerging diseases, such as new strains of flu and coronavirus, in diagnostic tests.
Microbiological uses
Quantitative PCR is also used by microbiologists working in the fields of food safety, food spoilage and fermentation and for the microbial risk assessment of water quality (drinking and recreational waters) and in public health protection.
qPCR may also be used to amplify taxonomic or functional markers of genes in DNA taken from environmental samples. Markers are represented by genetic fragments of DNA or complementary DNA. By amplifying a certain genetic element, one can quantify the amount of the element in the sample prior to amplification. Using taxonomic markers (ribosomal genes) and qPCR can help determine the amount of microorganisms in a sample, and can identify different families, genera, or species based on the specificity of the marker. Using functional markers (protein-coding genes) can show gene expression within a community, which may reveal information about the environment.
Detection of phytopathogens
The agricultural industry is constantly striving to produce plant propagules or seedlings that are free of pathogens in order to prevent economic losses and safeguard health. Systems have been developed that allow detection of small amounts of the DNA of Phytophthora ramorum, an oomycete that kills oaks and other species, mixed in with the DNA of the host plant. Discrimination between the DNA of the pathogen and the plant is based on the amplification of ITS sequences, spacers located in ribosomal RNA gene's coding area, which are characteristic for each taxon. Field-based versions of this technique have also been developed for identifying the same pathogen.
Detection of genetically modified organisms
qPCR using reverse transcription (RT-qPCR) can be used to detect GMOs given its sensitivity and dynamic range in detecting DNA. Alternatives such as DNA or protein analysis are usually less sensitive. Specific primers are used that amplify not the transgene but the promoter, terminator or even intermediate sequences used during the process of engineering the vector. As the process of creating a transgenic plant normally leads to the insertion of more than one copy of the transgene its quantity is also commonly assessed. This is often carried out by relative quantification using a control gene from the treated species that is only present as a single copy.
Clinical quantification and genotyping
Viruses can be present in humans due to direct infection or co-infections, which makes diagnosis difficult using classical techniques and can result in an incorrect prognosis and treatment. The use of qPCR allows both the quantification and genotyping (characterization of the strain, carried out using melting curves) of a virus such as the hepatitis B virus. The degree of infection, quantified as the copies of the viral genome per unit of the patient's tissue, is relevant in many cases; for example, the probability that the type 1 herpes simplex virus reactivates is related to the number of infected neurons in the ganglia. This quantification is carried out either with reverse transcription or without it, as occurs if the virus becomes integrated in the human genome at any point in its cycle, such as happens in the case of HPV (human papillomavirus), where some of its variants are associated with the appearance of cervical cancer. Real-time PCR has also enabled the quantification of human cytomegalovirus (CMV), which is seen in patients who are immunosuppressed following solid organ or bone marrow transplantation.
References
Bibliography
Molecular biology
Polymerase chain reaction
Real-time technology | Real-time polymerase chain reaction | Chemistry,Technology,Biology | 4,443 |
13,483 | https://en.wikipedia.org/wiki/Hemoglobin | Hemoglobin (haemoglobin, Hb or Hgb) is a protein containing iron that facilitates the transportation of oxygen in red blood cells. Almost all vertebrates contain hemoglobin, with the sole exception of the fish family Channichthyidae. Hemoglobin in the blood carries oxygen from the respiratory organs (lungs or gills) to the other tissues of the body, where it releases the oxygen to enable aerobic respiration which powers an animal's metabolism. A healthy human has 12 to 20 grams of hemoglobin in every 100 mL of blood. Hemoglobin is a metalloprotein, a chromoprotein, and a globulin.
In mammals, hemoglobin makes up about 96% of a red blood cell's dry weight (excluding water), and around 35% of the total weight (including water). Hemoglobin has an oxygen-binding capacity of 1.34 mL of O2 per gram, which increases the total blood oxygen capacity seventy-fold compared to dissolved oxygen in blood plasma alone. The mammalian hemoglobin molecule can bind and transport up to four oxygen molecules.
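These figures can be checked with a rough calculation, assuming a typical hemoglobin concentration of about 15 g per 100 mL of blood and roughly 0.3 mL of O2 physically dissolved per 100 mL of arterial plasma (the dissolved-oxygen figure is an assumed typical value, not taken from the text above):

```latex
\[
15\ \tfrac{\text{g Hb}}{100\ \text{mL}} \times 1.34\ \tfrac{\text{mL O}_2}{\text{g Hb}}
\;\approx\; 20\ \tfrac{\text{mL O}_2}{100\ \text{mL blood}},
\qquad
\frac{20\ \text{mL O}_2}{\approx 0.3\ \text{mL O}_2\ \text{(dissolved)}} \;\approx\; 70\text{-fold}.
\]
```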
Hemoglobin also transports other gases. It carries off some of the body's respiratory carbon dioxide (about 20–25% of the total) as carbaminohemoglobin, in which CO2 binds to the heme protein. The molecule also carries the important regulatory molecule nitric oxide bound to a thiol group in the globin protein, releasing it at the same time as oxygen.
Hemoglobin is also found in other cells, including in the A9 dopaminergic neurons of the substantia nigra, macrophages, alveolar cells, lungs, retinal pigment epithelium, hepatocytes, mesangial cells of the kidney, endometrial cells, cervical cells, and vaginal epithelial cells. In these tissues, hemoglobin absorbs unneeded oxygen as an antioxidant, and regulates iron metabolism. Excessive glucose in the blood can attach to hemoglobin and raise the level of hemoglobin A1c.
Hemoglobin and hemoglobin-like molecules are also found in many invertebrates, fungi, and plants. In these organisms, hemoglobins may carry oxygen, or they may transport and regulate other small molecules and ions such as carbon dioxide, nitric oxide, hydrogen sulfide and sulfide. A variant called leghemoglobin serves to scavenge oxygen away from anaerobic systems such as the nitrogen-fixing nodules of leguminous plants, preventing oxygen poisoning.
The medical condition hemoglobinemia, an excess of free hemoglobin in the blood plasma, is caused by intravascular hemolysis, in which hemoglobin leaks from red blood cells into the blood plasma.
Research history
In 1825, Johann Friedrich Engelhart discovered that the ratio of iron to protein is identical in the hemoglobins of several species. From the known atomic mass of iron, he calculated the molecular mass of hemoglobin to be n × 16000 (n = number of iron atoms per hemoglobin molecule, now known to be 4), the first determination of a protein's molecular mass. This "hasty conclusion" drew ridicule from colleagues who could not believe that any molecule could be so large. However, Gilbert Smithson Adair confirmed Engelhart's results in 1925 by measuring the osmotic pressure of hemoglobin solutions.
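Engelhart's reasoning can be reconstructed as a minimum-molecular-mass estimate: each hemoglobin molecule must contain at least one whole iron atom, so the molecular mass is at least the atomic mass of iron divided by the measured iron mass fraction (the figure of roughly 0.35% used below is an assumed value for the historical measurement):

```latex
\[
M_{\min} \;=\; \frac{M_{\mathrm{Fe}}}{w_{\mathrm{Fe}}}
\;\approx\; \frac{55.8\ \text{g/mol}}{0.0035}
\;\approx\; 16\,000\ \text{g/mol},
\]
```

so the true molecular mass must be n × 16 000 for some whole number n of iron atoms; with n = 4 this gives about 64 000 g/mol, close to the modern value.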
Although blood had been known to carry oxygen since at least 1794, the oxygen-carrying property of hemoglobin was described by Hünefeld in 1840. In 1851, German physiologist Otto Funke published a series of articles in which he described growing hemoglobin crystals by successively diluting red blood cells with a solvent such as pure water, alcohol or ether, followed by slow evaporation of the solvent from the resulting protein solution. Hemoglobin's reversible oxygenation was described a few years later by Felix Hoppe-Seyler.
With the development of X-ray crystallography, it became possible to sequence protein structures. In 1959, Max Perutz determined the molecular structure of hemoglobin. For this work he shared the 1962 Nobel Prize in Chemistry with John Kendrew, who sequenced the globular protein myoglobin.
The role of hemoglobin in the blood was elucidated by French physiologist Claude Bernard.
The name hemoglobin (or haemoglobin) is derived from the words heme (or haem) and globin, reflecting the fact that each subunit of hemoglobin is a globular protein with an embedded heme group. Each heme group contains one iron atom, that can bind one oxygen molecule through ion-induced dipole forces. The most common type of hemoglobin in mammals contains four such subunits.
Genetics
Hemoglobin consists of protein subunits (globin molecules), which are polypeptides, long folded chains of specific amino acids which determine the protein's chemical properties and function. The amino acid sequence of any polypeptide is translated from a segment of DNA, the corresponding gene.
There is more than one hemoglobin gene. In humans, hemoglobin A (the main form of hemoglobin in adults) is coded by genes HBA1, HBA2, and HBB. Alpha 1 and alpha 2 subunits are respectively coded by genes HBA1 and HBA2 close together on chromosome 16, while the beta subunit is coded by gene HBB on chromosome 11. The amino acid sequences of the globin subunits usually differ between species, with the difference growing with evolutionary distance. For example, the most common hemoglobin sequences in humans, bonobos and chimpanzees are completely identical, with exactly the same alpha and beta globin protein chains. Human and gorilla hemoglobin differ in one amino acid in both alpha and beta chains, and these differences grow larger between less closely related species.
Mutations in the genes for hemoglobin can result in variants of hemoglobin within a single species, although one sequence is usually "most common" in each species. Many of these mutations cause no disease, but some cause a group of hereditary diseases called hemoglobinopathies. The best known hemoglobinopathy is sickle-cell disease, which was the first human disease whose mechanism was understood at the molecular level. A mostly separate set of diseases called thalassemias involves underproduction of normal and sometimes abnormal hemoglobins, through problems and mutations in globin gene regulation. All these diseases produce anemia.
Variations in hemoglobin sequences, as with other proteins, may be adaptive. For example, hemoglobin has been found to adapt in different ways to the thin air at high altitudes, where lower partial pressure of oxygen diminishes its binding to hemoglobin compared to the higher pressures at sea level. Recent studies of deer mice found mutations in four genes that can account for differences between high- and low-elevation populations. It was found that the genes of the two breeds are "virtually identical—except for those that govern the oxygen-carrying capacity of their hemoglobin. . . . The genetic difference enables highland mice to make more efficient use of their oxygen." Mammoth hemoglobin featured mutations that allowed for oxygen delivery at lower temperatures, thus enabling mammoths to migrate to higher latitudes during the Pleistocene. This was also found in hummingbirds that inhabit the Andes. Hummingbirds already expend a lot of energy and thus have high oxygen demands and yet Andean hummingbirds have been found to thrive at high altitudes. Non-synonymous mutations in the hemoglobin gene of multiple species living at high elevations (Oreotrochilus, A. castelnaudii, C. violifer, P. gigas, and A. viridicauda) have caused the protein to have less of an affinity for inositol hexaphosphate (IHP), a molecule found in birds that has a similar role as 2,3-BPG in humans; this results in the ability to bind oxygen at lower partial pressures.
Birds' unique circulatory lungs also promote efficient use of oxygen at low partial pressures of O2. These two adaptations reinforce each other and account for birds' remarkable high-altitude performance.
Hemoglobin adaptation extends to humans, as well. There is a higher offspring survival rate among Tibetan women with high oxygen saturation genotypes residing at 4,000 m. Natural selection seems to be the main force working on this gene because the mortality rate of offspring is significantly lower for women with higher hemoglobin-oxygen affinity when compared to the mortality rate of offspring from women with low hemoglobin-oxygen affinity. While the exact genotype and mechanism by which this occurs is not yet clear, selection is acting on these women's ability to bind oxygen in low partial pressures, which overall allows them to better sustain crucial metabolic processes.
Synthesis
Hemoglobin (Hb) is synthesized in a complex series of steps. The heme part is synthesized in a series of steps in the mitochondria and the cytosol of immature red blood cells, while the globin protein parts are synthesized by ribosomes in the cytosol. Production of Hb continues in the cell throughout its early development from the proerythroblast to the reticulocyte in the bone marrow. At this point, the nucleus is lost in mammalian red blood cells, but not in birds and many other species. Even after the loss of the nucleus in mammals, residual ribosomal RNA allows further synthesis of Hb until the reticulocyte loses its RNA soon after entering the vasculature (this hemoglobin-synthetic RNA in fact gives the reticulocyte its reticulated appearance and name).
Structure of heme
Hemoglobin has a quaternary structure characteristic of many multi-subunit globular proteins. Most of the amino acids in hemoglobin form alpha helices, and these helices are connected by short non-helical segments. Hydrogen bonds stabilize the helical sections inside this protein, causing attractions within the molecule, which then causes each polypeptide chain to fold into a specific shape. Hemoglobin's quaternary structure comes from its four subunits in roughly a tetrahedral arrangement.
In most vertebrates, the hemoglobin molecule is an assembly of four globular protein subunits. Each subunit is composed of a protein chain tightly associated with a non-protein prosthetic heme group. Each protein chain arranges into a set of alpha-helix structural segments connected together in a globin fold arrangement. Such a name is given because this arrangement is the same folding motif used in other heme/globin proteins such as myoglobin. This folding pattern contains a pocket that strongly binds the heme group.
A heme group consists of an iron (Fe) ion held in a heterocyclic ring, known as a porphyrin. This porphyrin ring consists of four pyrrole molecules cyclically linked together (by methine bridges) with the iron ion bound in the center. The iron ion, which is the site of oxygen binding, coordinates with the four nitrogen atoms in the center of the ring, which all lie in one plane. The heme is bound strongly (covalently) to the globular protein via the N atoms of the imidazole ring of F8 histidine residue (also known as the proximal histidine) below the porphyrin ring. A sixth position can reversibly bind oxygen by a coordinate covalent bond, completing the octahedral group of six ligands. This reversible bonding with oxygen is why hemoglobin is so useful for transporting oxygen around the body. Oxygen binds in an "end-on bent" geometry where one oxygen atom binds to Fe and the other protrudes at an angle. When oxygen is not bound, a very weakly bonded water molecule fills the site, forming a distorted octahedron.
Even though carbon dioxide is carried by hemoglobin, it does not compete with oxygen for the iron-binding positions but is bound to the amine groups of the protein chains attached to the heme groups.
The iron ion may be either in the ferrous Fe2+ or in the ferric Fe3+ state, but ferrihemoglobin (methemoglobin) (Fe3+) cannot bind oxygen. In binding, oxygen temporarily and reversibly oxidizes the iron (Fe2+) to (Fe3+) while the oxygen temporarily turns into the superoxide ion; thus iron must exist in the +2 oxidation state to bind oxygen. If the superoxide ion associated with Fe3+ is protonated, the hemoglobin iron will remain oxidized and incapable of binding oxygen. In such cases, the enzyme methemoglobin reductase will be able to eventually reactivate methemoglobin by reducing the iron center.
In adult humans, the most common hemoglobin type is a tetramer (which contains four subunit proteins) called hemoglobin A, consisting of two α and two β subunits non-covalently bound, each made of 141 and 146 amino acid residues, respectively. This is denoted as α2β2. The subunits are structurally similar and about the same size. Each subunit has a molecular weight of about 16,000 daltons, for a total molecular weight of the tetramer of about 64,000 daltons (64,458 g/mol). Thus, 1 g/dL=0.1551 mmol/L. Hemoglobin A is the most intensively studied of the hemoglobin molecules.
In human infants, the fetal hemoglobin molecule is made up of 2 α chains and 2 γ chains. The γ chains are gradually replaced by β chains as the infant grows.
The four polypeptide chains are bound to each other by salt bridges, hydrogen bonds, and the hydrophobic effect.
Oxygen saturation
In general, hemoglobin can be saturated with oxygen molecules (oxyhemoglobin), or desaturated with oxygen molecules (deoxyhemoglobin).
Oxyhemoglobin
Oxyhemoglobin is formed during physiological respiration when oxygen binds to the heme component of the protein hemoglobin in red blood cells. This process occurs in the pulmonary capillaries adjacent to the alveoli of the lungs. The oxygen then travels through the blood stream to be dropped off at cells where it is utilized as a terminal electron acceptor in the production of ATP by the process of oxidative phosphorylation. It does not, however, help to counteract a decrease in blood pH; ventilation, or breathing, may reverse such a decrease by removing carbon dioxide, thus causing the pH to rise.
Hemoglobin exists in two forms, a taut (tense) form (T) and a relaxed form (R). Various factors such as low pH, high CO2 and high 2,3 BPG at the level of the tissues favor the taut form, which has low oxygen affinity and releases oxygen in the tissues. Conversely, a high pH, low CO2, or low 2,3 BPG favors the relaxed form, which can better bind oxygen. The partial pressure of the system also affects O2 affinity where, at high partial pressures of oxygen (such as those present in the alveoli), the relaxed (high affinity, R) state is favoured. Inversely, at low partial pressures (such as those present in respiring tissues), the (low affinity, T) tense state is favoured. Additionally, the binding of oxygen to the iron(II) heme pulls the iron into the plane of the porphyrin ring, causing a slight conformational shift. The shift encourages oxygen to bind to the three remaining heme units within hemoglobin (thus, oxygen binding is cooperative).
Classically, the iron in oxyhemoglobin is seen as existing in the iron(II) oxidation state. However, the complex of oxygen with heme iron is diamagnetic, whereas both oxygen and high-spin iron(II) are paramagnetic. Experimental evidence strongly suggests heme iron is in the iron(III) oxidation state in oxyhemoglobin, with the oxygen existing as superoxide anion (O2•−) or in a covalent charge-transfer complex.
Deoxygenated hemoglobin
Deoxygenated hemoglobin (deoxyhemoglobin) is the form of hemoglobin without the bound oxygen. The absorption spectra of oxyhemoglobin and deoxyhemoglobin differ. The oxyhemoglobin has significantly lower absorption of the 660 nm wavelength than deoxyhemoglobin, while at 940 nm its absorption is slightly higher. This difference is used for the measurement of the amount of oxygen in a patient's blood by an instrument called a pulse oximeter. This difference also accounts for the presentation of cyanosis, the blue to purplish color that tissues develop during hypoxia.
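The two-wavelength absorption difference described above is the basis of pulse oximetry. As an illustration of the principle only (not a clinical calibration), the sketch below computes the "ratio of ratios" R from the pulsatile (AC) and baseline (DC) light signals at 660 nm and 940 nm and maps it to a saturation estimate with a commonly quoted textbook approximation (SpO2 ≈ 110 − 25R); real devices use empirically calibrated lookup tables, and the signal values here are made-up assumptions.

```python
def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Pulse-oximetry 'ratio of ratios' R from the pulsatile (AC) and steady (DC)
    components of the red (~660 nm) and infrared (~940 nm) signals."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def estimate_spo2(r):
    """Map R to an SpO2 estimate using the rough textbook approximation
    SpO2 ~ 110 - 25*R; real oximeters use device-specific calibration curves."""
    return max(0.0, min(100.0, 110.0 - 25.0 * r))

# Illustrative (made-up) signal amplitudes: deoxygenated blood absorbs more at 660 nm,
# so a larger red AC/DC fraction (larger R) corresponds to lower estimated saturation.
r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)   # R = 0.5
print(round(r, 2), round(estimate_spo2(r), 1))                        # 0.5 -> ~97.5 %
```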
Deoxygenated hemoglobin is paramagnetic; it is weakly attracted to magnetic fields. In contrast, oxygenated hemoglobin exhibits diamagnetism, a weak repulsion from a magnetic field.
Evolution of vertebrate hemoglobin
Scientists agree that the event that separated myoglobin from hemoglobin occurred after lampreys diverged from jawed vertebrates. This separation of myoglobin and hemoglobin allowed for the different functions of the two molecules to arise and develop: myoglobin has more to do with oxygen storage while hemoglobin is tasked with oxygen transport. The α- and β-like globin genes encode the individual subunits of the protein. The predecessors of these genes arose through another duplication event, after the gnathostome (jawed vertebrate) common ancestor diverged from jawless fish, approximately 450–500 million years ago. Ancestral reconstruction studies suggest that the preduplication ancestor of the α and β genes was a dimer made up of identical globin subunits, which then evolved to assemble into a tetrameric architecture after the duplication. The development of α and β genes created the potential for hemoglobin to be composed of multiple distinct subunits, a physical composition central to hemoglobin's ability to transport oxygen. Having multiple subunits contributes to hemoglobin's ability to bind oxygen cooperatively as well as be regulated allosterically. Subsequently, the α gene also underwent a duplication event to form the HBA1 and HBA2 genes. These further duplications and divergences have created a diverse range of α- and β-like globin genes that are regulated so that certain forms occur at different stages of development.
Most ice fish of the family Channichthyidae have lost their hemoglobin genes as an adaptation to cold water.
Cooperativity
When oxygen binds to the iron complex, it causes the iron atom to move back toward the center of the plane of the porphyrin ring (see moving diagram). At the same time, the imidazole side-chain of the histidine residue interacting at the other pole of the iron is pulled toward the porphyrin ring. This interaction forces the plane of the ring sideways toward the outside of the tetramer, and also induces a strain in the protein helix containing the histidine as it moves nearer to the iron atom. This strain is transmitted to the remaining three monomers in the tetramer, where it induces a similar conformational change in the other heme sites such that binding of oxygen to these sites becomes easier.
As oxygen binds to one monomer of hemoglobin, the tetramer's conformation shifts from the T (tense) state to the R (relaxed) state. This shift promotes the binding of oxygen to the remaining three monomers' heme groups, thus saturating the hemoglobin molecule with oxygen.
In the tetrameric form of normal adult hemoglobin, the binding of oxygen is, thus, a cooperative process. The binding affinity of hemoglobin for oxygen is increased by the oxygen saturation of the molecule, with the first molecules of oxygen bound influencing the shape of the binding sites for the next ones, in a way favorable for binding. This positive cooperative binding is achieved through steric conformational changes of the hemoglobin protein complex as discussed above; i.e., when one subunit protein in hemoglobin becomes oxygenated, a conformational or structural change in the whole complex is initiated, causing the other subunits to gain an increased affinity for oxygen. As a consequence, the oxygen binding curve of hemoglobin is sigmoidal, or S-shaped, as opposed to the normal hyperbolic curve associated with noncooperative binding.
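This sigmoidal, cooperative binding is often summarized empirically with the Hill equation. The sketch below is a minimal illustration, assuming typical textbook values for normal adult hemoglobin (P50 ≈ 26.8 mmHg, Hill coefficient n ≈ 2.7); it is a descriptive fit, not a mechanistic model of the T→R transition.

```python
def hill_saturation(po2_mmhg, p50=26.8, n=2.7):
    """Fractional O2 saturation of hemoglobin from the Hill equation.
    p50: partial pressure at half-saturation; n: Hill coefficient (n > 1 => cooperativity)."""
    return po2_mmhg**n / (p50**n + po2_mmhg**n)

# A non-cooperative carrier (n = 1) gives a hyperbolic curve; hemoglobin's n ~ 2.7
# gives the S-shaped curve: nearly full loading at alveolar pressures (~100 mmHg)
# and substantial unloading at tissue pressures (~20-40 mmHg).
for po2 in (20, 26.8, 40, 100):
    print(po2, round(hill_saturation(po2), 2))   # ~0.31, 0.50, ~0.75, ~0.97
```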
The dynamic mechanism of the cooperativity in hemoglobin and its relation with low-frequency resonance has been discussed.
Binding of ligands other than oxygen
Besides the oxygen ligand, which binds to hemoglobin in a cooperative manner, hemoglobin ligands also include competitive inhibitors such as carbon monoxide (CO) and allosteric ligands such as carbon dioxide (CO2) and nitric oxide (NO). The carbon dioxide is bound to amino groups of the globin proteins to form carbaminohemoglobin; this mechanism is thought to account for about 10% of carbon dioxide transport in mammals. Nitric oxide can also be transported by hemoglobin; it is bound to specific thiol groups in the globin protein to form an S-nitrosothiol, which dissociates into free nitric oxide and thiol again, as the hemoglobin releases oxygen from its heme site. This nitric oxide transport to peripheral tissues is hypothesized to assist oxygen transport in tissues, by releasing vasodilatory nitric oxide to tissues in which oxygen levels are low.
Competitive
The binding of oxygen is affected by molecules such as carbon monoxide (for example, from tobacco smoking, exhaust gas, and incomplete combustion in furnaces). CO competes with oxygen at the heme binding site. Hemoglobin's binding affinity for CO is 250 times greater than its affinity for oxygen. Since carbon monoxide is a colorless, odorless and tasteless gas that poses a potentially fatal threat, carbon monoxide detectors have become commercially available to warn of dangerous levels in residences. When hemoglobin combines with CO, it forms a very bright red compound called carboxyhemoglobin, which may cause the skin of CO poisoning victims to appear pink in death, instead of white or blue. When inspired air contains CO levels as low as 0.02%, headache and nausea occur; if the CO concentration is increased to 0.1%, unconsciousness will follow. In heavy smokers, up to 20% of the oxygen-active sites can be blocked by CO.
In similar fashion, hemoglobin also has competitive binding affinity for cyanide (CN−), sulfur monoxide (SO), and sulfide (S2−), including hydrogen sulfide (H2S). All of these bind to iron in heme without changing its oxidation state, but they nevertheless inhibit oxygen-binding, causing grave toxicity.
The iron atom in the heme group must initially be in the ferrous (Fe2+) oxidation state to support oxygen and other gases' binding and transport (it temporarily switches to ferric during the time oxygen is bound, as explained above). Initial oxidation to the ferric (Fe3+) state without oxygen converts hemoglobin into "hemiglobin" or methemoglobin, which cannot bind oxygen. Hemoglobin in normal red blood cells is protected by a reduction system to keep this from happening. Nitric oxide is capable of converting a small fraction of hemoglobin to methemoglobin in red blood cells. The latter reaction is a remnant activity of the more ancient nitric oxide dioxygenase function of globins.
Allosteric
Carbon dioxide occupies a different binding site on the hemoglobin. In the tissues, where the carbon dioxide concentration is higher, carbon dioxide binds to an allosteric site on hemoglobin, facilitating unloading of oxygen from hemoglobin and ultimately the removal of carbon dioxide from the body after the oxygen has been released to tissues undergoing metabolism. This increased affinity for carbon dioxide of venous (deoxygenated) blood is known as the Haldane effect. Through the enzyme carbonic anhydrase, carbon dioxide reacts with water to give carbonic acid, which decomposes into bicarbonate and protons:
CO2 + H2O → H2CO3 → HCO3− + H+
Hence, blood with high carbon dioxide levels is also lower in pH (more acidic). Hemoglobin can bind protons and carbon dioxide, which causes a conformational change in the protein and facilitates the release of oxygen. Protons bind at various places on the protein, while carbon dioxide binds at the α-amino group. Carbon dioxide binds to hemoglobin and forms carbaminohemoglobin. This decrease in hemoglobin's affinity for oxygen by the binding of carbon dioxide and acid is known as the Bohr effect. The Bohr effect favors the T state rather than the R state (it shifts the O2-saturation curve to the right). Conversely, when the carbon dioxide levels in the blood decrease (i.e., in the lung capillaries), carbon dioxide and protons are released from hemoglobin, increasing the oxygen affinity of the protein. A reduction in the total binding capacity of hemoglobin to oxygen (i.e. shifting the curve down, not just to the right) due to reduced pH is called the Root effect. This is seen in bony fish.
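The Bohr effect can be illustrated numerically as a pH-dependent shift of the half-saturation pressure P50. The sketch below assumes a Bohr coefficient (Δlog10 P50 per unit ΔpH) of about −0.48, a value commonly cited for human blood; the exact coefficient varies with CO2 and 2,3-BPG levels, so this is an approximation for illustration only.

```python
def p50_with_bohr(ph, p50_ref=26.8, ph_ref=7.4, bohr_coeff=-0.48):
    """Shift the half-saturation pressure P50 (mmHg) with blood pH.
    bohr_coeff ~ -0.48 is an assumed, commonly cited value: a fall in pH
    (more CO2/acid) raises P50, i.e. shifts the O2 curve to the right."""
    return p50_ref * 10 ** (bohr_coeff * (ph - ph_ref))

print(round(p50_with_bohr(7.4), 1))  # 26.8  (reference)
print(round(p50_with_bohr(7.2), 1))  # ~33   (acidic tissue: right shift, easier unloading)
print(round(p50_with_bohr(7.6), 1))  # ~21.5 (alkaline: left shift, tighter binding)
```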
Hemoglobin must also release the oxygen it binds; otherwise, there would be no point in binding it. The sigmoidal binding curve of hemoglobin makes it efficient at loading O2 in the lungs and efficient at unloading O2 in the tissues.
In people acclimated to high altitudes, the concentration of 2,3-Bisphosphoglycerate (2,3-BPG) in the blood is increased, which allows these individuals to deliver a larger amount of oxygen to tissues under conditions of lower oxygen tension. This phenomenon, where molecule Y affects the binding of molecule X to a transport molecule Z, is called a heterotropic allosteric effect. Hemoglobin in organisms at high altitudes has also adapted such that it has less of an affinity for 2,3-BPG and so the protein will be shifted more towards its R state. In its R state, hemoglobin will bind oxygen more readily, thus allowing organisms to perform the necessary metabolic processes when oxygen is present at low partial pressures.
Animals other than humans use different molecules to bind to hemoglobin and change its O2 affinity under unfavorable conditions. Fish use both ATP and GTP. These bind to a phosphate "pocket" on the fish hemoglobin molecule, which stabilizes the tense state and therefore decreases oxygen affinity. GTP reduces hemoglobin oxygen affinity much more than ATP, which is thought to be due to an extra hydrogen bond formed that further stabilizes the tense state. Under hypoxic conditions, the concentration of both ATP and GTP is reduced in fish red blood cells to increase oxygen affinity.
A variant hemoglobin, called fetal hemoglobin (HbF, α2γ2), is found in the developing fetus, and binds oxygen with greater affinity than adult hemoglobin. This means that the oxygen binding curve for fetal hemoglobin is left-shifted (i.e., a higher percentage of hemoglobin has oxygen bound to it at lower oxygen tension), in comparison to that of adult hemoglobin. As a result, fetal blood in the placenta is able to take oxygen from maternal blood.
Hemoglobin also carries nitric oxide (NO) in the globin part of the molecule. This improves oxygen delivery in the periphery and contributes to the control of respiration. NO binds reversibly to a specific cysteine residue in globin; the binding depends on the state (R or T) of the hemoglobin. The resulting S-nitrosylated hemoglobin influences various NO-related activities such as the control of vascular resistance, blood pressure and respiration. NO is not released in the cytoplasm of red blood cells but transported out of them by an anion exchanger called AE1.
Types of hemoglobin in humans
Hemoglobin variants are a part of the normal embryonic and fetal development. They may also be pathologic mutant forms of hemoglobin in a population, caused by variations in genetics. Some well-known hemoglobin variants, such as sickle-cell anemia, are responsible for diseases and are considered hemoglobinopathies. Other variants cause no detectable pathology, and are thus considered non-pathological variants.
In embryos:
Gower 1 (ζ2ε2).
Gower 2 (α2ε2).
Hemoglobin Portland I (ζ2γ2).
Hemoglobin Portland II (ζ2β2).
In fetuses:
Hemoglobin F (α2γ2).
In neonates (newborns immediately after birth):
Hemoglobin A (adult hemoglobin) (α2β2) – The most common form, with a normal amount of over 95%
Hemoglobin A2 (α2δ2) – δ chain synthesis begins late in the third trimester and, in adults, it has a normal range of 1.5–3.5%
Hemoglobin F (fetal hemoglobin) (α2γ2) – In adults Hemoglobin F is restricted to a limited population of red cells called F-cells. However, the level of Hb F can be elevated in persons with sickle-cell disease and beta-thalassemia.
Abnormal forms that occur in diseases:
Hemoglobin D – (α2βD2) – A variant form of hemoglobin.
Hemoglobin H (β4) – A variant form of hemoglobin, formed by a tetramer of β chains, which may be present in variants of α thalassemia.
Hemoglobin Barts (γ4) – A variant form of hemoglobin, formed by a tetramer of γ chains, which may be present in variants of α thalassemia.
Hemoglobin S (α2βS2) – A variant form of hemoglobin found in people with sickle cell disease. There is a variation in the β-chain gene, causing a change in the properties of hemoglobin, which results in sickling of red blood cells.
Hemoglobin C (α2βC2) – Another variant due to a variation in the β-chain gene. This variant causes a mild chronic hemolytic anemia.
Hemoglobin E (α2βE2) – Another variant due to a variation in the β-chain gene. This variant causes a mild chronic hemolytic anemia.
Hemoglobin AS – A heterozygous form causing sickle cell trait with one adult gene and one sickle cell disease gene
Hemoglobin SC disease – A compound heterozygous form with one sickle gene and another encoding hemoglobin C.
Hemoglobin Hopkins-2 – A variant form of hemoglobin that is sometimes viewed in combination with hemoglobin S to produce sickle cell disease.
Degradation in vertebrate animals
When red blood cells reach the end of their life due to aging or defects, they are removed from the circulation by the phagocytic activity of macrophages in the spleen or the liver or hemolyze within the circulation. Free hemoglobin is then cleared from the circulation via the hemoglobin transporter CD163, which is exclusively expressed on monocytes or macrophages. Within these cells the hemoglobin molecule is broken up, and the iron gets recycled. This process also produces one molecule of carbon monoxide for every molecule of heme degraded. Heme degradation is the only natural source of carbon monoxide in the human body, and is responsible for the normal blood levels of carbon monoxide in people breathing normal air.
The other major final product of heme degradation is bilirubin. Increased levels of this chemical are detected in the blood if red blood cells are being destroyed more rapidly than usual. Improperly degraded hemoglobin protein or hemoglobin that has been released from the blood cells too rapidly can clog small blood vessels, especially the delicate blood filtering vessels of the kidneys, causing kidney damage. Iron is removed from heme and salvaged for later use; it is stored as hemosiderin or ferritin in tissues and transported in plasma by the beta globulin transferrin. When the porphyrin ring is broken up, the fragments are normally secreted as a yellow pigment called bilirubin, which is secreted into the intestines as bile. The intestines metabolize bilirubin into urobilinogen. Urobilinogen leaves the body in faeces, in a pigment called stercobilin. The globin is metabolized into amino acids that are then released into circulation.
Diseases related to hemoglobin
Hemoglobin deficiency can be caused either by a decreased amount of hemoglobin molecules, as in anemia, or by decreased ability of each molecule to bind oxygen at the same partial pressure of oxygen. Hemoglobinopathies (genetic defects resulting in abnormal structure of the hemoglobin molecule) may cause both. In any case, hemoglobin deficiency decreases blood oxygen-carrying capacity. Hemoglobin deficiency is, in general, strictly distinguished from hypoxemia, defined as decreased partial pressure of oxygen in blood, although both are causes of hypoxia (insufficient oxygen supply to tissues).
Other common causes of low hemoglobin include loss of blood, nutritional deficiency, bone marrow problems, chemotherapy, kidney failure, or abnormal hemoglobin (such as that of sickle-cell disease).
The ability of each hemoglobin molecule to carry oxygen is normally modified by altered blood pH or CO2, causing an altered oxygen–hemoglobin dissociation curve. However, it can also be pathologically altered in, e.g., carbon monoxide poisoning.
Decrease of hemoglobin, with or without an absolute decrease of red blood cells, leads to symptoms of anemia. Anemia has many different causes, although iron deficiency and its resultant iron deficiency anemia are the most common causes in the Western world. As absence of iron decreases heme synthesis, red blood cells in iron deficiency anemia are hypochromic (lacking the red hemoglobin pigment) and microcytic (smaller than normal). Other anemias are rarer. In hemolysis (accelerated breakdown of red blood cells), associated jaundice is caused by the hemoglobin metabolite bilirubin, and the circulating hemoglobin can cause kidney failure.
Some mutations in the globin chain are associated with the hemoglobinopathies, such as sickle-cell disease and thalassemia. Other mutations, as discussed at the beginning of the article, are benign and are referred to merely as hemoglobin variants.
There is a group of genetic disorders, known as the porphyrias, that are characterized by errors in the metabolic pathways of heme synthesis. King George III of the United Kingdom was probably the most famous porphyria sufferer.
To a small extent, hemoglobin A slowly combines with glucose at the terminal valine (an alpha amino acid) of each β chain. The resulting molecule is often referred to as Hb A1c, a glycated hemoglobin. The binding of glucose to amino acids in the hemoglobin takes place spontaneously (without the help of an enzyme) in many proteins, and is not known to serve a useful purpose. However, as the concentration of glucose in the blood increases, the percentage of Hb A that turns into Hb A1c increases. In diabetics whose glucose usually runs high, the percent Hb A1c also runs high. Because of the slow rate of Hb A combination with glucose, the Hb A1c percentage reflects a weighted average of blood glucose levels over the lifetime of red cells, which is approximately 120 days. The levels of glycated hemoglobin are therefore measured in order to monitor the long-term control of the chronic disease of type 2 diabetes mellitus (T2DM). Poor control of T2DM results in high levels of glycated hemoglobin in the red blood cells. The normal reference range is approximately 4.0–5.9%. Though difficult to obtain, values less than 7% are recommended for people with T2DM. Levels greater than 9% are associated with poor glycemic control, and levels greater than 12% are associated with very poor control. Diabetics who keep their glycated hemoglobin levels close to 7% have a much better chance of avoiding the complications that may accompany diabetes than those whose levels are 8% or higher. In addition, increased glycation of hemoglobin increases its affinity for oxygen, therefore preventing its release at the tissue and inducing a level of hypoxia in extreme cases.
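The relationship between Hb A1c and average blood glucose is often expressed with the linear regression reported by the ADAG study (estimated average glucose, eAG). The snippet below uses that published regression purely as an illustration; it is a population-level estimate, not a substitute for direct glucose measurement.

```python
def estimated_average_glucose(hba1c_percent):
    """Estimated average glucose in mg/dL from HbA1c (%) using the ADAG regression
    eAG = 28.7 * HbA1c - 46.7 (an approximate, population-derived relationship)."""
    return 28.7 * hba1c_percent - 46.7

for a1c in (5.0, 6.0, 7.0, 9.0, 12.0):
    print(f"HbA1c {a1c:.1f}% -> eAG ~{estimated_average_glucose(a1c):.0f} mg/dL")
# HbA1c of 7.0% corresponds to roughly 154 mg/dL average glucose, consistent with
# the treatment targets discussed above.
```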
Elevated levels of hemoglobin are associated with increased numbers or sizes of red blood cells, called polycythemia. This elevation may be caused by congenital heart disease, cor pulmonale, pulmonary fibrosis, too much erythropoietin, or polycythemia vera. High hemoglobin levels may also be caused by exposure to high altitudes, smoking, dehydration (artificially by concentrating Hb), advanced lung disease and certain tumors.
Diagnostic uses
Hemoglobin concentration measurement is among the most commonly performed blood tests, usually as part of a complete blood count. For example, it is typically tested before or after blood donation. Results are reported in g/L, g/dL or mol/L. 1 g/dL equals about 0.6206 mmol/L, although the latter units are not used as often due to uncertainty regarding the polymeric state of the molecule. This conversion factor, using the single globin unit molecular weight of 16,000 Da, is more common for hemoglobin concentration in blood. For MCHC (mean corpuscular hemoglobin concentration) the conversion factor 0.155, which uses the tetramer weight of 64,500 Da, is more common. Normal levels are:
Men: 13.8 to 18.0 g/dL (138 to 180 g/L, or 8.56 to 11.17 mmol/L)
Women: 12.1 to 15.1 g/dL (121 to 151 g/L, or 7.51 to 9.37 mmol/L)
Children: 11 to 16 g/dL (110 to 160 g/L, or 6.83 to 9.93 mmol/L)
Pregnant women: 11 to 14 g/dL (110 to 140 g/L, or 6.83 to 8.69 mmol/L) (9.5 to 15 usual value during pregnancy)
Normal values of hemoglobin in the 1st and 3rd trimesters of pregnant women must be at least 11 g/dL and at least 10.5 g/dL during the 2nd trimester.
Dehydration or hyperhydration can greatly influence measured hemoglobin levels. Albumin can indicate hydration status.
If the concentration is below normal, this is called anemia. Anemias are classified by the size of red blood cells, the cells that contain hemoglobin in vertebrates. The anemia is called "microcytic" if red cells are small, "macrocytic" if they are large, and "normocytic" otherwise.
Hematocrit, the proportion of blood volume occupied by red blood cells, is typically about three times the hemoglobin concentration measured in g/dL. For example, if the hemoglobin is measured at 17 g/dL, that compares with a hematocrit of 51%.
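The conversion factors above, and the hematocrit rule of thumb, lend themselves to a short helper. The sketch below simply applies the factors quoted in this section (0.6206 mmol/L per g/dL on the monomer basis, 0.155 on the tetramer basis used for MCHC, and hematocrit ≈ 3 × hemoglobin in g/dL); it is an arithmetic illustration, not a diagnostic tool.

```python
def hb_gdl_to_mmol_per_l(hb_g_dl, per_tetramer=False):
    """Convert hemoglobin concentration from g/dL to mmol/L.
    Uses 0.6206 (monomer, MW ~16,000 Da) or 0.155 (tetramer, MW ~64,500 Da),
    the factors quoted in the text."""
    factor = 0.155 if per_tetramer else 0.6206
    return hb_g_dl * factor

def estimated_hematocrit_percent(hb_g_dl):
    """Rule-of-thumb estimate: hematocrit (%) is roughly three times Hb in g/dL."""
    return 3.0 * hb_g_dl

hb = 17.0  # g/dL
print(round(hb_gdl_to_mmol_per_l(hb), 2))                     # ~10.55 mmol/L (monomer basis)
print(round(hb_gdl_to_mmol_per_l(hb, per_tetramer=True), 2))  # ~2.64 mmol/L (tetramer basis)
print(round(estimated_hematocrit_percent(hb)))                # ~51 %, as in the example above
```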
Laboratory hemoglobin test methods require a blood sample (arterial, venous, or capillary) and analysis on hematology analyzer and CO-oximeter. Additionally, a new noninvasive hemoglobin (SpHb) test method called Pulse CO-Oximetry is also available with comparable accuracy to invasive methods.
Concentrations of oxy- and deoxyhemoglobin can be measured continuously, regionally and noninvasively using near-infrared spectroscopy (NIRS). NIRS can be used both on the head and on muscles. This technique is often used for research in e.g. elite sports training, ergonomics, rehabilitation, patient monitoring, neonatal research, functional brain monitoring, brain–computer interfaces, urology (bladder contraction), neurology (neurovascular coupling) and more.
Hemoglobin mass can be measured in humans using the non-radioactive, carbon monoxide (CO) rebreathing technique that has been used for more than 100 years. With this technique, a small volume of pure CO gas is inhaled and rebreathed for a few minutes. During rebreathing, CO binds to hemoglobin present in red blood cells. Based on the increase in blood CO after the rebreathing period, the hemoglobin mass can be determined through the dilution principle.
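The dilution principle behind CO rebreathing can be sketched as a simple mass balance: the administered CO distributes over the circulating hemoglobin, and the rise in carboxyhemoglobin indicates how much hemoglobin it was diluted into. The version below is deliberately simplified; it assumes a Hüfner-type binding capacity of about 1.39 mL CO per gram of hemoglobin and ignores CO left in the rebreathing circuit or bound to myoglobin, corrections that the published optimized method includes.

```python
def hemoglobin_mass_g(co_absorbed_ml, delta_hbco_fraction, ml_co_per_g_hb=1.39):
    """Estimate total circulating hemoglobin mass (g) from CO dilution.
    co_absorbed_ml: volume of CO taken up by the blood (mL, STPD);
    delta_hbco_fraction: rise in carboxyhemoglobin as a fraction (e.g. 0.08 for 8 %-points);
    ml_co_per_g_hb: assumed binding capacity (Huefner-type constant, ~1.39 mL/g)."""
    return co_absorbed_ml / (delta_hbco_fraction * ml_co_per_g_hb)

# Example: 100 mL of CO absorbed raises HbCO by 8 percentage points.
print(round(hemoglobin_mass_g(100.0, 0.08)))   # ~899 g of circulating hemoglobin
```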
Long-term control of blood sugar concentration can be measured by the concentration of Hb A1c. Measuring it directly would require many samples because blood sugar levels vary widely through the day. Hb A1c is the product of the irreversible reaction of hemoglobin A with glucose. A higher glucose concentration results in more Hb A1c. Because the reaction is slow, the Hb A1c proportion represents the glucose level in blood averaged over the lifespan of red blood cells, which is typically about 120 days. An Hb A1c proportion of 6.0% or less shows good long-term glucose control, while values above 7.0% are elevated. This test is especially useful for diabetics.
The functional magnetic resonance imaging (fMRI) machine uses the signal from deoxyhemoglobin, which is sensitive to magnetic fields since it is paramagnetic. Combined measurement with NIRS shows good correlation with both the oxy- and deoxyhemoglobin signal compared to the BOLD signal.
Athletic tracking and self-tracking uses
Hemoglobin can be tracked noninvasively, to build an individual data set tracking the hemoconcentration and hemodilution effects of daily activities for better understanding of sports performance and training. Athletes are often concerned about endurance and intensity of exercise. The sensor uses light-emitting diodes that emit red and infrared light through the tissue to a light detector, which then sends a signal to a processor to calculate the absorption of light by the hemoglobin protein. This sensor is similar to a pulse oximeter, which consists of a small sensing device that clips to the finger.
Analogues in non-vertebrate organisms
A variety of oxygen-transport and -binding proteins exist in organisms throughout the animal and plant kingdoms. Organisms including bacteria, protozoans, and fungi all have hemoglobin-like proteins whose known and predicted roles include the reversible binding of gaseous ligands. Since many of these proteins contain globins and the heme moiety (iron in a flat porphyrin support), they are often called hemoglobins, even if their overall tertiary structure is very different from that of vertebrate hemoglobin. In particular, the distinction of "myoglobin" and hemoglobin in lower animals is often impossible, because some of these organisms do not contain muscles. Or, they may have a recognizable separate circulatory system but not one that deals with oxygen transport (for example, many insects and other arthropods). In all these groups, heme/globin-containing molecules (even monomeric globin ones) that deal with gas-binding are referred to as oxyhemoglobins. In addition to dealing with transport and sensing of oxygen, they may also deal with NO, CO2, sulfide compounds, and even O2 scavenging in environments that must be anaerobic. They may even deal with detoxification of chlorinated materials in a way analogous to heme-containing P450 enzymes and peroxidases.
The structure of hemoglobins varies across species. Hemoglobin occurs in all kingdoms of organisms, but not in all organisms. Primitive species such as bacteria, protozoa, algae, and plants often have single-globin hemoglobins. Many nematode worms, molluscs, and crustaceans contain very large multisubunit molecules, much larger than those in vertebrates. In particular, chimeric hemoglobins found in fungi and giant annelids may contain both globin and other types of proteins.
One of the most striking occurrences and uses of hemoglobin in organisms is in the giant tube worm (Riftia pachyptila, also called Vestimentifera), which can reach 2.4 meters in length and populates ocean volcanic vents. Instead of a digestive tract, these worms contain a population of bacteria constituting half the organism's weight. The bacteria oxidize H2S from the vent with O2 from the water to produce energy to make food from H2O and CO2. The worms' upper end is a deep-red fan-like structure ("plume"), which extends into the water and absorbs H2S and O2 for the bacteria, and CO2 for use as synthetic raw material similar to photosynthetic plants. The structures are bright red due to their content of several extraordinarily complex hemoglobins that have up to 144 globin chains, each including associated heme structures. These hemoglobins are remarkable for being able to carry oxygen in the presence of sulfide, and even to carry sulfide, without being completely "poisoned" or inhibited by it as hemoglobins in most other species are.
Other oxygen-binding proteins
Myoglobin – Found in the muscle tissue of many vertebrates, including humans; it gives muscle tissue a distinct red or dark gray color. It is very similar to hemoglobin in structure and sequence, but is not a tetramer; instead, it is a monomer that lacks cooperative binding. It is used to store oxygen rather than transport it.
Hemocyanin – The second most common oxygen-transporting protein found in nature, it occurs in the blood of many arthropods and molluscs. It uses copper prosthetic groups instead of iron heme groups and is blue in color when oxygenated.
Hemerythrin – Some marine invertebrates and a few species of annelid use this iron-containing non-heme protein to carry oxygen in their blood. It appears pink/violet when oxygenated, clear when not.
Chlorocruorin – Found in many annelids, it is very similar to erythrocruorin, but the heme group is significantly different in structure. It appears green when deoxygenated and red when oxygenated.
Vanabins – Also known as vanadium chromagens, they are found in the blood of sea squirts. They were once hypothesized to use the metal vanadium as an oxygen-binding prosthetic group. However, although they do contain vanadium by preference, they apparently bind little oxygen, and thus have some other function, which has not been elucidated (sea squirts also contain some hemoglobin). They may act as toxins.
Erythrocruorin – Found in many annelids, including earthworms, it is a giant free-floating blood protein containing many dozens—possibly hundreds—of iron- and heme-bearing protein subunits bound together into a single protein complex with a molecular mass greater than 3.5 million daltons.
Leghemoglobin – In leguminous plants, such as alfalfa or soybeans, the nitrogen-fixing bacteria in the roots are protected from oxygen by this iron heme-containing oxygen-binding protein. The specific enzyme protected is nitrogenase, which is unable to reduce nitrogen gas in the presence of free oxygen.
Coboglobin – A synthetic cobalt-based porphyrin. Coboprotein would appear colorless when oxygenated, but yellow when deoxygenated, as in veins.
Presence in nonerythroid cells
Some nonerythroid cells (i.e., cells other than the red blood cell line) contain hemoglobin. In the brain, these include the A9 dopaminergic neurons in the substantia nigra, astrocytes in the cerebral cortex and hippocampus, and in all mature oligodendrocytes. It has been suggested that brain hemoglobin in these cells may enable the "storage of oxygen to provide a homeostatic mechanism in anoxic conditions, which is especially important for A9 DA neurons that have an elevated metabolism with a high requirement for energy production". It has been noted further that "A9 dopaminergic neurons may be at particular risk of anoxic degeneration since in addition to their high mitochondrial activity they are under intense oxidative stress caused by the production of hydrogen peroxide via autoxidation and/or monoamine oxidase (MAO)-mediated deamination of dopamine and the subsequent reaction of accessible ferrous iron to generate highly toxic hydroxyl radicals". This may explain the risk of degeneration of these cells in Parkinson's disease. The hemoglobin-derived iron in these cells is not the cause of the post-mortem darkness of these cells (origin of the Latin name, substantia nigra), but rather is due to neuromelanin.
Outside the brain, hemoglobin has non-oxygen-carrying functions as an antioxidant and a regulator of iron metabolism in macrophages, alveolar cells, and mesangial cells in the kidney.
In history, art, and music
Historically, an association between the color of blood and rust appears in the linking of the planet Mars with the Roman god of war, since the planet is orange-red, which reminded the ancients of blood. Although the color of the planet is due to iron compounds in combination with oxygen in the Martian soil, it is a common misconception that the iron in hemoglobin and its oxides gives blood its red color. The color is actually due to the porphyrin moiety of hemoglobin to which the iron is bound, not the iron itself, although the ligation and redox state of the iron can influence the pi to pi* or n to pi* electronic transitions of the porphyrin and hence its optical characteristics.
Artist Julian Voss-Andreae created a sculpture called Heart of Steel (Hemoglobin) in 2005, based on the protein's backbone. The sculpture was made from glass and weathering steel. The intentional rusting of the initially shiny work of art mirrors hemoglobin's fundamental chemical reaction of oxygen binding to iron.
Montreal artist Nicolas Baier created Lustre (Hémoglobine), a sculpture in stainless steel that shows the structure of the hemoglobin molecule. It is displayed in the atrium of McGill University Health Centre's research centre in Montreal. The sculpture measures about 10 metres × 10 metres × 10 metres.
See also
Carbaminohemoglobin (Hb associated with CO2)
Carboxyhemoglobin (Hb associated with CO)
Chlorophyll (Mg heme)
Complete blood count
Delta globin
Hemoglobinometer
Hemoprotein
Methemoglobin (ferric Hb, or ferrihemoglobin)
Oxyhemoglobin (with diatomic oxygen, colored blood-red)
Tegillarca granosa - "blood clam"
Vaska's complex – iridium organometallic complex notable for its ability to bind to O2 reversibly
References
Notes
Sources
Further reading
Hazelwood, Loren (2001) Can't Live Without It: The story of hemoglobin in sickness and in health, Nova Science Publishers
External links
National Anemia Action Council at anemia.org
New hemoglobin type causes mock diagnosis with pulse oxymeters at www.life-of-science.net
Animation of hemoglobin: from deoxy to oxy form at vimeo.com
Hemoglobins
Equilibrium chemistry
Respiratory physiology | Hemoglobin | Chemistry | 11,313 |
241,223 | https://en.wikipedia.org/wiki/Poisson%27s%20ratio | In materials science and solid mechanics, Poisson's ratio (symbol: ν (nu)) is a measure of the Poisson effect, the deformation (expansion or contraction) of a material in directions perpendicular to the specific direction of loading. The value of Poisson's ratio is the negative of the ratio of transverse strain to axial strain. For small values of these changes, ν is the amount of transversal elongation divided by the amount of axial compression. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. For soft materials, such as rubber, where the bulk modulus is much higher than the shear modulus, Poisson's ratio is near 0.5. For open-cell polymer foams, Poisson's ratio is near zero, since the cells tend to collapse in compression. Many typical solids have Poisson's ratios in the range of 0.2 to 0.3. The ratio is named after the French mathematician and physicist Siméon Poisson.
Origin
Poisson's ratio is a measure of the Poisson effect, the phenomenon in which a material tends to expand in directions perpendicular to the direction of compression. Conversely, if the material is stretched rather than compressed, it usually tends to contract in the directions transverse to the direction of stretching. It is a common observation that when a rubber band is stretched, it becomes noticeably thinner. Again, the Poisson ratio will be the ratio of relative contraction to relative expansion and will have the same value as above. In certain rare cases, a material will actually shrink in the transverse direction when compressed (or expand when stretched), which will yield a negative value of the Poisson ratio.
The Poisson's ratio of a stable, isotropic, linear elastic material must be between −1.0 and +0.5 because of the requirement for Young's modulus, the shear modulus and bulk modulus to have positive values. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. A perfectly incompressible isotropic material deformed elastically at small strains would have a Poisson's ratio of exactly 0.5. Most steels and rigid polymers when used within their design limits (before yield) exhibit values of about 0.3, increasing to 0.5 for post-yield deformation which occurs largely at constant volume. Rubber has a Poisson ratio of nearly 0.5. Cork's Poisson ratio is close to 0, showing very little lateral expansion when compressed and glass is between 0.18 and 0.30. Some materials, e.g. some polymer foams, origami folds, and certain cells can exhibit negative Poisson's ratio, and are referred to as auxetic materials. If these auxetic materials are stretched in one direction, they become thicker in the perpendicular direction. In contrast, some anisotropic materials, such as carbon nanotubes, zigzag-based folded sheet materials, and honeycomb auxetic metamaterials to name a few, can exhibit one or more Poisson's ratios above 0.5 in certain directions.
Assuming that the material is stretched or compressed in only one direction (the x axis in the diagram below):

ν = −dε_trans/dε_axial = −dε_y/dε_x = −dε_z/dε_x

where
ν is the resulting Poisson's ratio,
ε_trans is transverse strain,
ε_axial is axial strain,
and positive strain indicates extension and negative strain indicates contraction.
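As a numerical illustration of the definition (using made-up strain values), a bar stretched axially by 0.1% that contracts laterally by 0.03% has a Poisson's ratio of 0.3:

```python
def poissons_ratio(transverse_strain, axial_strain):
    """nu = -(transverse strain)/(axial strain); extension positive, contraction negative."""
    return -transverse_strain / axial_strain

# Bar stretched by 0.1% along its axis, contracting by 0.03% across it.
print(poissons_ratio(transverse_strain=-0.0003, axial_strain=0.001))  # 0.3
```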
Poisson's ratio from geometry changes
Length change
For a cube stretched in the x-direction (see Figure 1) with a length increase of ΔL in the x-direction, and a length decrease of ΔL′ in the y- and z-directions, the infinitesimal diagonal strains are given by

dε_x = dx/x,  dε_y = dy/y,  dε_z = dz/z.

If Poisson's ratio is constant through deformation, integrating these expressions and using the definition of Poisson's ratio gives

−ν ∫ (from L to L+ΔL) dx/x = ∫ (from L to L−ΔL′) dy/y.

Solving and exponentiating, the relationship between ΔL and ΔL′ is then

(1 + ΔL/L)^(−ν) = 1 − ΔL′/L.

For very small values of ΔL and ΔL′, the first-order approximation yields:

ν ≈ ΔL′/ΔL.
Volumetric change
The relative change of volume ΔV/V of a cube due to the stretch of the material can now be calculated. Since V = L³ and

V + ΔV = (L + ΔL)(L − ΔL′)²,

one can derive

ΔV/V = (1 + ΔL/L)(1 − ΔL′/L)² − 1.

Using the above derived relationship between ΔL and ΔL′:

ΔV/V = (1 + ΔL/L)^(1−2ν) − 1,

and for very small values of ΔL and ΔL′, the first-order approximation yields:

ΔV/V ≈ (1 − 2ν) ΔL/L.
For isotropic materials we can use Lamé's relation

ν = 1/2 − E/(6K),

where K is bulk modulus and E is Young's modulus.
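A small script makes the link between these moduli and the volume change concrete. It assumes the isotropic relation just given (ν = 1/2 − E/(6K)) and the volumetric results above; the values for steel are used purely as an example.

```python
def poisson_from_e_and_k(youngs_modulus, bulk_modulus):
    """Isotropic Lame relation: nu = 1/2 - E/(6K)."""
    return 0.5 - youngs_modulus / (6.0 * bulk_modulus)

def relative_volume_change(nu, axial_strain, exact=True):
    """Volume change of a uniaxially stretched bar.
    exact: (1 + dL/L)**(1 - 2*nu) - 1; otherwise the first-order form (1 - 2*nu)*dL/L."""
    if exact:
        return (1.0 + axial_strain) ** (1.0 - 2.0 * nu) - 1.0
    return (1.0 - 2.0 * nu) * axial_strain

# Example values roughly representative of steel: E ~ 200 GPa, K ~ 160 GPa.
nu = poisson_from_e_and_k(200e9, 160e9)
print(round(nu, 3))                                              # ~0.292
print(round(relative_volume_change(nu, 0.001), 6))               # ~0.000417 (exact)
print(round(relative_volume_change(nu, 0.001, exact=False), 6))  # ~0.000417 (first-order, nearly identical at small strain)
```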
Width change
If a rod with diameter d (or width, or thickness) and length L is subject to tension so that its length will change by ΔL then its diameter d will change by:

Δd = −d ν ΔL/L.

The above formula is true only in the case of small deformations; if deformations are large then the following (more precise) formula can be used:

Δd = −d (1 − (1 + ΔL/L)^(−ν)),

where
d is original diameter,
Δd is rod diameter change,
ν is Poisson's ratio,
L is original length, before stretch,
ΔL is the change of length.

The value Δd is negative because the diameter decreases with increasing length.
Characteristic materials
Isotropic
For a linear isotropic material subjected only to compressive (i.e. normal) forces, the deformation of a material in the direction of one axis will produce a deformation of the material along the other axis in three dimensions. Thus it is possible to generalize Hooke's Law (for compressive forces) into three dimensions:

ε_x = (1/E)[σ_x − ν(σ_y + σ_z)]
ε_y = (1/E)[σ_y − ν(σ_x + σ_z)]
ε_z = (1/E)[σ_z − ν(σ_x + σ_y)]

where:
ε_x, ε_y, and ε_z are strain in the direction of the x, y and z axes,
σ_x, σ_y, and σ_z are stress in the direction of the x, y and z axes,
E is Young's modulus (the same in all directions for isotropic materials),
ν is Poisson's ratio (the same in all directions for isotropic materials);

these equations can be all synthesized in the following:

ε_i = (1/E)[σ_i(1 + ν) − ν(σ_x + σ_y + σ_z)],  i = x, y, z.

In the most general case, also shear stresses will hold as well as normal stresses, and the full generalization of Hooke's law is given by:

ε_ij = (1/E)[(1 + ν)σ_ij − ν δ_ij Σ_k σ_kk],

where δ_ij is the Kronecker delta. The Einstein notation is usually adopted:

σ_kk ≡ Σ_k σ_kk,

to write the equation simply as:

ε_ij = (1/E)[(1 + ν)σ_ij − ν δ_ij σ_kk].
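The tensor form above translates directly into a few lines of code. The sketch below applies ε = [(1 + ν)σ − ν tr(σ) I]/E to an arbitrary symmetric stress tensor; the material constants are illustrative.

```python
import numpy as np

def strain_from_stress(sigma, youngs_modulus, nu):
    """Isotropic Hooke's law: eps_ij = ((1 + nu)*sigma_ij - nu*delta_ij*sigma_kk) / E."""
    sigma = np.asarray(sigma, dtype=float)
    return ((1.0 + nu) * sigma - nu * np.trace(sigma) * np.eye(3)) / youngs_modulus

# Uniaxial tension of 100 MPa along x for a material with E = 200 GPa, nu = 0.3.
sigma = np.diag([100e6, 0.0, 0.0])
eps = strain_from_stress(sigma, youngs_modulus=200e9, nu=0.3)
print(np.round(eps, 6))
# diag(0.0005, -0.00015, -0.00015): axial strain 0.05 %, lateral strains -0.015 %,
# i.e. -nu times the axial strain, as expected.
```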
Anisotropic
For anisotropic materials, the Poisson ratio depends on the direction of extension and transverse deformation:

ν(n, m) = −E(n) s_ijkl n_i n_j m_k m_l,   with E(n) = (s_ijkl n_i n_j n_k n_l)^(−1).

Here ν(n, m) is Poisson's ratio, E(n) is Young's modulus in the direction n, s_ijkl is the elastic compliance tensor, n is a unit vector directed along the direction of extension, and m is a unit vector directed perpendicular to the direction of extension. Poisson's ratio has a different number of special directions depending on the type of anisotropy.
Orthotropic
Orthotropic materials have three mutually perpendicular planes of symmetry in their material properties. An example is wood, which is most stiff (and strong) along the grain, and less so in the other directions.
Then Hooke's law can be expressed in matrix form as

[ε_xx ]   [  1/E_x      −ν_yx/E_y   −ν_zx/E_z      0         0         0     ] [σ_xx]
[ε_yy ]   [ −ν_xy/E_x     1/E_y     −ν_zy/E_z      0         0         0     ] [σ_yy]
[ε_zz ] = [ −ν_xz/E_x    −ν_yz/E_y     1/E_z       0         0         0     ] [σ_zz]
[2ε_yz]   [    0            0            0       1/G_yz      0         0     ] [σ_yz]
[2ε_zx]   [    0            0            0         0       1/G_zx      0     ] [σ_zx]
[2ε_xy]   [    0            0            0         0         0       1/G_xy  ] [σ_xy]

where
E_i is the Young's modulus along axis i,
G_ij is the shear modulus in direction j on the plane whose normal is in direction i,
ν_ij is the Poisson ratio that corresponds to a contraction in direction j when an extension is applied in direction i.

The Poisson ratio of an orthotropic material is different in each direction (x, y and z). However, the symmetry of the stress and strain tensors implies that not all the six Poisson's ratios in the equation are independent. There are only nine independent material properties: three elastic moduli, three shear moduli, and three Poisson's ratios. The remaining three Poisson's ratios can be obtained from the relations

ν_yx/E_y = ν_xy/E_x,   ν_zx/E_z = ν_xz/E_x,   ν_zy/E_z = ν_yz/E_y.

From the above relations we can see that if E_x > E_y then ν_xy > ν_yx. The larger ratio (in this case ν_xy) is called the major Poisson ratio while the smaller one (in this case ν_yx) is called the minor Poisson ratio. We can find similar relations between the other Poisson ratios.
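The reciprocal relations are easy to check numerically. The sketch below builds the orthotropic compliance matrix from a set of engineering constants (illustrative values loosely in the range of wood-like or fiber-reinforced materials, not measured data) and derives the minor Poisson's ratios from the major ones.

```python
import numpy as np

def orthotropic_compliance(Ex, Ey, Ez, Gyz, Gzx, Gxy, nu_xy, nu_xz, nu_yz):
    """6x6 compliance matrix (Voigt order xx, yy, zz, yz, zx, xy).
    nu_ij = contraction in j per unit extension in i; the minor ratios follow from
    nu_ji = nu_ij * Ej / Ei, which keeps the matrix symmetric."""
    nu_yx = nu_xy * Ey / Ex
    nu_zx = nu_xz * Ez / Ex
    nu_zy = nu_yz * Ez / Ey
    S = np.zeros((6, 6))
    S[0, :3] = [1/Ex, -nu_yx/Ey, -nu_zx/Ez]
    S[1, :3] = [-nu_xy/Ex, 1/Ey, -nu_zy/Ez]
    S[2, :3] = [-nu_xz/Ex, -nu_yz/Ey, 1/Ez]
    S[3, 3], S[4, 4], S[5, 5] = 1/Gyz, 1/Gzx, 1/Gxy
    return S

# Illustrative constants (Pa): stiff along x, compliant along y and z.
S = orthotropic_compliance(Ex=11e9, Ey=0.9e9, Ez=0.5e9,
                           Gyz=0.2e9, Gzx=0.7e9, Gxy=0.75e9,
                           nu_xy=0.38, nu_xz=0.44, nu_yz=0.47)
print(np.allclose(S, S.T))             # True: the reciprocal relations hold
print(round(0.38 * 0.9e9 / 11e9, 4))   # minor ratio nu_yx ~ 0.0311, much smaller than nu_xy
```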
Transversely isotropic
Transversely isotropic materials have a plane of isotropy in which the elastic properties are isotropic. If we assume that this plane of isotropy is the -plane, then Hooke's law takes the form
where we have used the -plane of isotropy to reduce the number of constants, that is,
.
The symmetry of the stress and strain tensors implies that
This leaves us with six independent constants , , , , , . However, transverse isotropy gives rise to a further constraint between and , which is
Therefore, there are five independent elastic material properties two of which are Poisson's ratios. For the assumed plane of symmetry, the larger of and is the major Poisson ratio. The other major and minor Poisson ratios are equal.
Poisson's ratio values for different materials
Material             Poisson's ratio
rubber               0.4999
gold                 0.42–0.44
saturated clay       0.40–0.49
magnesium            0.252–0.289
titanium             0.265–0.34
copper               0.33
aluminium alloy      0.32
clay                 0.30–0.45
stainless steel      0.30–0.31
steel                0.27–0.30
cast iron            0.21–0.26
sand                 0.20–0.455
concrete             0.1–0.2
glass                0.18–0.3
metallic glasses     0.276–0.409
foam                 0.10–0.50
cork                 0.0
Direction-dependent values for some anisotropic materials:
Nomex honeycomb core (ribbon direction): 0.49, 0.69, 0.01, 2.75, 3.88, 0.01
glass fiber epoxy resin: 0.29, 0.32, 0.06, 0.06, 0.32
Negative Poisson's ratio materials
Some materials known as auxetic materials display a negative Poisson's ratio. When subjected to positive strain in a longitudinal axis, the transverse strain in the material will actually be positive (i.e. it would increase the cross sectional area). For these materials, it is usually due to uniquely oriented, hinged molecular bonds. In order for these bonds to stretch in the longitudinal direction, the hinges must ‘open’ in the transverse direction, effectively exhibiting a positive strain.
This can also be done in a structured way and lead to new aspects in material design as for mechanical metamaterials.
Studies have shown that certain solid wood types display negative Poisson's ratio exclusively during a compression creep test. Initially, the compression creep test shows positive Poisson's ratios, which gradually decrease until they reach negative values. Consequently, this also shows that Poisson's ratio for wood is time-dependent during constant loading, meaning that the strain in the axial and transverse directions do not increase at the same rate.
Media with engineered microstructure may exhibit negative Poisson's ratio. In a simple case auxeticity is obtained removing material and creating a periodic porous media. Lattices can reach lower values of Poisson's ratio, which can be indefinitely close to the limiting value −1 in the isotropic case.
More than three hundred crystalline materials have negative Poisson's ratio. For example, Li, Na, K, Cu, Rb, Ag, Fe, Ni, Co, Cs, Au, Be, Ca, Zn, Sr, Sb, MoS2 and others.
Poisson function
At finite strains, the relationship between the transverse and axial strains and is typically not well described by the Poisson ratio. In fact, the Poisson ratio is often considered a function of the applied strain in the large strain regime. In such instances, the Poisson ratio is replaced by the Poisson function, for which there are several competing definitions. Defining the transverse stretch and axial stretch , where the transverse stretch is a function of the axial stretch, the most common are the Hencky, Biot, Green, and Almansi functions:
Applications of Poisson's effect
One area in which Poisson's effect has a considerable influence is in pressurized pipe flow. When the air or liquid inside a pipe is highly pressurized it exerts a uniform force on the inside of the pipe, resulting in a hoop stress within the pipe material. Due to Poisson's effect, this hoop stress will cause the pipe to increase in diameter and slightly decrease in length. The decrease in length, in particular, can have a noticeable effect upon the pipe joints, as the effect will accumulate for each section of pipe joined in series. A restrained joint may be pulled apart or otherwise prone to failure.
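For a thin-walled pipe this effect can be estimated with textbook formulas. The sketch below assumes the joints transmit no axial load, so the wall carries essentially only the hoop stress σ_h = p·r/t, and the axial strain is then just the Poisson contraction −ν·σ_h/E; the numbers are illustrative, not a design calculation.

```python
def hoop_stress(pressure, radius, wall_thickness):
    """Thin-walled approximation for circumferential (hoop) stress: sigma_h = p*r/t."""
    return pressure * radius / wall_thickness

def axial_shortening(pressure, radius, wall_thickness, youngs_modulus, nu, length):
    """Axial length change of a pipe section whose joints carry no axial load:
    the axial strain is the Poisson contraction -nu*sigma_h/E (negative = shortening)."""
    sigma_h = hoop_stress(pressure, radius, wall_thickness)
    return -nu * sigma_h / youngs_modulus * length

# Illustrative steel water main: 2 MPa internal pressure, 0.5 m radius, 10 mm wall,
# E = 200 GPa, nu = 0.3, 6 m pipe sections.
print(round(hoop_stress(2e6, 0.5, 0.01) / 1e6, 1))                        # 100.0 MPa hoop stress
print(round(axial_shortening(2e6, 0.5, 0.01, 200e9, 0.3, 6.0) * 1e3, 2))  # ~-0.9 mm per section
```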
Another area of application for Poisson's effect is in the realm of structural geology. Rocks, like most materials, are subject to Poisson's effect while under stress. In a geological timescale, excessive erosion or sedimentation of Earth's crust can either create or remove large vertical stresses upon the underlying rock. This rock will expand or contract in the vertical direction as a direct result of the applied stress, and it will also deform in the horizontal direction as a result of Poisson's effect. This change in strain in the horizontal direction can affect or form joints and dormant stresses in the rock.
Although cork was historically chosen to seal wine bottles for other reasons (including its inert nature, impermeability, flexibility, sealing ability, and resilience), cork's Poisson's ratio of zero provides another advantage. As the cork is inserted into the bottle, the upper part, which is not yet inserted, does not expand in diameter as it is compressed axially. The force needed to insert a cork into a bottle arises only from the friction between the cork and the bottle due to the radial compression of the cork. If the stopper were made of rubber, for example (with a Poisson's ratio of about +0.5), there would be a relatively large additional force required to overcome the radial expansion of the upper part of the rubber stopper.
Most car mechanics are aware that it is hard to pull a rubber hose (such as a coolant hose) off a metal pipe stub, as the tension of pulling causes the diameter of the hose to shrink, gripping the stub tightly. (This is the same effect as shown in a Chinese finger trap.) Hoses can more easily be pushed off stubs instead using a wide flat blade.
See also
Linear elasticity
Hooke's law
Impulse excitation technique
Orthotropic material
Shear modulus
Young's modulus
Coefficient of thermal expansion
References
External links
Meaning of Poisson's ratio
Negative Poisson's ratio materials
More on negative Poisson's ratio materials (auxetic)
Elasticity (physics)
Mechanical quantities
Dimensionless numbers of physics
Materials science
Ratios
Solid mechanics | Poisson's ratio | Physics,Materials_science,Mathematics,Engineering | 3,072 |
78,889,296 | https://en.wikipedia.org/wiki/Chuwi | CHUWI (馳為創新科技(深圳)有限公司) is an electronics manufacturer headquartered in Shenzhen, China. The company primarily produces laptops, tablet computers, and mini PCs.
History
CHUWI was established in September 2004 in Shenzhen, China. The company began its operations by offering MP4 products, laying the foundation for its future growth in the electronics industry.
In 2010, CHUWI formed strategic partnerships with MediaTek, Huawei, and Google. These collaborations played a crucial role in expanding the company's technological capabilities and market reach.
By 2013, CHUWI had established business relationships with Microsoft and Intel, further solidifying its position in the industry. During this time, the company also began recruiting sales agents to strengthen its distribution network.
In May 2015, CHUWI became a sponsor of the 2015 China Table Tennis Super League. This move helped enhance the company's brand visibility and reputation in the competitive electronics market.
Later in September 2015, CHUWI took significant steps to expand into overseas markets. The company established dedicated teams for Amazon, AliExpress, and eBay, and opened a warehouse in the United States to support its international operations.
Products
Laptops
LapBook Plus
CoreBook
FreeBook - 360 Touchscreen
GemiBook
MiniBook
AeroBook
Tablets
Hi10 X
Ubook Pro 8100Y
Ubook Pro N4100
Mini PCs
CoreBox
HeroBox
GBox Pro
HiGame
RZBox
Controversies
Some PCs lack Japanese regulatory certification, prompting administrative guidance
On April 12, 2023, Japan's Ministry of Internal Affairs and Communications issued administrative guidance to CHUWI Innovation Technology Co., Ltd., which handles laptops and tablets. It was discovered that some products sold under the CHUWI brand did not have the required Technical Conformity Certification (TCC) for the 5 GHz band.
The affected models included the 2017 "Hi13 (CWI534)," 2019 "UBook (CWI509)," "UBook Pro (CWI535)," "MiniBook (CWI526)," and 2020 "Hi10 X (CWI529)." These products were sold with misleading compliance labels. CHUWI plans to address the issue through a software update and has advised users to use only the 2.4 GHz band until the update is applied.
On April 14, CHUWI issued an apology for the lack of 5 GHz band certification in some of its laptops and 2-in-1 detachable devices. The company stated that it had been advised by a certification provider that "5GHz band channel certification could be inherited" and that "only 2.4GHz band certification was necessary," leading to the oversight.
CHUWI has begun the certification process for the affected products and expects to complete it by April 30. The company has expressed deep regret and pledged to prevent similar issues in the future.
Frequent failures of educational tablets
On October 4, 2023, the Tokushima Prefectural Board of Education revealed that nearly 20% of the 16,500 tablets provided to high schools as part of the "one device per student" initiative had become unusable due to issues such as battery swelling. These tablets were manufactured by CHUWI.
As of October 26, no repair timeline had been established, and students were sharing devices or using personal ones. The number of failures continued to rise sharply, reaching 4,834 by December 11.
In March 2024, the prefectural education director resigned to take responsibility for the issue.
References
External links
Computer hardware companies
Companies based in Shenzhen
Companies established in 2004 | Chuwi | Technology | 726 |
78,251,560 | https://en.wikipedia.org/wiki/Arrhenius%20Plaque | The Arrhenius Plaque (Swedish: Arrhenius-plaketten) is awarded annually by the Swedish Chemical Society in memory of Svante Arrhenius, a Swedish physicist, chemist, and long-time member of the society, "to a person or persons who have distinguished themselves through outstanding research in the field of chemistry or who have performed valuable work for the good of the Swedish Chemical Society".
Past recipients include Ragnar Ryhage (1962), Jerker Porath and Per Flodin (1963), Carl-Ivar Brändén (1976), Svante Wold (1984), Gunnar von Heijne (1997), Per Claesson (2008), Jonas Bergquist (2009), Lisbeth Olsson (2018) and Berit Olofsson (2021).
References
Chemistry awards
Science and technology awards
Swedish science and technology awards | Arrhenius Plaque | Technology | 183 |
295,446 | https://en.wikipedia.org/wiki/Kumis | Kumis ( , ), alternatively spelled coumis or kumyz, also known as airag ( ), is a traditional fermented dairy product made from mare milk. The drink is important to the peoples of the Central and East Asian steppes, of Turkic and Mongolic origin: Kazakhs, Bashkirs, Kalmyks, Kyrgyz, Mongols, and Yakuts. Kumis was historically consumed by the Khitans, Jurchens, Hungarians, and Han Chinese of North China as well.
Kumis is a dairy product similar to kefir, but is produced from a liquid starter culture, in contrast to the solid kefir "grains". Because mare's milk contains more sugars than cow's or goat's milk, when fermented, kumis has a higher, though still mild, alcohol content compared to kefir.
Even in the areas of the world where kumis is popular today, mare's milk remains a very limited commodity. Industrial-scale production, therefore, generally uses cow's milk, which is richer in fat and protein, but lower in lactose than the milk from a horse. Before fermentation, the cow's milk is fortified in one of several ways. Sucrose may be added to allow a comparable fermentation. Another technique adds modified whey to better approximate the composition of mare's milk.
Terminology and etymology
Kumis comes from the Turkic word kumïŕ. Gerard Clauson notes that kımız is found throughout the Turkic language family and cites the 11th-century appearance of the word in Dīwān Lughāt al-Turk written by Mahmud al-Kashgari in the Karakhanid language.
In Mongolia, the drink is called airag () or, in some areas, tsegee. William of Rubruck, in his 13th-century travels, calls the drink cosmos and describes its preparation among the Mongols.
Production of mare milk
A 1982 source reported 230,000 mares were kept in the Soviet Union specifically for producing milk to make into kumis. Rinchingiin Indra, writing about Mongolian dairying, says "it takes considerable skill to milk a mare" and describes the technique: the milker kneels on one knee, with a pail propped on the other, steadied by a string tied to an arm. One arm is wrapped behind the mare's rear leg and the other in front. A foal starts the milk flow and is pulled away by another person, but left touching the mare's side during the entire process.
In Mongolia, the milking season for horses traditionally runs between mid-June and early October. During one season, a mare produces approximately 1,000 to 1,200 litres of milk, of which about half is left to her foal.
Production
Kumis is made by fermenting raw milk (that is, unpasteurized) over the course of hours or days, often while stirring or churning. (The physical agitation has similarities to making butter.) During the fermentation, lactobacilli bacteria acidify the milk, and yeasts turn it into a carbonated and mildly alcoholic drink.
Traditionally, this fermentation took place in horse-hide containers, which might be left on the top of a yurt and turned over on occasion, or strapped to a saddle and joggled around over the course of a day's riding. Today, a wooden vat or plastic barrel may be used in place of the leather container. In modern, controlled production, the initial fermentation takes two to five hours, at a temperature of around ; this may be followed by a cooler aging period.
Kumis itself has a very low level of alcohol, between 0.7 and 2.5%, comparable to small beer, the common drink of medieval Europe that likewise helped to avoid the consumption of potentially contaminated water. Kumis can, however, be strengthened through freeze distillation, a technique Central Asian nomads are reported to have employed. It can also be made into the distilled beverage known as araka or arkhi.
History
Archaeological investigations of the Botai culture of ancient Kazakhstan have revealed traces of milk in bowls from the site of Botai, suggesting the domestication of dairy animals. No specific evidence for its fermentation has yet been found, but considering the location of the Botai culture and the nutritional properties of mare's milk, the possibility is high.
Kumis is an ancient beverage. Herodotus, in his 5th-century BC Histories, describes the Scythians' processing of mare's milk:
Now the Scythians blind all their slaves, to use them in preparing their milk. The plan they follow is to thrust tubes made of bone, not unlike our musical pipes, up the vulva of the mare, and then to blow into the tubes with their mouths, some milking while the others blow. They say that they do this because when the veins of the animal are full of air, the udder is forced down. The milk thus obtained is poured into deep wooden casks, about which the blind slaves are placed, and then the milk is stirred round. That which rises to the top is drawn off, and considered the best part; the under portion is of less account.
This is widely believed to be the first description of ancient kumis-making. Apart from the idiosyncratic method of mare-milking, it matches up well enough with later accounts, such as this one given by 13th-century traveller William of Rubruck:
This cosmos, which is mare's milk, is made in this wise. […] When they have got together a great quantity of milk, which is as sweet as cow's as long as it is fresh, they pour it into a big skin or bottle, and they set to churning it with a stick […] and when they have beaten it sharply it begins to boil up like new wine and to sour or ferment, and they continue to churn it until they have extracted the butter. Then they taste it, and when it is mildly pungent, they drink it. It is pungent on the tongue like rapé wine when drunk, and when a man has finished drinking, it leaves a taste of milk of almonds on the tongue, and it makes the inner man most joyful and also intoxicates weak heads, and greatly provokes urine.
Rubruck also mentions that the Mongols prized a variety of kumis he calls caracomos ("black comos"), which was reserved for "great lords".
In the 19th century, "kumyss" was used to treat gastrointestinal disorders.
Consumption
Strictly speaking, kumis is in its own category of alcoholic drinks, because it is made neither from fruit nor from grain. Technically, it is closer to wine than to beer, because the fermentation occurs directly from sugars (wine is usually fermented directly from fruit, whereas beer relies on starches, usually from grain, which convert to sugars by mashing). In terms of experience and traditional manner of consumption, however, it is much more comparable to beer and is even milder in alcoholic content than beer. It is arguably the region's beer equivalent.
Kumis is very light in body compared to most dairy drinks. It has a unique, slightly sour flavor with a bite from the mild alcoholic content. The exact flavor is greatly variable between different producers.
Kumis is usually served cold or chilled. Traditionally it is sipped out of small, handle-less, bowl-shaped cups or saucers, called piyala. The serving of it is an essential part of Kyrgyz hospitality on the jayloo or high pasture, where they keep their herds of animals (horse, cattle, and sheep) during the summer phase of transhumance.
Cultural role
During the Yuan dynasty of China, kumis essentially served as a replacement for tea. Furthermore, Möngke Khan, the fourth Great Khan of the Mongol Empire, had a drinking fountain made in his capital of Karakorum, dispensing kumis alongside Chinese rice wine, Scandinavian mead, and Persian grape wine as a symbol of the empire's diversity and size.
Bishkek, the capital of Kyrgyzstan, is supposedly named after the paddle used to churn the fermenting milk.
The famous Russian writer Leo Tolstoy in A Confession spoke of running away from his troubled life by drinking kumis.
The Russian composer Alexander Scriabin was recommended a kumis diet and "water cure" by his doctor in his twenties, for his nervous condition and right-hand injury.
The Japanese soft drink Calpis models its flavor after the taste of kumis.
See also
Ayran
Blaand
Cacık
Calpis
Chal
Doogh
Mattha
Chaas
Laban
Lassi
Suutei tsai
Tarasun
List of ancient dishes and foods
List of dairy products
Notes
References
External links
Fermented dairy products
Fermented drinks
Horse products
Kazakh cuisine
Russian cuisine
Buryat cuisine
Tuvan cuisine
Kalmyk cuisine
Altai cuisine
Yakut cuisine
Bashkir cuisine
Tatar cuisine
Soviet cuisine
Kyrgyz cuisine
Ancient dishes
Mongolian alcoholic drinks | Kumis | Biology | 1,899 |
19,092,009 | https://en.wikipedia.org/wiki/Duo%20LNB | A Duo LNB is a double low-noise block downconverter (LNB) developed by SES for the simultaneous reception of satellite television signals from both the Astra 23.5°E and Astra 19.2°E satellite positions.
It is a monoblock LNB, which comprises two feedhorns with a single body of electronics containing the LNB stages along with switching circuitry to select which received signal is passed to the output(s). The Duo LNB uses linear polarisation.
Availability
A Duo LNB can be purchased in most parts of Europe but it is particularly marketed to Germany, the Netherlands, Belgium, Czechia and Slovakia.
Duo LNBs operate as universal LNBs and are manufactured under various brand names, such as Maximum and Inverto, in single, twin-output and quad-output versions – with one, two and four outputs (independently selectable for polarisation and frequency band), respectively, for one, two or four receivers/tuners.
The Duo LNB is available in two versions - the original Duo LNB for dishes of 80 cm or 85 cm diameter and the Duo LNB II for dishes of 60 cm.
Background
The Astra 23.5°E orbital position was established as a major source of direct-to-home (DTH) broadcasts for central and western Europe with the launch of Astra 3A at the end of 2007, and some channels moved there from other satellite positions (in particular 19.2° east) so viewers, who were unable to erect two dishes to receive transmissions from both positions, had to choose between them.
In particular, the Czech CS Link and Slovak SkyLink networks moved to Astra 23.5°E, and the Dutch Canal Digitaal launched a new thematic bouquet at 23.5° east in October 2007. The Dutch regional broadcasters all moved to Astra 23.5°E in September 2007, and were lost to viewers without access to the new satellite position.
The Duo LNB was introduced to enable a single satellite dish to be used to receive all the channels from 19.2° east and 23.5° east.
The ASTRA2Connect satellite internet service also operates from 23.5° east.
In May 2010 the Astra 3B satellite was launched to the Astra 23.5° east position to release the Astra 1E and Astra 1G satellites previously in that position for use at other orbital positions. The launch had been much postponed due to technical problems with the Ariane 5 launch rocket. In February 2011, Bulgarian DTH operator Satellite BG launched a package of more than 60 standard definition channels and 12 high definition channels using three transponders on Astra 3B, further increasing the appeal for viewers to receive both satellite positions.
Technology
The basic technology behind the Duo LNB is not new. It takes advantage of the fact that signals hitting a dish off-axis will be focused (albeit with some diffusion) off axis in the opposite direction. So, with the dish aligned so that the central LNB is receiving one satellite, a secondary offset LNB can be aligned on the focus of a second satellite spaced away from the first.
This effect has been exploited for many years to receive signals from two satellites at once with a single dish, and two LNBs have been most commonly arranged on a dish in this way for reception of Astra 19.2°E and the Hot Bird satellites at 13° east, primarily for the abundance of TV channels from 19.2° east, and some additional channels (especially adult channels) from 13° east.
A monoblock LNB provides a convenient alternative to fixing and aligning two LNBs to a dish independently. The two feedhorns are positioned at the correct spacing for reception from the two satellites required and the DiSEqC switching system is used to select between the signals from the two satellites with commands from the connected receiver. In other respects, the monoblock LNB acts as a normal LNB to the connected receiver.
The required separation of the monoblock's feedhorns depends on the angular separation of the satellites to be received, the position of the receive site on the Earth's surface and the focal length of the dish. Fortunately, monoblock LNBs can be standardised for sites across Europe provided that a "standard" offset dish with a focal length/diameter (f/D) ratio of 0.6 is used.
Monoblock LNBs for 19.2° east and 13° east have been widely available for several years (indeed, the DiSEqC switching system was originally designed for just this setup). However, these do not function correctly for Astra 23.5°E and Astra 19.2°E because these satellites are at a different angular separation.
In fact, it can be difficult to physically fit two separate LNBs onto a dish at the correct separation for Astra 23.5°E and Astra 19.2°E because their bulk may prevent the feedhorns sitting close enough together.
The Duo LNB is carefully designed with the correct spacing of the feedhorns, DiSEqC level 1.0 switching between the satellites and a low noise amplifier and conversion system.
Installation
The Duo LNB is designed to be fitted with the feedhorn for Astra 23.5°E mounted on the dish's feedarm, and the 19.2°E feedhorn sticking out to the right - as viewed standing in front of the dish, with the satellites behind you. The Astra 23.5°E feedhorn is identified with a "23.5" marking on the casing. The dish is then aligned on the 23.5°E satellite position, using a signal strength meter, in the normal way.
The Duo LNB is rotated in the feed clamp to a certain tilt angle to provide both the correct 'skew' angle for the feedhorns to align with the incoming signals, and the necessary height difference between the feedhorns to accommodate the different elevations of the two satellite positions. The correct skew angle and height difference depend on the position of the receive site on Earth's surface, and in most locations the tilt angle from the LNB is a compromise between their ideal settings. However, within Europe the single tilt angle adjustment provides sufficient accuracy for both settings for reliable reception.
The tilt angle for the Duo LNB at the receive site location may be found in maps or city tables (a scale is marked on the LNB's 23.5°E feedhorn casing) or found by adjustment with a signal meter connected.
By setting the correct tilt angle and aligning the whole dish in azimuth and elevation, the two feedhorns of the LNB are optimally aligned for both orbital positions.
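The underlying geometry can be illustrated with the standard look-angle formulas for geostationary satellites. The following Python sketch is an approximation only: it assumes a spherical Earth, a northern-hemisphere site, and example coordinates for Amsterdam, and the crude tilt estimate at the end is not part of any SES installation procedure and does not replace the scale on the LNB casing or a signal meter.

```python
import math

R_RATIO = 6378.137 / 42164.0  # Earth radius / geostationary orbit radius

def look_angles(site_lat_deg, site_lon_deg, sat_lon_deg):
    """Approximate azimuth/elevation (degrees) of a geostationary satellite
    as seen from a northern-hemisphere site (spherical-Earth formulas)."""
    lat = math.radians(site_lat_deg)
    dlon = math.radians(sat_lon_deg - site_lon_deg)
    cos_psi = math.cos(dlon) * math.cos(lat)
    elevation = math.degrees(
        math.atan((cos_psi - R_RATIO) / math.sqrt(1.0 - cos_psi ** 2)))
    azimuth = 180.0 - math.degrees(math.atan2(math.tan(dlon), math.sin(lat)))
    return azimuth, elevation

# Example receive site (Amsterdam, coordinates assumed for illustration)
site_lat, site_lon = 52.37, 4.90
az1, el1 = look_angles(site_lat, site_lon, 23.5)  # Astra 23.5E, on the feedarm
az2, el2 = look_angles(site_lat, site_lon, 19.2)  # Astra 19.2E, offset feedhorn

# Rough tilt of the line joining the two apparent satellite positions,
# relative to the local horizontal; sign depends on the mounting convention.
tilt = math.degrees(math.atan2(el2 - el1,
                               (az2 - az1) * math.cos(math.radians(el1))))
print(f"23.5E az/el {az1:.1f}/{el1:.1f}  19.2E az/el {az2:.1f}/{el2:.1f}  "
      f"tilt ~{abs(tilt):.0f} deg")
```

Because both feedhorns share one body, this single tilt rotation sets the skew and the small elevation difference between the two orbital positions at the same time, which is why one adjustment suffices across Europe.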
Name confusion
The Duo LNB is a monoblock type LNB designed for accessing two satellite positions with a single dish and it should not be confused with a "dual LNB", which is the common (US) name for an LNB with a single feedhorn but two separate outputs.
A double LNB called just a "Monoblock" will usually be for reception of 19.2° east and 13° east, and not a Duo LNB suitable for Astra 23.5°E and Astra 19.2°E.
See also
SES satellite operator
Astra satellite family
Astra 23.5°E one satellite position received
Astra 19.2°E second satellite position received
Astra 3A satellite
Astra 3B satellite
Monoblock LNB
ASTRA2Connect satellite Internet service at 23.5° east
References
External links
SES fleet information and map
Official SES site
Antennas | Duo LNB | Engineering | 1,580 |
9,510,615 | https://en.wikipedia.org/wiki/Focal%20infection%20theory | Focal infection theory is the historical concept that many chronic diseases, including systemic and common ones, are caused by focal infections. In present medical consensus, a focal infection is a localized infection, often asymptomatic, that causes disease elsewhere in the host, but focal infections are fairly infrequent and limited to fairly uncommon diseases. (Distant injury is focal infection's key principle, whereas in ordinary infectious disease, the infection itself is systemic, as in measles, or the initially infected site is readily identifiable and invasion progresses contiguously, as in gangrene.) Focal infection theory, rather, so explained virtually all diseases, including arthritis, atherosclerosis, cancer, and mental illnesses.
An ancient concept that took modern form around 1900, focal infection theory was widely accepted in medicine by the 1920s. In the theory, the focus of infection might lead to secondary infections at sites particularly susceptible to such microbial species or toxin. Commonly alleged foci were diverse—appendix, urinary bladder, gall bladder, kidney, liver, prostate, and nasal sinuses—but most commonly were oral. Besides dental decay and infected tonsils, both dental restorations and especially endodontically treated teeth were blamed as foci. The putative oral sepsis was countered by tonsillectomies and tooth extractions, including of endodontically treated teeth and even of apparently healthy teeth, newly popular approaches—sometimes leaving individuals toothless—to treat or prevent diverse diseases.
Drawing severe criticism in the 1930s, focal infection theory—whose popularity zealously exceeded consensus evidence—was discredited in the 1940s by research attacks that drew overwhelming consensus of this sweeping theory's falsity. Thereupon, dental restorations and endodontic therapy became again favored. Untreated endodontic disease retained mainstream recognition as fostering systemic disease. But only alternative medicine and later biological dentistry continued highlighting sites of dental treatment—still endodontic therapy, but, more recently, also dental implant, and even tooth extraction, too—as foci of infection causing chronic and systemic diseases. In mainstream dentistry and medicine, the primary recognition of focal infection is endocarditis, if oral bacteria enter blood and infect the heart, perhaps its valves.
Entering the 21st century, scientific evidence supporting general relevance of focal infections remained slim, yet evolved understandings of disease mechanisms had established a third possible mechanism—altogether, metastasis of infection, metastatic toxic injury, and, as recently revealed, metastatic immunologic injury—that might occur simultaneously and even interact. Meanwhile, focal infection theory has gained renewed attention, as dental infections apparently are widespread and significant contributors to systemic diseases, although mainstream attention is on ordinary periodontal disease, not on hypotheses of stealth infections via dental treatment. Despite some doubts renewed in the 1990s by conventional dentistry's critics, dentistry scholars maintain that endodontic therapy can be performed without creating focal infections.
Rise and popularity (1890s–1930s)
Roots and dawn
Germ theory
Hippocrates, in ancient Greece, had reported cure of an arthritis case by tooth extraction. Yet focal infection, as such, appeared in modern medicine in 1877, when Karl Weigert reported "dissemination of 'tuberculosis poison' ". The prior year's breakthrough by Robert Koch, a fellow German, had launched medical bacteriology—a set of laboratory methods to isolate, culture, and multiply a single bacterium of one species—whereby Koch announced discovery of the "tubercle bacillus" in 1882, fully premising the modern principle of focal infection. In 1884, William Henry Welch, tasked to design the medical department at the newly forming Johns Hopkins University, imported the German model, "scientific medicine", to America.
As progressively more diseases drew an infectious hypothesis that led to a pathogen discovery, conjectures grew that virtually all diseases are infectious. In 1890, German dentist Willoughby D Miller attributed a set of oral diseases to infections, and attributed a set of extraoral diseases—as of lung, stomach, brain abscesses, and other conditions—to the oral infections. In 1894, Miller became the first to identify bacteria in samples of tooth pulp. Miller advised root canal therapy. Yet ancient and folk concepts, entrenched as Galenic principles of humoral medicine, found new outlet in medical bacteriology, a pillar of the new "scientific medicine". Around 1900, British surgeons, still knife-happy, were urging "surgical bacteriology".
Autointoxication
In 1877, French chemist Louis Pasteur adopted Robert Koch's bacteriology protocols, but soon directed them to developing the first modern vaccines, and ultimately introduced rabies vaccine in 1885. Its success funded Pasteur's formation of the globe's first biomedical research institute, the Pasteur Institute. In 1886, Pasteur welcomed to Paris the emigration from Russia by international scientific celebrity Elie Metchnikoff—discoverer of phagocytes, mediating innate immunity—whom Pasteur granted an entire floor of the Pasteur Institute, once it opened in 1888. Later the institute's director and a 1908 Nobelist, Metchnikoff believed, as did his German immunology rival Paul Ehrlich—theorist on antibody, mediating acquired immunity—and as did Pasteur, too, that nutrition influences immunity. Metchnikoff brought to France its first yogurt cultures for probiotic microorganisms to suppress the colon's putrefactive microorganisms, which allegedly fostered the colon's toxic seepage causing degenerative disease, the putative phenomenon termed autointoxication. Metchnikoff reasoned that the colon functions as a "vestigial cesspool" that stores waste but is unneeded.
Abdominal surgery's pioneer, Sir Arbuthnot Lane, based in London, drew from Metchnikoff and clinical observation to identify "chronic intestinal stasis"—in lay terms, intractable constipation—presumably, "flooding of the circulation with filthy material". Reporting surgical treatment in 1908, Lane eventually offered total colon removal, but later favored simply surgical release of colonic "kinks", and in 1925, abandoning surgery, began promoting prevention and intervention by diet and lifestyle, how Lane secured his contemporary reputation as a crank. Since 1875, in the American state Michigan, physician John Harvey Kellogg had targeted "bowel sepsis"—an allegedly prime cause of degeneration and disease—at his health resort, Battle Creek Sanitarium. Having, in fact, coined the term sanitarium, Kellogg yearly received several thousand patients, including US Presidents and celebrities, at his huge resort, advertised as the "University of Health". But in the 1910s, as North American medical schools emulated the German model—that is, "scientific medicine"—medical doctors who recognized "focal infection" were hinting a scientific basis versus the older, alleged "health faddists" like medical doctor Kellogg and like minister Sylvester Graham.
Medical popularity
Hunter on "oral sepsis"
In 1900, British surgeon William Hunter blamed many disease cases on oral sepsis. In 1910, lecturing in Montreal at McGill University, Hunter declared, "The worst cases of anemia, gastritis, colitis, obscure fevers, nervous disturbances of all kinds from mental depression to actual lesions of the cord, chronic rheumatic infections, kidney diseases are those which owe their origin to or are gravely complicated by the oral sepsis produced by these gold traps of sepsis." Thus, he apparently indicted dental restorations. Incriminating their execution, rather, his American critics lobbied for stricter requirements on dentistry licensing. Still, Hunter's lecture—as later recalled—"ignited the fires of focal infection". Ten years later, he proudly accepted that credit. And yet, read carefully, his lecture asserts a sole cause of oral sepsis: dentists who instruct patients to never remove partial dentures.
Billings & Rosenow
Focal infection theory's modern era really began with physician Frank Billings, based in Chicago, and his case reports of tonsillectomies and tooth extractions that apparently cured infections of distant organs. Replacing Hunter's term oral sepsis with focal infection, Billings in November 1911 lectured at the Chicago Medical Society, and published it in 1912 as an article for the American medical community. In 1916, Billings lectured in California at Stanford University Medical School, this time printed in book format. Billings thus popularized intervention by tonsillectomy and tooth extraction. A pupil of Billings, Edward Rosenow held that extraction alone was often insufficient, and urged teamwork by dentistry and medicine. Rosenow developed the principle elective localization, whereby microorganisms have affinities for particular organs, and also espoused extreme pleomorphism, whereby a bacterium can drastically change form and perhaps evade conventional detection methods.
Preeminent recognition
Since 1889, in the American state Minnesota, brothers William Mayo and Charles Mayo had built an international reputation for surgical skill at their Mayo Clinic, by 1906 performing some 5,000 surgeries a year, over 50% intra-abdominal, a tremendous number at the time, with unusually low mortality and morbidity. Though originally distancing themselves from routine medicine and skeptical of laboratory data, they later recruited Edward Rosenow from Chicago to help improve Mayo Clinic's diagnosis and care and to enter basic research via experimental bacteriology. Rosenow influenced Charles Mayo, who by 1914 published to support focal infection theory alongside Rosenow.
At Johns Hopkins University's medical school, launched in 1894 as America's first to teach "scientific medicine", the eminent Sir William Osler was succeeded as professor of medicine by Llewellys Barker, who became a prominent proponent of focal infection theory. Although many of the Hopkins medical faculty remained skeptics, Barker's colleague William Thayer cast support. As Hopkins' chief physician, Barker was a pivotal convert propelling the theory to the center of American routine medical practice. Russell Cecil, famed author of Cecil's Essentials of Medicine, too, lent support. In 1921, British surgeon William Hunter announced that oral sepsis was "coming of age".
Although physicians had already interpreted pus within a bodily compartment as a systemic threat, pus from infected tooth roots often drained into the mouth and thereby was viewed as systemically inconsequential. Amid focal infection theory, it was concluded that that was often the case—while immune response prevented dissemination from the focus—but that immunity could fail to contain the infection, that dissemination from the focus could ensue, and that systemic disease, often neurological, could result. By 1930, excision of focal infections was considered a "rational form of therapy" undoubtedly resolving many cases of chronic diseases. Its inconsistent effectiveness was attributed to unrecognized foci—perhaps inside internal organs—that the clinicians had missed.
Dental reception
In 1923, upon some 25 years of researches, dentist Weston Andrew Price of Cleveland, Ohio, published a landmark book, then a related article in the Journal of the American Medical Association in 1925. Price concluded that after root canal therapy, teeth routinely host bacteria producing potent toxins. Transplanting the teeth into healthy rabbits, Price and his researchers duplicated heart and arthritic diseases. Although Price noted often seeing patients "suffering more from the inconvenience and difficulties of mastication and nourishment than they did from the lesions from which their physician or dentist had sought to give them relief", his 1925 debate with John P Buckley was decided in favor of Price's position: "practically all infected pulpless teeth should be extracted". As chairman of the American Dental Association's research division, Price was a leading influence on the dentistry profession's opinion. Into the late 1930s, textbook authors relied on Price's 1923 treatise.
In 1911, the year that Frank Billings lectured on focal infection to the Chicago Medical Society, unsuspected periapical disease was first revealed by dental X-ray. Introduced by C. Edmund Kells, dental radiography came to feed the "mania of extracting devitalized teeth". Even Price was cited as an authoritative source espousing conservative intervention at focal infections. Kells, too, advocated conservative dentistry. Many dentists were "100 percenters", extracting every tooth exhibiting either necrotic pulp or endodontic treatment, and extracted apparently healthy teeth, too, as suspected foci, leaving many persons toothless. A 1926 report published by several authors in Dental Cosmos—a dentistry journal where Willoughby Miller had published in the 1890s—advocated extraction of known healthy teeth to prevent focal infection. Endodontics nearly vanished from American dental education. Some dentists held that root canal therapy should be criminalized and penalized with six months of hard labor.
Psychiatric promulgation
Near the turn of the 20th century, psychiatry's predominant explanations of schizophrenia's causation, besides heredity, were focal infection and autointoxication. In 1907, psychiatrist Henry Andrews Cotton became director of the psychiatric asylum at Trenton State Hospital in the American state New Jersey. Influenced by focal infection theory's medical popularity, Cotton identified focal infections as the main causes of dementia praecox (now schizophrenia) and of manic depression (now bipolar disorder). Cotton routinely prescribed surgery not only to clean the nasal sinuses and to extract the tonsils and the teeth, but also to remove the appendix, gall bladder, spleen, stomach, colon, cervix, ovaries, and testicles, while Cotton claimed up to 85% cure rate.
Despite Cotton's death rate of some 30%, his fame rapidly spread through America and Europe, and the asylum drew influx of patients. The New York Times heralded "high hope". Cotton made a European lecture tour, and Princeton University Press and Oxford University Press simultaneously published his book in 1922. Despite skepticism in the profession, psychiatrists sustained pressure to match Cotton's treatments, as patients would ask why they were being denied curative treatment. Other patients were pressured or compelled into the treatment without their own consent. Cotton had his two sons' teeth extracted as preventive healthcare—although each later committed suicide. In the 1930s, however, focal infection fell from psychiatry as an explanation, Cotton having died in 1933.
Criticism and decline (1930s–1950s)
Early skepticism
Addressing the Eastern Medical Society in December 1918, New York City physician Robert Morris had explained that focal infection theory had drawn much interest but that understanding was incomplete, while the theory was earning disrepute through overzealousness of some advocates. Morris called for facts and explanation from scientists before physicians continued investing so steeply in it, already triggering vigorous disputes and embittering divisions among clinicians as well as uncertainty among patients.
In 1919, the American Dental Association's forerunner, the National Dental Association, held in New Orleans its annual meeting, where C Edmund Kells, the originator and pioneer of dental X-ray, delivered a lecture, published in 1920 in the association's journal, largely discussing focal infection theory, which Kells condemned as a "crime". Kells stressed that X-ray technology is to improve dentistry, not to enhance the "mania of extracting devitalized teeth". Kells urged dentists to reject physicians' prescriptions of tooth extractions.
Focal infection theory's elegance suggested simple application, but the surgical removals brought meager "cure" rate, occasional disease worsening, and inconsistent experimental results. Still, the lack of controlled clinical trials, among present criticism, was normal at the time—except in New York City. Around 1920, at Henry Cotton's claims of up to 85% success treating schizophrenia and manic depression, Cotton's major critic was George Kirby, director of the New York State Psychiatric Institute on Ward's Island. As colleagues of Kirby, two researchers—bacteriologist Nicolas Kopeloff and psychiatrist Clarence Cheney—ventured from Ward's Island to Trenton, New Jersey, to investigate Cotton's practice.
Research attacks
In two controlled clinical trials with alternate allocation of patients, Nicolas Kopeloff, Clarence Cheney, and George Kirby concluded Cotton's psychiatric surgeries ineffective: those who improved were already so prognosed, and others improved without surgery. Publishing two papers, the team presented the findings at the American Psychiatric Association's 1922 and 1923 annual meetings. At Johns Hopkins University, Phyllis Greenacre questioned most of Cotton's data, and later helped steer American psychiatry into psychoanalysis. Antipsychotic colectomy vanished except in Trenton until Cotton—who used publicity and word of mouth, kept the 30% death rate unpublicized, and passed a 1925 investigation by New Jersey Senate—died by heart attack in 1933.
By 1927, Weston Price's researches had been criticized for allegedly "faulty bacterial technique". In the 1930s and 1940s, researchers and editors dismissed the studies of Price and of Edward Rosenow as flawed by insufficient controls, by massive doses of bacteria, and by contamination of endontically treated teeth during extraction. In 1938, Russell Cecil and D Murray Angevine reported 200 cases of rheumatoid arthritis, but no consistent cures by tonsillectomies or tooth extractions. They commented, "Focal infection is a splendid example of a plausible medical theory which is in danger of being converted by its enthusiastic supporters into the status of an accepted fact." Newly a critic, Cecil alleged that foci were "anything readily accessible to surgery".
In 1939, E W Fish published landmark findings that would revive endodontics. Fish implanted bacteria into guinea pigs' jaws, and reported that four zones of reaction consequently developed. Fish reported that the first zone was the zone of infection, whereas the other three zones—surrounding the zone of infection—revealed immune cells or other host cells but no bacteria. Fish theorized that by removing the infectious nidus, dentists would permit recovery from the infection. This reasoning and conclusion by Fish became the basis for successful root-canal treatment. Still, endodontic therapy of the era indeed posed substantial risk of failure, and fear of focal infection crucially motivated endodontists to develop new and improved technology and techniques.
End of the focal era
The review and "critical appraisal" by Hobart A Reimann and W Paul Havens, published in January 1940, was perhaps the most influential criticism of focal infection theory. Recasting British surgeon William Hunter's landmark pronouncements of 30 years earlier as widely misinterpreted, they summarized that "the removal of infectious dental focal infections in the hope of influencing remote or general symptoms of disease must still be regarded as an experimental procedure not devoid of hazard". By 1940, Louis I Grossman's textbook Root Canal Therapy flatly rejected the methods and conclusions made earlier by Weston Price and especially by Edward Rosenow. Amid improvements in endodontics and medicine, including release of sulfa drugs and antibiotics, a backlash to the "orgy" of tooth extractions and tonsillectomies ensued.
K A Easlick's 1951 review in the Journal of the American Dental Association notes, "Many authorities who formerly felt that focal infection was an important etiologic factor in systemic disease have become skeptical and now recommend less radical procedures in the treatment of such disorders". A 1952 editorial in Journal of the American Medical Association tolled the era's end by stating that "many patients with diseases presumably caused by foci of infection have not been relieved of their symptoms by removal of the foci", that "many patients with these same systemic diseases have no evident focus of infection", and that "foci of infection are as common in apparently healthy persons as in those with disease". Although some support extended into the late 1950s, focal infection vanished as the primary explanation of chronic, systemic diseases, and the theory was generally abandoned in the 1950s.
Revival and evolution (1990s–2010s)
Despite the general theory's demise, focal infection remained a formal, if rare, diagnosis, as in idiopathic scrotal gangrene and angioneurotic edema. Meanwhile, by way of continuing case reports claiming cures of chronic diseases like arthritis after extraction of infected or root-filled teeth, and despite lack of scientific evidence, "dental focal infection theory never died". In fact, severe endodontic disease resembles classic focal infection theory. In 1986, it was noted that, "in spite of a decline in recognition of the focal-infection theory, the association of decayed teeth with systemic disease is taken very seriously". Eventually, the theory of focal infection drew reconsideration. Conversely, attribution of endocarditis to dentistry has entered doubt via case-control study, as the species usually involved is present throughout the human body.
Stealth pathogens
With the 1950s introduction of antibiotics, attempts to explain unexplained diseases via bacterial etiology seemed all the more unlikely. By the 1970s, however, it was established that antibiotics could trigger bacteria to switch to their L phase. Eluding detection by traditional methods of medical microbiology, bacterial L forms and the similar mycoplasma—and, later, viruses—became the entities expected in the theory of focal infection. Yet until the 1980s, such researchers were few, largely owing to scarce funding for such investigations.
Despite the limited funding, research established that L forms can adhere to red blood cells and thereby disseminate from foci within internal organs such as the spleen, or from oral tissues and the intestines, especially during dysbiosis. Perhaps some of Weston Price's identified "toxins" in endodontically treated teeth were L forms, thought nonexistent by bacteriologists of his time and widely overlooked into the 21st century. Apparently, dental infections, including by uncultured or cryptic microorganisms, contribute to systemic diseases.
Periodontal medicine
At the 1990s' emergence of epidemiological associations between dental infections and systemic diseases, American dentistry scholars have been cautious, some seeking successful intervention to confirm causality. Some American sources emphasized epidemiology's inability to determine causality, categorized the phenomena as progressive invasion of local tissues, and distinguished that from focal infection theory—which they assert was evaluated and disproved by the 1940s. Others have found focal infection theory's scientific evidence still slim, but have conceded that evolving science might establish it. Yet select American authors affirm the return of a modest theory of focal infection.
European sources find it more certain that dental infections drive systemic diseases, at least by driving systemic inflammation, and probably, among other immunologic mechanisms, by molecular mimicry resulting in antigenic crossreaction with host biomolecules, while some seemingly find progressive invasion of local tissues compatible with focal infection theory. Acknowledging that beyond epidemiological associations, successful intervention is needed to establish causality, they emphasize that biological explanation is needed atop both, and the biological aspect is thoroughly established already, such that general healthcare, as for cardiovascular disease, must address prevalent periodontal disease, a stance matched in Indian literature. Thus, there has emerged the concept periodontal medicine.
Dental controversies
During the 1980s, dentist Hal Huggins, sparking severe controversy, spawned biological dentistry, which claims that conventional tooth extraction routinely leaves within the tooth socket the periodontal ligament, which often becomes gangrenous and then forms a jawbone cavitation that seeps infectious and toxic material. Sometimes forming elsewhere in bones after injury or ischemia, jawbone cavitations are recognized as foci also in osteopathy and in alternative medicine, but conventional dentists generally conclude them nonexistent. Although the International Academy of Oral Medicine & Toxicology claims that the scientific evidence establishing existence of jawbone cavitations is overwhelming and even published in textbooks, the diagnosis and related treatment remain controversial, and allegations of quackery persist.
Huggins and many biological dentists also espouse Weston Price's findings on endodontically treated teeth routinely being foci of infection, although these dentists have been accused of quackery. Conventional belief is that microorganisms within inaccessible regions of a tooth's roots are rendered harmless once entrapped by the filling material, although little evidence supports this. A H Rogers in 1976 and E H Ehrmann in 1977 had dismissed any relation between endodontics and focal infection. Upon dentist George Meinig's 1994 book, Root Canal Cover-Up, which discussed the researches of Rosenow and of Price, some dentistry scholars reasserted that the claims were evaluated and disproved by the 1940s. Yet Meinig was but one of at least three authors who in the early 1990s independently renewed the concern.
Boyd Haley and Curt Pendergrass reported finding especially high levels of bacterial toxins in root-filled teeth. Although such possibility appears especially likely amid compromised immunity—as in individuals cirrhotic, asplenic, elderly, rheumatoid arthritic, or using steroid drugs—there remained a lack of carefully controlled studies definitely establishing adverse systemic effects. Conversely, some, if few, studies have investigated effects of systemic disease on root-canal therapy's outcomes, which tend to worsen with poor glycemic control, perhaps via impaired immune response, a factor largely ignored until recently, but now recognized as important. Still, even by 2010, "the potential association between systemic health and root canal therapy has been strongly disputed by dental governing bodies and there remains little evidence to substantiate the claims".
The traditional root-filling material is gutta-percha, whereas a new material, Biocalex, drew initial optimism even in alternative dentistry, but Biocalex-filled teeth were later reported by Boyd Haley to likewise seep toxic byproducts of anaerobic bacterial metabolism. Seeking to sterilize the tooth interior, some dentists, both alternative and conventional, have applied laser technology. Although endodontic therapy can fail and eventually often does, dentistry scholars maintain that it can be performed without creating focal infections. And even by 2010, molecular methods had rendered no consensus reports of bacteremia traced to asymptomatic endodontic infection. In any event, the predominant view is that shunning endodontic therapy or routinely extracting endodontically treated teeth to treat or prevent systemic diseases remains unscientific and misguided.
Footnotes
Diseases of oral cavity, salivary glands and jaws
Epidemiology | Focal infection theory | Environmental_science | 5,437 |
32,897,897 | https://en.wikipedia.org/wiki/Vapreotide | Vapreotide (Sanvar) is a synthetic somatostatin analog. It is used in the treatment of esophageal variceal bleeding in patients with cirrhotic liver disease and AIDS-related diarrhea.
It is an 8 residue peptide with sequence H-D-Phe-Cys(1)-Tyr-D-Trp-Lys-Val-Cys(1)-Trp-NH2.
References
Antineoplastic drugs
Lactams
Peptides
Somatostatin inhibitors | Vapreotide | Chemistry | 113 |
53,724,921 | https://en.wikipedia.org/wiki/Talithia%20Williams | Talithia D. Williams is an American statistician and mathematician at Harvey Mudd College who researches the spatiotemporal structure of data. She was the first black woman to achieve tenure at Harvey Mudd College. Williams is an advocate for engaging more African Americans in engineering and science.
Education
Her educational background includes a bachelor's degree in Mathematics from Spelman College, Master's degrees in both Mathematics from Howard University and Statistics from Rice University, and a Ph.D. in Statistics from Rice University. Williams was in one of the first EDGE cohorts. She is a winner of the Thomas J. Watson Fellowship.
Career and research
Williams has worked at the Jet Propulsion Laboratory (JPL), the National Security Agency (NSA), and NASA. She is an associate professor of mathematics and also serves as Associate Dean for Research and Experiential Learning at Harvey Mudd College. She is Secretary and Treasurer for the EDGE Foundation which sponsors summer programs for women, and on the boards of the MAA and SACNAS. Williams has done significant outreach, with the goal of bringing mathematics to life and "rebranding the field of mathematics as anything but dry, technical or male-dominated but instead a logical, productive career path that is crucial to the future of the country."
Williams has developed statistical models focused on understanding the structure of spatiotemporal data, with environmental applications. She has partnered with the World Health Organization in developing a cataract model used to predict the cataract surgical rate for countries in Africa.
Williams was a host of the six part PBS series NOVA Wonders in April 2018. She is the author of the book Power in Numbers: The Rebel Women of Mathematics (Race Point Publishing, 2018). Williams was the narrator for the five-part PBS series NOVA Universe Revealed in November 2021.
TED talk
In 2014, Williams gave a highly viewed TED talk titled "Own Your Body's Data", discussing the potential insights to be gained from collecting personal health data.
Honors
In 2015 Williams received the MAA Henry L. Alder Award for exemplary teaching by an early career mathematics professor. Williams was honored by the Association for Women in Mathematics and the Mathematical Association of America, when they selected her to be the AWM/MAA Falconer Lecturer at MathFest 2017 in Chicago, IL. The title of her talk is "Not So Hidden Figures: Unveiling Mathematical Talent." Williams was also recognized by Mathematically Gifted & Black as a Black History Month 2017 Honoree. She received the 2022 Joint Policy Board for Mathematics Communication Award "for bringing mathematics and statistics into the homes of millions through her work as a TV host, renowned speaker, and author."
References
20th-century American mathematicians
21st-century American mathematicians
African-American mathematicians
African-American women mathematicians
Spelman College alumni
Rice University alumni
Harvey Mudd College faculty
Living people
Year of birth missing (living people)
20th-century American women mathematicians
21st-century American women mathematicians
Data activism | Talithia Williams | Technology | 606 |
21,438,687 | https://en.wikipedia.org/wiki/Drug-induced%20autoimmune%20hemolytic%20anemia | Drug-induced autoimmune hemolytic anemia also known as Drug-induced immune hemolytic anemia (DIIHA) is a rare cause of hemolytic anemia. It is difficult to differentiate from other forms of anemia which can lead to delays in diagnosis and treatment. Many different types of antibiotics can cause DIIHA and discontinuing the offending medication is the first line of treatment. DIIHA has is estimated to affect one to two people per million worldwide.
In some cases, a drug can cause the immune system to mistakenly think the body's own red blood cells are dangerous, foreign substances. Antibodies then develop against the red blood cells. The antibodies attach to red blood cells and cause them to break down too early. It is known that more than 150 drugs can cause this type of hemolytic anemia. The list includes:
Cephalosporins (a class of antibiotics)
Dapsone
Levodopa
Levofloxacin
Methyldopa
Nitrofurantoin
Nonsteroidal anti-inflammatory drugs (NSAIDs) - among them, the commonly used Diclofenac and Ibuprofen
Phenazopyridine (pyridium)
Quinidine
Signs and symptoms
Initial symptoms of drug-induced autoimmune hemolytic anemia are typically vague and reflect mild, moderate, or severe anemia. Symptoms of DIIHA can manifest within hours to months of the initial drug exposure. DIIHA ranges in severity from severe intravascular hemolysis to milder presentations of extravascular hemolysis. Common symptoms of DIIHA are fatigue, shortness of breath, dizziness, bloody or dark urine, weakness, and palpitations. DIIHA will occasionally present as hemoglobinuria with chills; however, this is quite rare. Patients with DIIHA may appear pale and have jaundice. Hepatomegaly, splenomegaly, and adenopathy have also been observed.
When DIIHA is not recognized promptly it can have life-threatening complications such as hemolysis leading to shock, ischemia, acute respiratory distress syndrome, disseminated intravascular coagulation, and acute renal failure.
Causes
As of 2020, over 130 drugs had been reported to cause DIIHA, and that number will continue to rise as new drugs are developed. Antimicrobials are responsible for 42% of DIIHA cases, making them the most common cause. Nonsteroidal anti-inflammatory drugs cause about 15% of cases and antineoplastic drugs cause around 11%.
Mechanism
The main mechanism of DIIHA is the development of antibodies. Drug-induced antibodies can be classified into two groups, drug-dependent antibodies and drug-independent autoantibodies. Drug-dependent antibodies are common in DIIHA. They require the offending drug to be present in order to bind and lyse cells.
Drug-independent autoantibodies are a less common factor in DIIHA. They are found in cases of drug-induced autoimmune hemolytic anemia caused by beta-lactamase inhibitors and platinum-based chemotherapeutics. These autoantibodies can sometimes bind and react to red blood cells even in the absence of whatever drug triggered the anemia.
Diagnosis
Drug-induced autoimmune hemolytic anemia causes a significant drop in hemoglobin and hematocrit. Occasionally DIIHA can present with mild leukocytosis. In its earlier stages, patients with DIIHA will have low reticulocytes. As DIIHA progresses, reticulocytes increase, leading to an elevated mean corpuscular volume. Indirect bilirubin and lactate dehydrogenase become elevated. Liver function tests occasionally become elevated. In some cases, a peripheral blood smear may show schistocytes, anisocytosis, polychromasia, or poikilocytosis.
Direct antiglobulin testing is the only way to confirm DIIHA. Direct antiglobulin testing can determine whether complement C3 and/or immunoglobulin G is bound to the red blood cell membrane. A positive direct antiglobulin test differentiates immune-mediated hemolytic anemia from a nonimmune-mediated cause. Other situations, such as liver disease, post-transfusion or immunoglobulin administration, renal disease, and malignancy, can cause a positive direct antiglobulin test. If both complement C3 and immunoglobulin G are positive, or if only immunoglobulin G is positive, then warm antibody autoimmune hemolytic anemia must be considered as a differential diagnosis.
Treatment
An appropriate course of treatment for drug-induced autoimmune hemolytic anemia has not yet been established. Once DIIHA has been recognized, the patient must stop whatever drug caused the anemia in order to provide proper treatment. Patients should be given blood transfusions as needed. The use of thromboprophylaxis is encouraged because, despite being anemic, patients are often hypercoagulable. Although corticosteroids have been used to treat DIIHA, it is difficult to determine how much effect corticosteroids actually have on DIIHA.
If drug-independent autoantibodies are involved and stopping the offending agents results in no response then intravenous immunoglobulins and immunosuppressants such as rituximab, azathioprine, cyclophosphamide, cyclosporine, danazol, and mycophenolate can be used. Improvement is typically seen within a few weeks of cessation of the offending drug.
See also
Drug-induced nonautoimmune hemolytic anemia
Warm antibody autoimmune hemolytic anemia
Autoimmune hemolytic anemia
References
Further reading
External links
Acquired hemolytic anemia
Drug-induced diseases
Autoimmune hemolytic anemia | Drug-induced autoimmune hemolytic anemia | Chemistry | 1,264 |
28,385,525 | https://en.wikipedia.org/wiki/Spare%20part | A spare part, spare, service part, repair part, or replacement part, is an interchangeable part that is kept in an inventory and used for the repair or refurbishment of defective equipment/units. Spare parts are an important feature of logistics engineering and supply chain management, often comprising dedicated spare parts management systems.
Spare parts are an outgrowth of the industrial development of interchangeable parts and mass production.
In an industrial environment, spare parts are described in several ways to distinguish key features of various spare parts. The following describes spare part types and their typical functionality.
1. Capital parts are spare parts which, although acknowledged to have a long life or a small chance of failure, would cause a long shutdown of equipment because it would take a long time to get a replacement for them. Capital parts are typically repaired or replaced during planned overhauls/scheduled inspections. As the description implies, capital parts are typically expensive and are depreciated over time.
Examples of capital parts include pumps and motor sets used in industrial plants, or an impeller or rotor required for a pump or motor. This "spare" requirement would be determined by the redundancy of equipment used in the industrial processes.
2. Consumables can be divided into two groups:
Operational consumables are typically consumed during operation and an example of these would be air filters, grease and lubricants, light bulbs, etc. (for a car, it would be washer fluid)
Inspection consumables are typically replaced during planned overhauls/scheduled inspections and an example of these would be fan belt, gaskets, lube oil, oil filters, etc. (for a car, it would be engine oil or transmission oil)
3. Inspection spares or outage spares typically refer to those spare parts used in conjunction with capital parts during planned overhauls/scheduled inspections; they may be reused, but typically are not repairable and are discarded after removal from use if they are damaged. These inspection spares are sometimes mischaracterized as capital spares (versus capital parts) and are also conflated with inspection consumables, which must be replaced at every inspection/outage. (Examples of inspection spares would be bearings and mechanical seals, and large bolts and nuts.)
4. Operational spares typically refer to those spare parts that are used during operation of equipment and would not require planned overhauls/scheduled inspections to replace. In an industrial setting, operational spares would be gauges, valves (solenoid valves, MOVs that are in redundancy), transmitters, I/O boards, small AC/DC power supplies, etc. (For a car, it would be the windshield wipers.)
Classification
In logistics, spare parts can be broadly classified into two groups, repairables and consumables.
Economically, there is a tradeoff between the cost of ordering a replacement part and the cost of repairing a failed part. When the cost of repair becomes a significant percentage of the cost of replacement, it becomes economically favorable to simply order a replacement part. In such cases, the part is said to be "beyond economic repair" (BER), and the percentage associated with this threshold is known as the BER rate. Analysis of economic tradeoffs is formally evaluated using Level of Repair Analysis (LORA).
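As a rough illustration of this threshold logic (not part of any formal Level of Repair Analysis method), the repair-or-replace comparison can be sketched in Python; the 0.65 BER rate and the cost figures below are hypothetical values chosen only for the example:

```python
def beyond_economic_repair(repair_cost, replacement_cost, ber_rate=0.65):
    """Condemn a failed part when repair costs at least ber_rate of a new part."""
    return repair_cost >= ber_rate * replacement_cost

# Hypothetical part: 1200 to replace, 900 to repair -> condemned at a 65% threshold
print(beyond_economic_repair(900, 1200))  # True: order a replacement instead of repairing
```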
Repairable
Repairable parts are parts that are deemed worthy of repair, usually by virtue of economic consideration of their repair cost. Rather than bear the cost of completely replacing a finished product, repairables typically are designed to enable more affordable maintenance by being more modular. That allows components to be more easily removed, repaired, and replaced, enabling cheaper replacement. Spare parts that are needed to support condemnation of repairable parts are known as replenishment spares.
A rotable pool is a pool of repairable spare parts inventory set aside to allow for multiple repairs to be accomplished simultaneously, which can be used to minimize stockout conditions for repairable items.
Consumable
Parts that are not repairable are considered consumable parts. Consumable parts are usually scrapped, or "condemned", when they are found to have failed. Since no attempt at repair is made, for a fixed mean time between failures (MTBF), replacement rates for consumables are higher than for an equivalent item treated as a repairable part. Therefore, consumables tend to be lower-cost items. Examples in heavy machinery include brake oils, hydraulic fluids, and belts.
Because consumables are lower cost and higher volume, economies of scale can be found by ordering in large lot sizes, a so-called economic order quantity.
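Under the standard assumptions of the classic Wilson EOQ model (constant demand, a fixed cost per order, and a linear holding cost), the lot size can be computed as in the following sketch; the demand and cost figures are hypothetical and the formula is the textbook model rather than anything specific to spare-parts practice:

```python
import math

def economic_order_quantity(annual_demand, ordering_cost, holding_cost_per_unit):
    """Classic Wilson EOQ formula: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * ordering_cost / holding_cost_per_unit)

# Hypothetical consumable: 10,000 filters used per year, 50 per order placed,
# 2 per filter per year to hold in stock.
print(round(economic_order_quantity(10_000, 50, 2)))  # -> about 707 filters per order
```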
Commercial classification
From a commercial perspective, spare parts can be classified into three main types:
OEM (Original Equipment Manufacturer) Parts: These parts are produced by the same manufacturer that made the original equipment.
Aftermarket Parts: These are replacement parts made by companies other than the original manufacturer. They can serve as cost-effective substitutes for OEM parts.
Used or Second-Hand Parts: These can be either OEM or aftermarket parts that have been refurbished and resold at a lower price.
Legislation
There is no UK or EU legislation which states that spare parts have to be available for any set period of time, but some trade associations require their members to ensure products are not rendered useless because spare parts are not available. The 'six year rule' in the UK Sale of Goods Act 1979 relates to the time period for enforcing claims that goods were defective when sold, not to whether spare parts are available to repair them, and section 23(3) of the Consumer Rights Act 2015 states that a consumer cannot require a trader to repair or replace goods if "the repair or replacement is impossible", implying that if spare parts are no longer available the consumer's Right to Repair (or to have a spare part supplied) would be lost.
Repair cycle
From the perspective of logistics, a model of the life cycle of parts in a supply chain can be developed. This model, called the repair cycle, consists of functioning parts in use by equipment operators, and the entire sequence of suppliers or repair providers that replenish functional part inventories, either by production or repair, when they have failed. Ultimately, this sequence ends with the manufacturer. This type of model allows demands on a supply system to ultimately be traced to their operational reliability, allowing for analysis of the dynamics of the supply system, in particular, spare parts.
Inventory management
Cannibalization
When stockout conditions occur, cannibalization can result. This is the practice of removing parts or subsystems necessary for repair from another similar device, rather than from inventory. The source system is usually crippled as a result, if only temporarily, in order to allow the recipient device to function properly again. As a result, operational availability is impaired.
Commercial
Industrialization has seen the widespread growth of commercial manufacturing enterprises, such as the automotive industry, and later, the computer industry. The resulting complex systems have evolved modular support infrastructures, with the reliance on auto parts in the automotive industry, and replaceable computer modules known as field-replaceable units (FRUs).
Military
Military operations are significantly affected by logistics operations. The system availability, also known as mission capable rate, of weapon systems and the ability to effect the repair of damaged equipment are significant contributors to the success of military operations. Systems that are in a mission-incapable (MICAP) status due to a lack of spare parts are said to be "awaiting parts" (AWP), also known as not mission capable due to supply (NMCS).
Because of this sensitivity to logistics, militaries have sought to make their logistics operations as effective as possible, focusing effort on operations research and optimal maintenance. Maintenance has been simplified by the introduction of interchangeable modules known as line-replaceable units (LRUs). LRUs make it possible to quickly replace an unserviceable (failed) part with a serviceable (working) replacement. This makes it relatively straightforward to repair complex military hardware, at the expense of having a ready supply of spare parts.
The cost of having serviceable parts available in inventory can be tremendous, as items that are prone to failure may be demanded frequently from inventory, requiring significant inventory levels to avoid depletion. For military programs, the cost of spare inventory can be a significant portion of acquisition cost.
In recent years, the United States Department of Defense (DoD) has advocated the use of performance-based logistics (PBL) contracts to manage costs for support of weapon systems.
See also
Buffer stock scheme
Carrier onboard delivery
Complete knock down
Demand chain
Flight spare
Logistics support analysis
Military surplus
Overstock
Reorder point
Safety stock
Service life (lifespan)
Spare tire
Underway replenishment
Warranty
War Reserve Stock
References
Systems engineering
Costs
Inventory | Spare part | Engineering | 1,790 |
53,399,325 | https://en.wikipedia.org/wiki/Seymour%20Kirkup | Seymour Stocker Kirkup (1788–1880) was an English painter and antiquarian, resident in Italy from 1816.
Life
Born in London, he was the eldest child of Joseph Kirkup, a jeweller and diamond merchant there. He was admitted a student of the Royal Academy in 1809, and obtained a medal in 1811 for a drawing in its antique school. He became at this period acquainted with William Blake and Benjamin Haydon.
About 1816 Kirkup began to suffer from pulmonary weakness, and, after his father's death, visited Italy. He eventually settled there, living some time at Rome, where his friend Charles Eastlake was studying. There he knew John Keats (but, being ill in bed, missed his funeral on 26 February 1821) and in 1822 attended the funeral of Percy Bysshe Shelley. At Florence he lived for many years in a house on the River Arno, adjoining the Ponte Vecchio.
Kirkup became a leader of a literary circle in Florence and took up residence at the Casa Carovana, a palazzo near the Ponte Vecchio. He collected a library, of which a catalogue was printed in 1871, and maintained a copious correspondence. Walter Savage Landor, Robert and Elizabeth Browning, Giovanni Aubrey Bezzi, Edward John Trelawny, and Joseph Severn were his friends. As a keen student of Dante, he was a disciple of Gabriele Rossetti.
On Italian unification, Kirkup was created cavaliere of the Order of Saints Maurice and Lazarus; he subsequently affected the title "barone". He was short, and good-looking as a young man; in later life, eccentric in his dress and habits, and deaf. He was a believer in spiritualism, and a follower of the medium Daniel Dunglas Home.
Kirkup died at 4 Via Scali del Ponte Nuovo, Livorno, where he had lived since 1872, on 3 January 1880, and was buried on 5 January in the British cemetery there.
Works
Kirkup was a capable artist, but practised painting as a dilettante. He sent to the Royal Academy in 1833 a picture "Cassio", and in 1836 a lady's portrait. He also published etchings. He drew many portraits of his friends, including Trelawny and the journalist John Scott, and in 1844 made a self-portrait.
In 1840 Kirkup, Bezzi, and the American Henry Wilde, had permission to search for the portrait of Dante, painted according to tradition by Giotto, in the chapel of the Palazzo del Podestà in Florence. On 21 July 1840 they found it, and Kirkup made a drawing and tracing, before restoration work in 1841. The Arundel Society issued a reproduction from Kirkup's sketch, which was also engraved by P. Lasinio. Kirkup gave the tracing to Rossetti, who handed it on to his son Dante Gabriel Rossetti, and it was sold after the latter's death. Kirkup made some of the designs for Lord Vernon's edition of Dante's works.
Family
Kirkup, by his first wife, Regina Ronti of Florence, who died 30 October 1856, aged 19, had a daughter, Imogene, who married Teodoro Cioni of Livorno and died in 1878, leaving two children. On 16 February 1875, at the age of 87, Kirkup married Paolina Carboni, aged 22, daughter of Pasquale Carboni, English vice-consul at Rome. After he died she married Signor Morandi of Bologna.
Notes
External links
Attribution
1788 births
1880 deaths
Artists from London
English antiquarians
English painters
Draughtsmen
Recipients of the Order of Saints Maurice and Lazarus | Seymour Kirkup | Engineering | 754 |
58,681,644 | https://en.wikipedia.org/wiki/Johanna%20Stachel | Johanna Barbara Stachel (born 3 December 1954 in Munich) is a German nuclear physicist. She is a professor in experimental physics at the University of Heidelberg (Ruprecht-Karls-Universität Heidelberg). Stachel is a former president of the German Physical Society (DPG).
Early life and education
Johanna Stachel completed secondary education in 1972 at the Spohn Gymnasium in Ravensburg. She studied physics and chemistry at Johannes Gutenberg University Mainz and the Swiss Federal Institute of Technology (ETH Zürich) and received a degree from the Johannes Gutenberg University Mainz in 1978. In 1982 she obtained a doctorate in physics from the same university.
Career
From 1983 to 1996, Stachel studied and worked at the State University of New York (SUNY) at Stony Brook and Brookhaven National Laboratory, where she met her future husband, professor Peter Braun-Munzinger. In 1996 she was named professor at the University of Heidelberg. Stachel was spokesperson of the CERN SPS experiment CERES/NA45 and directed the development of the ALICE Transition Radiation Detector.
Stachel was elected President of the German Physical Society for a two-year term starting in 2012. Her two primary priorities as president were, first, to defend basic research by showing its importance and, second, to promote physics education in schools by improving the training of physics teachers and raising standards across German schools.
During her career, she has delivered over 150 lectures at international workshops and conferences and has participated in over 100 seminars and colloquia.
Research interests
Stachel's research focuses on understanding relativistic heavy-ion collisions and quark-gluon plasma. She is member of the LHC ALICE Collaboration at CERN in Geneva. Stachel is also interested in developing detectors that are needed to carry out these experiments in particle physics.
Offices and honorary offices
Among her academic responsibilities are:
From 2003 to 2005 she was dean of the faculty of Physics and Astronomy of the University of Heidelberg. Until 2012, she continued to serve as vice-dean.
Associate editor for the journal Nuclear Physics A.
Membership of the Advisory Board of the Wilhelm and Else Heraeus Foundation.
Membership of the University Council of the Vienna University of Technology starting in 2018 for a 5-year term.
International councilor with the American Physical Society from 2016–2019.
On 28 March 2014 she received the honorary membership of the Physikalischen Verein, Frankfurt, where she is listed along with Heinrich Hertz, Albert Einstein and Otto Stern.
Honors and awards
1986: Sloan Research Fellowship
1988: Presidential Young Investigator Award
1996: Fellow of the American Physical Society
1998: Member of the Berlin-Brandenburg Academy of Sciences and Humanities
1999: Order of Merit of the Federal Republic of Germany
2001: Lautenschläger Research Prize
2012: Member of the Academia Europaea
2014: Lise Meitner Prize
2014: Full member of the Heidelberg Academy of Sciences and Humanities
2015: Member of the Academy of Sciences Leopoldina
2017/18: Lise Meitner Lecture in Vienna
2019: Stern-Gerlach-Medal
2021: Officers Cross of the Order of Merit of the Federal Republic of Germany
References
External links
Biography at American Physical Society
Johanna Stachel publications indexed by Inspire-HEP
1954 births
Fellows of the American Physical Society
20th-century German physicists
German women physicists
Academic staff of Heidelberg University
Living people
Members of Academia Europaea
Members of the German National Academy of Sciences Leopoldina
Particle physicists
People associated with CERN
Officers Crosses of the Order of Merit of the Federal Republic of Germany
Women nuclear physicists
ETH Zurich alumni
21st-century German physicists
Presidents of the German Physical Society
Johannes Gutenberg University Mainz alumni | Johanna Stachel | Physics | 742 |
70,811,442 | https://en.wikipedia.org/wiki/Pauljensenia%20hongkongensis | Pauljensenia hongkongensis is a Gram-positive, strictly anaerobic and non-spore-forming species of bacteria from the family Actinomycetaceae.
References
Actinomycetales
Monotypic bacteria genera
Bacteria described in 2018 | Pauljensenia hongkongensis | Biology | 54 |
43,742,110 | https://en.wikipedia.org/wiki/Baumgartner%27s%20axiom | In mathematical set theory, Baumgartner's axiom (BA) can be one of three different axioms introduced by James Earl Baumgartner.
A subset of the real line is said to be ℵ1-dense if every two points are separated by exactly ℵ1 other points, where ℵ1 is the smallest uncountable cardinality. This would be true for the real line itself under the continuum hypothesis. An axiom introduced by Baumgartner states that all ℵ1-dense subsets of the real line are order-isomorphic, providing a higher-cardinality analogue of Cantor's isomorphism theorem that countable dense subsets are isomorphic. Baumgartner's axiom is a consequence of the proper forcing axiom. It is consistent with a combination of ZFC, Martin's axiom, and the negation of the continuum hypothesis, but not implied by those hypotheses.
Another axiom introduced by Baumgartner states that Martin's axiom for partially ordered sets MA_P(κ) is true for all partially ordered sets P that are countably closed, well met and ℵ1-linked and all cardinals κ less than 2^ℵ1.
Baumgartner's axiom A is an axiom for partially ordered sets also introduced by Baumgartner. A partial order (P, ≤) is said to satisfy axiom A if there is a family ≤n of partial orderings on P for n = 0, 1, 2, ... such that
≤0 is the same as ≤
If p ≤n+1 q then p ≤n q
If there is a sequence pn with pn+1 ≤n pn then there is a q with q ≤n pn for all n.
If I is a pairwise incompatible subset of P then for all p and for all natural numbers n there is a q such that q ≤n p and the number of elements of I compatible with q is countable.
References
Axioms of set theory | Baumgartner's axiom | Mathematics | 392 |
44,425,089 | https://en.wikipedia.org/wiki/Decision%20Model%20and%20Notation | In business analysis, the Decision Model and Notation (DMN) is a standard published by the Object Management Group. It is a standard approach for describing and modeling repeatable decisions within organizations to ensure that decision models are interchangeable across organizations.
The DMN standard provides the industry with a modeling notation for decisions that will support decision management and business rules. The notation is designed to be readable by business and IT users alike. This enables various groups to effectively collaborate in defining a decision model:
the business people who manage and monitor the decisions,
the business analysts or functional analysts who document the initial decision requirements and specify the detailed decision models and decision logic,
the technical developers responsible for the automation of systems that make the decisions.
The DMN standard can be effectively used standalone but it is also complementary to the BPMN and CMMN standards. BPMN defines a special kind of activity, the Business Rule Task, which "provides a mechanism for the process to provide input to a business rule engine and to get the output of calculations that the business rule engine might provide" that can be used to show where in a BPMN process a decision defined using DMN should be used.
DMN has been made a standard for Business Analysis according to BABOK v3.
Elements of the standard
The standard includes the following main elements
Decision Requirements Diagrams that show how the elements of decision-making are linked into a dependency network.
Decision tables to represent how each decision in such a network can be made.
Business context for decisions such as the roles of organizations or the impact on performance metrics.
A Friendly Enough Expression Language (FEEL) that can be used to evaluate expressions in a decision table and other logic formats.
Use cases
The standard identifies three main use cases for DMN
Defining manual decision making
Specifying the requirements for automated decision-making
Representing a complete, executable model of decision-making
Benefits
Using the DMN standard will improve business analysis and business process management, since
other popular requirement management techniques such as BPMN and UML do not handle decision making
there is growth in projects using business rule management systems (BRMS), which allow faster changes
it facilitates better communications between business, IT and analytic roles in a company
it provides an effective requirements modeling approach for Predictive Analytics projects and fulfills the need for "business understanding" in methodologies for advanced analytics such as CRISP-DM
it provides a standard notation for decision tables, the most common style of business rules in a BRMS
Relationship to BPMN
DMN has been designed to work with BPMN. Business process models can be simplified by moving process logic into decision services. DMN is a separate domain within the OMG that provides an explicit way to connect to processes in BPMN. Decisions in DMN can be explicitly linked to processes and tasks that use the decisions. This integration of DMN and BPMN has been studied extensively. DMN expects that the logic of a decision will be deployed as a stateless, side-effect free Decision Service. Such a service can be invoked from a business process and the data in the process can be mapped to the inputs and outputs of the decision service.
DMN BPMN example
As mentioned, BPMN is a related OMG Standard for process modeling. DMN complements BPMN, providing a separation of concerns between the decision and the process. The example here describes a BPMN process and DMN DRD (Decision Requirements Diagram) for onboarding a bank customer. Several decisions are modeled and these decisions will direct the process's response.
New bank account process
In the BPMN process model shown in the figure, a customer makes a request to open a new bank account. The account application provides the account representative with all the information needed to create an account and provide the requested services. This includes the name, address and various forms of identification. In the next steps of the work flow, the 'Know Your Customer' (KYC) services are called.
In the 'KYC' services, the name and address are validated, followed by a check against the international criminal database (Interpol) and the database of politically exposed persons (PEP). A PEP is a person who is either entrusted with a prominent political position or is a close relative of such a person. Deposits from persons on the PEP list carry a heightened risk of involving corrupt funds. This is shown as two services on the process model. Anti-money-laundering (AML) regulations require these checks before the customer account is certified.
The results of these services plus the forms of identification are sent to the Certify New Account decision. This is shown as a 'rule' activity, verify account, on the process diagram. If the new customer passes certification, then the account is classified into onboarding for Business Retail, Retail, Wealth Management and High Value Business. Otherwise the customer application is declined. The Classify New Customer Decision classifies the customer.
If the verify-account process returns a result of 'Manual' then the PEP or the Interpol check returned a close match. The account representative must visually inspect the name and the application to determine if the match is valid and accept or decline the application.
Certify new account decision
An account is certified for opening if the individual's address is verified, if valid identification is provided, and if the applicant is not on a list of criminals or politically exposed persons. These are shown as sub-decisions below the 'certify new account' decision. The account verification service provides a 100% match of the applicant's address.
For identification to be valid, the customer must provide a driver's license, passport or government issued ID.
The checks against the PEP and Interpol lists are 'fuzzy' matches and return matching score values. Scores above 85 are considered a 'match', and scores between 65 and 85 require a 'manual' screening process. People who match either of these lists are rejected by the account application process. If there is a partial match, with a score between 65 and 85, against the Interpol or PEP list, then the certification is set to manual and an account representative performs a manual verification of the applicant's data. These rules are reflected in the figure below, which presents the decision table for whether to pass the provided name for the list checks.
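A minimal sketch of this decision logic in Python follows (it is not generated by any DMN tool): the 65/85 score thresholds and the valid identification types mirror the rules described above, while the function name, the string labels, and the return values are arbitrary choices made for the example:

```python
def certify_new_account(address_verified, id_type, interpol_score, pep_score):
    """Return 'Certified', 'Manual' or 'Declined' for a new account application."""
    valid_ids = ("drivers license", "passport", "government id")
    if not address_verified or id_type not in valid_ids:
        return "Declined"
    worst = max(interpol_score, pep_score)
    if worst > 85:           # treated as a match against a watch list
        return "Declined"
    if 65 <= worst <= 85:    # partial match: account representative reviews manually
        return "Manual"
    return "Certified"

print(certify_new_account(True, "passport", interpol_score=12, pep_score=70))  # Manual
```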
Client category
The client's on-boarding process is driven by what category they fall in. The category is decided by the:
Type of client, business or private
The size of the funds on deposit
And the estimated net worth
This decision is shown below:
There are 6 business rules that determine the client's category and these are shown in the decision table here:
Summary example
In this example, the outcome of the 'Verify Account' decision directed the responses of the new account process. The same is true for the 'Classify Customer' decision. By adding or changing the business rules in the tables, one can easily change the criteria for these decisions and control the process differently.
Modeling is a critical aspect of improving an existing process or addressing a business challenge. Modeling is generally done by a team of business analysts, IT personnel, and modeling experts. The expressive modeling capabilities of BPMN allow business analysts to understand the functions of the activities of the process. Now, with the addition of DMN, business analysts can construct an understandable model of complex decisions. Combining BPMN and DMN yields a very powerful combination of models that work synergistically to simplify processes.
Relationship to decision mining and process mining
Automated discovery techniques that infer decision models from process execution data have been proposed as well. Here, a DMN decision model is derived from a data-enriched event log, along with the process that uses the decisions. In doing so, decision mining complements process mining with traditional data mining approaches.
cDMN extension
Constraint Decision Model and Notation (cDMN) is a formal notation for expressing knowledge in a tabular, intuitive format.
It extends DMN with constraint reasoning and related concepts while aiming to retain the user-friendliness of the original.
cDMN is also meant to express other problems besides business modeling, such as complex component design.
It extends DMN in four ways:
Constraint modelling (see Constraint programming)
Adding expressive data representation, such as typed predicates and functions (similar to First-order logic)
Data tables, in which each entry represents a different problem instance
Quantification
Due to these additions, cDMN models can express more complex problems. Furthermore, they can also express some DMN models in more compact, less-convoluted ways.
Unlike DMN, cDMN is not deterministic, in the sense that a set of input values could have multiple different solutions.
Indeed, where a DMN model always defines a single solution, a cDMN model defines a solution space.
Usage of cDMN models can also be integrated in Business Process Model and Notation process models, just like DMN.
Example
As an example, consider the well-known map coloring or Graph coloring problem.
Here, we wish to color a map in such a way that no bordering countries share the same color.
The constraint table shown in the figure (as denoted by its E* hit policy in the top-left corner) expresses this logic.
It is read as "For each country c1 and country c2, it holds that if they are different countries which border each other, then the color of c1 is not the color of c2."
Here, the first two columns introduce two quantifiers, both of type country, which serve as universal quantifiers.
In the third column, the 2-ary predicate borders is used to express when two countries have a shared border.
Finally, the last column uses the 1-ary function color of, which maps each country to a color.
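The same constraint can be sketched outside cDMN, for instance as a brute-force search in Python; the four "countries", their border relation, and the three colors below are hypothetical example data, and a real cDMN model would be handed to a reasoning engine rather than solved by enumeration as here:

```python
from itertools import product

countries = ["BE", "NL", "DE", "FR"]                                   # hypothetical instance
borders = {("BE", "NL"), ("BE", "DE"), ("BE", "FR"), ("NL", "DE"), ("DE", "FR")}
colors = ["red", "green", "blue"]

def respects_constraint(assignment):
    """The cDMN constraint: two different bordering countries never share a color."""
    return all(assignment[a] != assignment[b] for a, b in borders)

solutions = [dict(zip(countries, combo))
             for combo in product(colors, repeat=len(countries))
             if respects_constraint(dict(zip(countries, combo)))]
print(len(solutions), solutions[0])  # several valid colorings -> a solution space, not one answer
```

The multiple solutions printed at the end illustrate the remark above that a cDMN model defines a solution space rather than a single deterministic outcome.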
References
External links
DMN specifications published by Object Management Group
DMN Technology Capability Kit: Test platform for evaluating DMN standard conformance of DMN software products
cDMN on readthedocs.io
Enterprise modelling
Diagrams
Decision-making
Rule engines
Analytics
Business analysis
Modeling languages | Decision Model and Notation | Engineering | 2,063 |
29,852,649 | https://en.wikipedia.org/wiki/Stockmayer%20potential | The Stockmayer potential is a mathematical model for representing the interactions between pairs of atoms or molecules. It is defined as a Lennard-Jones potential with a point electric dipole moment.
A Stockmayer liquid consists of a collection of spheres with point dipoles embedded at the centre of each. These spheres interact both by Lennard-Jones and dipolar interactions. In the absence of the point dipoles, the spheres face no rotational friction, and the translational dynamics of such LJ spheres have been studied in detail. This system, therefore, provides a simple model where the only source of rotational friction is dipolar interactions.
The interaction potential may be written as
φ(r) = 4ε [ (σ/r)^12 − (σ/r)^6 ] − (μ1 μ2 / (4π ε0 r^3)) ξ
where the parameters ε and σ are related to dispersion strength and particle size respectively, just as in the Mie potential or Lennard-Jones potential, which is the source of the first term, μi is the dipole moment of species i, ε0 is the vacuum permittivity, and ξ is a parameter describing the relative orientation of the two dipoles, which may vary between -2 and 2.
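A minimal sketch of evaluating this potential numerically is given below. Reduced units with 4π ε0 = 1 are assumed for simplicity (this is a convention chosen here, not part of the definition above), ξ is written out in terms of the polar angles of the two dipoles relative to the intermolecular axis, and the parameter values in the example call are arbitrary:

```python
import math

def stockmayer(r, eps, sigma, mu1, mu2, theta1, theta2, phi):
    """Lennard-Jones term plus point-dipole term of the Stockmayer potential.

    theta1, theta2 are the angles between each dipole and the line joining the
    two particles; phi is the azimuthal angle between the two dipole planes.
    """
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    xi = (2.0 * math.cos(theta1) * math.cos(theta2)
          - math.sin(theta1) * math.sin(theta2) * math.cos(phi))
    dipole = -mu1 * mu2 * xi / r ** 3   # reduced units: 4*pi*eps0 = 1
    return lj + dipole

# Head-to-tail alignment (theta1 = theta2 = 0) gives xi = 2, the most attractive orientation.
print(stockmayer(r=1.2, eps=1.0, sigma=1.0, mu1=1.0, mu2=1.0, theta1=0.0, theta2=0.0, phi=0.0))
```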
References
M. E. Van Leeuwe "Deviation from corresponding-states behaviour for polar fluids", Molecular Physics 82 pp. 383-392 (1994)
Reinhard Hentschke, Jörg Bartke, and Florian Pesth "Equilibrium polymerization and gas-liquid critical behavior in the Stockmayer fluid", Physical Review E 75 011506 (2007)
Quantum mechanical potentials
Theoretical chemistry
Molecular physics | Stockmayer potential | Physics,Chemistry | 292 |
34,984,725 | https://en.wikipedia.org/wiki/AISoy1 | AISoy1, developed by the Spanish company AISoy Robotics, is a pet-robot considered to be one of the first emotional-learning robots for the consumer market. Its software platform allows it to interpret stimuli from its sensors network, in order to gain information from its environment and take decisions based on logical and emotional criteria. Unlike other previously developed robots, AISoy1 has not a single collection of programmed answers, but its behavior is dynamic and results unpredictable. The dialogue system and the advanced recognition system allow it to interact with humans as well as robots.
Features
The robot is based on the Linux operating system with a Texas Instruments OMAP 3503 (ARM Cortex A8) processor at 600 MHz. It has a 1 GB NAND FLASH memory, 256 MB MOBILE DDR SDRAM, and a microSD card slot to increase its memory.
AISoy1 has various sensors, including temperature, 3D orientation, environmental brightness, touch and force sensors. It is equipped with a radio communication module as well as an integrated 1Mpx camera, allowing it to visually recognize users.
Human Interaction
The AISoy1 robots develop a unique personality based on their past experiences with the user. As they develop a relationship with their user, they improve their speech ability and feeling capacities. They are able to display up to 14 different emotional states.
AISoy1 can be commanded to perform different activities through voice commands, such as engaging in games, playing music, or saving information. Users wanting to extend the functionality of AISoy1 can do so through the platform DIA, which allows quick creation of programs through a graphical tool. More advanced users can integrate hardware and develop complex programs through the SDK from AISoy.
References
External links
Official Website of AISoy Robotics (Spanish and English).
Social robots
Robotic animals
2010 robots
Robots of Spain | AISoy1 | Technology,Biology | 374 |
11,548,952 | https://en.wikipedia.org/wiki/Censoring%20%28statistics%29 | In statistics, censoring is a condition in which the value of a measurement or observation is only partially known.
For example, suppose a study is conducted to measure the impact of a drug on mortality rate. In such a study, it may be known that an individual's age at death is at least 75 years (but may be more). Such a situation could occur if the individual withdrew from the study at age 75, or if the individual is currently alive at the age of 75.
Censoring also occurs when a value occurs outside the range of a measuring instrument. For example, a bathroom scale might only measure up to 140 kg. If a 160 kg individual is weighed using the scale, the observer would only know that the individual's weight is at least 140 kg.
The problem of censored data, in which the observed value of some variable is partially known, is related to the problem of missing data, where the observed value of some variable is unknown.
Censoring should not be confused with the related idea of truncation. With censoring, observations result either in knowing the exact value that applies, or in knowing that the value lies within an interval. With truncation, observations never result in values outside a given range: values in the population outside the range are never seen, or never recorded if they are seen. Note that in statistics, truncation is not the same as rounding.
Types
Left censoring – a data point is below a certain value but it is unknown by how much.
Interval censoring – a data point is somewhere on an interval between two values.
Right censoring – a data point is above a certain value but it is unknown by how much.
Type I censoring occurs if an experiment has a set number of subjects or items and stops the experiment at a predetermined time, at which point any subjects remaining are right-censored.
Type II censoring occurs if an experiment has a set number of subjects or items and stops the experiment when a predetermined number are observed to have failed; the remaining subjects are then right-censored.
Random (or non-informative) censoring is when each subject has a censoring time that is statistically independent of their failure time. The observed value is the minimum of the censoring and failure times; subjects whose failure time is greater than their censoring time are right-censored.
Interval censoring can occur when observing a value requires follow-ups or inspections. Left and right censoring are special cases of interval censoring, with the beginning of the interval at zero or the end at infinity, respectively.
Estimation methods for using left-censored data vary, and not all methods of estimation may be applicable to, or the most reliable, for all data sets.
A common misconception with time interval data is to classify intervals whose start time is unknown as left censored. In these cases we have a lower bound on the time interval, thus the data is right censored (despite the fact that the missing start point is to the left of the known interval when viewed as a timeline).
Analysis
Special techniques may be used to handle censored data. Tests with specific failure times are coded as actual failures; censored data are coded for the type of censoring and the known interval or limit. Special software programs (often reliability oriented) can conduct a maximum likelihood estimation for summary statistics, confidence intervals, etc.
Epidemiology
One of the earliest attempts to analyse a statistical problem involving censored data was Daniel Bernoulli's 1766 analysis of smallpox morbidity and mortality data to demonstrate the efficacy of vaccination. An early paper to use the Kaplan–Meier estimator for estimating censored costs was Quesenberry et al. (1989); however, this approach was found by Lin et al. to be invalid unless all patients accumulated costs with a common deterministic rate function over time, and they proposed an alternative estimation technique known as the Lin estimator.
Operating life testing
Reliability testing often consists of conducting a test on an item (under specified conditions) to determine the time it takes for a failure to occur.
Sometimes a failure is planned and expected but does not occur: operator error, equipment malfunction, test anomaly, etc. The test result was not the desired time-to-failure but can be (and should be) used as a time-to-termination. The use of censored data is unintentional but necessary.
Sometimes engineers plan a test program so that, after a certain time limit or number of failures, all other tests will be terminated. These suspended times are treated as right-censored data. The use of censored data is intentional.
An analysis of the data from replicate tests includes both the times-to-failure for the items that failed and the time-of-test-termination for those that did not fail.
Censored regression
An earlier model for censored regression, the tobit model, was proposed by James Tobin in 1958.
Likelihood
The likelihood is the probability or probability density of what was observed, viewed as a function of parameters in an assumed model. To incorporate censored data points in the likelihood, the censored data points are represented by their probability as a function of the model parameters, i.e. by a function of the CDF(s) instead of the density or probability mass.
The most general censoring case is interval censoring: Pr(a < x ≤ b) = F(b) − F(a), where F(x) is the CDF of the probability distribution, and the two special cases are:
left censoring: Pr(x ≤ a) = F(a)
right censoring: Pr(x > b) = 1 − F(b)
For continuous probability distributions: Pr(a < x ≤ b) = Pr(a < x < b)
Example
Suppose we are interested in survival times, T1, T2, ..., Tn, but we don't observe Ti for all i. Instead, we observe
Ui, with Ui = Ti and δi = 1 if Ti is actually observed, and
Ui, with Ui = Ci and δi = 0 if all we know is that Ti is longer than Ci.
When Ti > Ci, Ci is called the censoring time.
If the censoring times are all known constants, then the likelihood is
L = ∏_{i: δi = 1} f(Ui) × ∏_{i: δi = 0} S(Ui)
where f(Ui) = the probability density function evaluated at Ui,
and S(Ui) = the probability that Ti is greater than Ui, called the survival function.
This can be simplified by defining the hazard function, the instantaneous force of mortality, as
λ(Ui) = f(Ui) / S(Ui)
so
f(Ui) = λ(Ui) S(Ui).
Then
L = ∏_{i: δi = 1} λ(Ui) × ∏_i S(Ui).
For the exponential distribution, this becomes even simpler, because the hazard rate, λ, is constant, and S(u) = exp(−λu). Then:
L(λ) = λ^k exp(−λ Σ_i Ui),
where k = Σ_i δi is the number of observed failures.
From this we easily compute λ̂, the maximum likelihood estimate (MLE) of λ, as follows:
log L(λ) = k log λ − λ Σ_i Ui.
Then
d(log L(λ))/dλ = k/λ − Σ_i Ui.
We set this to 0 and solve for λ to get:
λ̂ = k / Σ_i Ui.
Equivalently, the mean time to failure is:
1/λ̂ = (Σ_i Ui) / k.
This differs from the standard MLE for the exponential distribution in that the censored observations are considered only in the numerator.
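A minimal numerical sketch of this estimator follows; the data are simulated purely for illustration, with an assumed true rate of 0.5 and a fixed censoring time of 3.0 (both arbitrary choices):

```python
import random

random.seed(0)
lam_true, n, censor_time = 0.5, 1000, 3.0

# Simulate right-censored exponential survival times with a fixed censoring time.
t = [random.expovariate(lam_true) for _ in range(n)]
u = [min(ti, censor_time) for ti in t]                 # observed times U_i
delta = [1 if ti <= censor_time else 0 for ti in t]    # 1 = failure observed, 0 = censored

lam_hat = sum(delta) / sum(u)   # MLE: observed failures divided by total observed time
print(round(lam_hat, 3))        # close to the true rate of 0.5
```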
See also
Data analysis
Detection limit
Imputation (statistics)
Inverse probability weighting
Sampling bias
Saturation arithmetic
Survival analysis
Winsorising
References
Further reading
Blower, S. (2004), "D. Bernoulli's 'An attempt at a new analysis of the mortality caused by smallpox and of the advantages of inoculation to prevent it'", Reviews of Medical Virology, 14: 275–288
Bagdonavicius, V., Kruopis, J., Nikulin, M.S. (2011), "Non-parametric Tests for Censored Data", London: ISTE/Wiley.
External links
"Engineering Statistics Handbook", NIST/SEMATEK,
Statistical data types
Survival analysis
Reliability engineering
Unknown content | Censoring (statistics) | Engineering | 1,528 |
60,921,299 | https://en.wikipedia.org/wiki/Hasse%E2%80%93Schmidt%20derivation | In mathematics, a Hasse–Schmidt derivation is an extension of the notion of a derivation. The concept was introduced by .
Definition
For a (not necessarily commutative nor associative) ring B and a B-algebra A, a Hasse–Schmidt derivation is a map of B-algebras
D : A → A[[t]]
taking values in the ring A[[t]] of formal power series with coefficients in A. This definition is found in several places in the literature, one of which also contains the following example: for A being the ring of infinitely differentiable functions (defined on, say, Rn) and B = R, the map
f ↦ Σ_{k≥0} (f^(k) / k!) t^k
is a Hasse–Schmidt derivation, as follows from applying the Leibniz rule repeatedly.
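Writing D(a) = Σ_{n≥0} D_n(a) t^n for the components of such a map, the multiplicativity of D can be unpacked coefficient by coefficient; the identity below is a standard reformulation rather than a statement taken from any particular source cited here:

```latex
D_n(ab) \;=\; \sum_{i+j=n} D_i(a)\, D_j(b), \qquad a, b \in A,\ n \ge 0.
```

In particular, when D_0 is the identity map of A, the first component satisfies D_1(ab) = D_1(a) b + a D_1(b), i.e. it is an ordinary B-derivation of A.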
Equivalent characterizations
It has been shown that a Hasse–Schmidt derivation is equivalent to an action of the bialgebra of noncommutative symmetric functions in countably many variables Z1, Z2, ...: the part of D which picks out the coefficient of t^i is the action of the indeterminate Zi.
Applications
Hasse–Schmidt derivations on the exterior algebra of some B-module M have also been studied; basic properties of derivations in this context lead to a conceptual proof of the Cayley–Hamilton theorem.
References
Abstract algebra | Hasse–Schmidt derivation | Mathematics | 258 |
36,862,942 | https://en.wikipedia.org/wiki/8%20Vulpeculae | 8 Vulpeculae is star located about 457 light years away in the northern constellation of Vulpecula. It lies just from Alpha Vulpeculae and the two form an optical double. 8 Vulpeculae is visible to the naked eye as a faint, yellow-orange hued star with an apparent visual magnitude of 5.82. It is moving closer to the Earth with a heliocentric radial velocity of −29 km/s.
This is an aging giant star with a stellar classification of K0 III, which indicates it has exhausted the hydrogen supply at its core and evolved away from the main sequence. It is 324 million years old with three times the mass of the Sun and has expanded to 14 times the Sun's radius. The star is radiating 100 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,915 K.
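These figures are mutually consistent under the Stefan–Boltzmann law (L proportional to R²T⁴); a quick check in Python, where the solar effective temperature of 5,772 K is an assumed reference value not stated above:

```python
R_ratio = 14          # radius in solar radii, from above
T_star = 4915.0       # effective temperature in kelvin, from above
T_sun = 5772.0        # assumed solar effective temperature

# Stefan-Boltzmann: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4
L_ratio = R_ratio**2 * (T_star / T_sun)**4
print(round(L_ratio))  # ~103, in line with the quoted 100 solar luminosities
```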
References
External links
K-type giants
Vulpecula
Durchmusterung objects
Flamsteed objects
183491
095785
7406 | 8 Vulpeculae | Astronomy | 218 |
37,510,151 | https://en.wikipedia.org/wiki/Beckley%20Furnace%20Industrial%20Monument | The Beckley Furnace Industrial Monument is a state-owned historic site preserving a 19th-century iron-making blast furnace on the north bank of the Blackberry River in the town of North Canaan, Connecticut. The site became a state park in 1946; it was added to the National Register of Historic Places in 1978.
Description
The Beckley Furnace stands in what is now a rural area of central North Canaan, on the south side of Lower Road just west of its junction with Furnace Hill Road. The site spans the Blackberry River, with the main blast furnace and its developed features on the north bank. The main furnace is a large stone structure, square at the base, with sides that slope gradually inward toward the top. It is set near the road, which runs at a high elevation above a stone retaining wall. A short distance upriver is the dam, a stone structure with a penstock providing access to a turbine chamber. Further downstream are the remnants of two more dams and furnaces, and there are large piles of slag mounded on the south side of the river. No longer extant are wood-frame buildings that would have been needed to support the operations of the furnace.
History
The furnace was built for the production of pig iron by John Adam Beckley in 1847 and continued in operation until 1919. It was the second of three working blast furnaces built at the site; a fourth furnace was under construction in the early years of the 20th century but was never put in operation. The works successfully adapted to changing conditions, but was unable to compete on scale, and closed in the early 1920s. The stack was restored by the state in 1999. The dam built on the Blackberry River to provide power for the furnace and other industrial operations was repaired by the state in 2010.
Activities and amenities
The state park offers picnicking and pond fishing. Tours of the furnace are offered periodically by Friends of Beckley Furnace.
See also
National Register of Historic Places listings in Litchfield County, Connecticut
References
External links
Beckley Furnace Industrial Monument Connecticut Department of Energy and Environmental Protection
Friends of Beckley Furnace
State parks of Connecticut
Parks in Litchfield County, Connecticut
Protected areas established in 1946
Buildings and structures in Litchfield County, Connecticut
Furnaces
Industrial buildings completed in 1847
1847 establishments in Connecticut
National Register of Historic Places in Litchfield County, Connecticut
Industrial buildings and structures on the National Register of Historic Places in Connecticut
North Canaan, Connecticut
1946 establishments in Connecticut | Beckley Furnace Industrial Monument | Engineering | 487 |
31,170,897 | https://en.wikipedia.org/wiki/%22Dixeya%22%20nasuta | "Dixeya" nasuta is an extinct species of gorgonopsian (predatory therapsids, related to modern mammals) that lived during the Late Permian of East Africa, known from fossils found in what is now Tanzania. The species has a complicated taxonomic history, it was originally named as a second species of the genus Dixeya which is now considered a junior synonym of Aelurognathus. "D." nasuta itself, however, was not moved to Aelurognathus, and although it was instead tentatively referred to Arctognathus at first it has since been recognised to not belong to this genus either. This situation leaves "Dixeya" nasuta without a formal genus name. It was proposed to belong to a new distinct genus, named "Njalila", that was informally proposed for the species in a PhD thesis, but this name has not yet been formally published and is currently a nomen nudum. "D." nasuta has been characterised from other gorgonopsians by a combination of its straight snout profile, upturned and "pinched" nose, and curved jaw margin. The fossil record of the Usili Formation shows that the taxon was contemporary with many other gorgonopsians, even alongside large representatives such as Inostrancevia and rubidgeines.
History
"Dixeya" nasuta was originally named by German paleontologist Friedrich von Huene in 1950 from two skulls collected from the Usili Formation (formerly known as K6 or the Kawinga Formation) in the Ruhuhu Basin of Tanzania; GPIT/RE/7118 (designated the holotype specimen) and GPIT/RE/7119. Von Huene had erected these specimens as second species of Dixeya, a genus coined in 1927 by Sidney H. Haughton for its type species D. quadrata. In 1970, French paleontologist Denise Sigogneau-Russell re-assigned D. quadrata to Aelurognathus (as A. quadrata) in her systematic revision of gorgonopsian taxonomy, thus synonymising the two genera. However, she did not consider von Huene's D. nasuta to belong to Aelurognathus, and instead tentatively referred the species to Arctognathus as Arctognathus? nasuta. Furthermore, Sigogneau (1970) only considered the holotype of D. quadrata from Malawi to belong to Aelurognathus, and she did not consider two additional specimens referred to D. quadrata by von Huene in 1950 from Tanzania (GPIT/RE/7120 and GPIT/RE/7121) to belong to the same species. Instead, she suggested that they may also be referable to Arctognathus? nasuta—in addition to three other Tanzanian specimens (MZC 886, MZC 887, and MZC 876) referred to D. quadrata by Francis Rex Parrington in 1955. Nonetheless, Sigogneau was cautious in the referral of "D." nasuta to Arctognathus, and had previously acknowledged that the referral was not a resolved matter, especially in her opinion that the roof of the skull of "D." nasuta was not well preserved enough for comparison.
Later researchers agreed with Sigogneau's doubts and have acknowledged that "D." nasuta does not compare well to Arctognathus. In Eva Gebauer's unpublished 2007 Ph.D. thesis, she argued that "D." nasuta was distinct from other gorgonopsians and belonged to a new genus for which she informally proposed the nomen nudum "Njalila" (named after the Njalila, a tributary of the Ruhuhu River in Tanzania). Gebauer further proposed a novel second species of this genus, "N. insigna", based on a skull previously referred to another gorgonopsian, Scylacops capensis. Gebauer differentiated "N. insigna" from "N." nasuta by possessing thicker arches between its skull openings, a posteriorly wider skull, and a slightly more rounded snout profile. In 2015, paleontologist Christian F. Kammerer also agreed that "D." nasuta was not a species of Arctognathus, as well as that the Tanzanian specimens of D. quadrata likely belonged to this same species. However, he was more cautious regarding their taxonomy, noting that Gebauer's proposal of a novel genus required further study of the material and needed to be more rigorously phylogenetically tested first. From this, he urged that the taxon should be referred to as "Dixeya" nasuta until its taxonomy and relations could be resolved (reportedly under study as of 2015).
Description
"Dixeya" nasuta is only known by its skull and jaws, which measured roughly in length (mid-sized for a gorgonopsian) and had a relatively short and compact snout. Compared with similarly short-snouted gorgonopsians (such as Arctognathus and Eriphostoma) the skull is not as wide at the rear, with only weakly flaring zygomatic arches and little constriction of the snout behind the canines. As such, its skull appears much more straight-sided when viewed from above, and is also generally wider than it is tall. Similarly, the profile of the skull along the top of the snout is also largely straight, although the tip is characteristically turned up in a sharp point above the nostrils, which were positioned far-forwards on the snout. The snout is also distinctive for its unusual 'pinched' appearance. The nasal bones along the top of the snout are broad, but are constricted along the middle. Furthermore, the septomaxilla (a small bone found in and around the nostrils of therapsids) bulges strongly outwards under and behind the nostrils but then rapidly hollows out just behind them on either side, giving the bridge of the nose the pinched appearance.
Behind the snout, the roof of the skull is slightly concave and the rims of the orbits are noticeably raised above it. The orbits themselves are proportionately large and rounded, and face laterally out to the sides. The temporal fenestra, a hole in the skull behind the eye socket for jaw muscle attachment, is also very large. Subsequently, the bony arches surrounding and separating these openings (e.g. the suborbital arch, postorbital bar) are proportionately slender and thin. The parietal foramen ("third eye" opening) on top of the skull is large and surrounded by a raised boss of bone, and is positioned at the very back of the skull right above the occiput. The occiput itself (the back face of the skull) is tall and roughly rectangular in shape, slightly concave and only gently sloping.
As in other gorgonopsians, "D." nasuta has large blade-like caniniform teeth. The incisor teeth (five in each premaxilla), however, are smaller than those of related gorgonopsians, and it only had four to five small postcanine teeth. The jawline of the maxilla in the upper jaw is notably convex, with a much more exaggerated curve of the toothrow than in other gorgonopsians except for Arctognathus. This exaggerated curvature is due to the post-canine teeth being housed in a raised bony flange of the maxilla behind the canines. The maxilla also has an unusual groove over the postcanine teeth, starting shallowly above the first postcanine and running down to the edge of the bone behind the 5th postcanine, deepening along its length. Like other gorgonopsians "D." nasuta also possessed palatal teeth, three on each palatine and two on each pterygoid bones, with only weakly developed bosses supporting them. The vomer on the roof of the mouth is very broad at the front, but narrows rapidly to a constricted splint halfway down its length. This more resembles the vomer of the derived rubidgeines than the narrower vomer of earlier gorgonopsians. The vomer sports three ridges, one down its middle and two running along each edge.
The dentary bone of the lower jaw is comparatively slender, with a sloping mandibular symphysis that nonetheless bears the characteristic 'chin' of gorgonopsians. The reflected lamina of the angular bone towards the back of the jaw is only moderately ridged, in comparison to other gorgonopsians.
Classification
The phylogenetic relationships of "Dixeya" nasuta (as "Njalila") were analysed by Gebauer in 2007 in her unpublished PhD thesis, in what was the first computerised phylogenetic analysis of gorgonopsians ever conducted. Gebauer found "D." nasuta as a member of the family Gorgonopsidae, which in her classification excluded the most basal genera of gorgonopsians in her tree that she regarded as plesiomorphic (i.e. representing the ancestral condition) for the group. Within Gorgonopsidae, "D." nasuta was a relatively derived member but outside of the clade including the giant and derived Rubidgeinae and Inostrancevia, occupying part of an evolutionary grade between them and more ancestral gorgonopsids. The results of Gebauer (2007) are depicted in the cladogram below. Two genera mentioned in this cladogram, Scylacognathus and Eoarctops, have been synonymized with Eriphostoma since 2015.
The analysis of Gebauer (2007) was the first major attempt to perform a phylogenetic analysis of gorgonopsians, however its results have not been borne out by subsequent independent analyses. Namely, Kammerer (2016) regarded Gebauer's analysis as "unsatisfactory", citing that many of the characters used by her analysis were based upon skull proportions that are variable within taxa, both individually and ontogenetically (i.e. traits that change through growth). As an example of a potential problem created by this, he highlighted the basal position of Aloposaurus (a wastebasket taxon of various immature gorgonopsians) compared to the stratigraphically older and morphologically basal Eoarctops (now a junior synonym of Eriphostoma) being found in a relatively more derived position.
"D." nasuta has yet to be included in any later phylogenetic analyses of gorgonopsians, and in 2015 Kammerer commented that both its generic status and phylogenetic relationships amongst other gorgonopsians needed further study pending a full re-description before a generic assignment could be made.
Paleoecology
All known fossil specimens of "Dixeya" nasuta have been identified in the Usili Formation, Ruhuhu Basin, southern Tanzania. This formation, dating from the Upper Permian, is known to provide a fairly considerable number of fossils of various tetrapods. During this period, this formation would have been an alluvial plain with numerous small meandering streams passing through well-vegetated floodplains. The underlying substrate would also have had a generally high water table (phreatic zone).
"D." nasuta was contemporary with many other gorgonopsians. These include Cyonosaurus, Gorgonops, Inostrancevia, Lycaenops, "Sauroctonus" parringtoni, Scylacops and the rubidgeines Aelurognathus, Dinogorgon, Rubidgea, Ruhuhucerberus and Sycosaurus The other theriodonts present are represented by the therocephalians Silphictidoides and Theriognathus as well as by the cynodont Procynosuchus.
The most numerous tetrapods in the formation are the dicynodonts, among which are Compsodon, Daptocephalus, Dicynodon, Dicynodontoides, Endothiodon, Euptychognathus, Geikia, Katumbia, Kawingasaurus, Oudenodon, Pristerodon, Rhachiocephalus and an indeterminate cryptodont. An undetermined biarmosuchian similar to Burnetia is also known. Therapsids are not the only tetrapods present in the Usili Formation. Indeed, sauropsids such as the archosauromorph Aenigmastropheus and the pareiasaurs Anthodon and Pareiasaurus are known. The only temnospondyl recorded is Peltobatrachus.
Notes
References
Gorgonopsia
Prehistoric therapsid genera
Articles with quotation marks in the title
Fossils of Tanzania
Lopingian synapsids of Africa
Lopingian genus first appearances
Lopingian genus extinctions
Nomina nuda | "Dixeya" nasuta | Biology | 2,747 |
29,202,868 | https://en.wikipedia.org/wiki/Dextran%201 | Dextran 1 is a hapten inhibitor that greatly reduces the risk for anaphylactic reactions when administering dextran.
Mechanism
Dextran 1 is composed of a small fraction (1 kilodalton) of the entire dextran complex. This is enough to bind anti-dextran antibodies but insufficient to result in the formation of immune complexes and resultant immune responses. Thereby, dextran 1 binds up antibodies against dextran without causing an immune response, leaving fewer antibodies available to bind to the entire dextran complex and thus reducing the risk of an immune response upon subsequent administration of dextran.
References
Immunology | Dextran 1 | Biology | 133 |
10,225,184 | https://en.wikipedia.org/wiki/Oxygen%20evolution | Oxygen evolution is the chemical process of generating elemental diatomic oxygen (O2) by a chemical reaction, usually from water, the most abundant oxide compound in the universe. Oxygen evolution on Earth is effected by biotic oxygenic photosynthesis, photodissociation, hydroelectrolysis, and thermal decomposition of various oxides and oxyacids. When relatively pure oxygen is required industrially, it is isolated by distilling liquefied air.
Natural oxygen evolution is essential to the biological processes of all complex life on Earth, as aerobic respiration has been the most important energy-yielding biochemical process of eukaryotes since they evolved through symbiogenesis during the Proterozoic eon, and such oxygen consumption can only continue if oxygen is cyclically replenished by photosynthesis. The various oxygenation events during Earth's history not only influenced changes in Earth's biosphere, but also significantly altered the atmospheric chemistry. The transition of Earth's atmosphere from an anoxic prebiotic reducing atmosphere high in methane and hydrogen sulfide to an oxidative atmosphere in which free nitrogen and oxygen make up 99% of the mole fractions led to major climate changes and caused numerous icehouse phenomena and global glaciations.
In industries, oxygen evolution reaction (OER) is a limiting factor in the process of generating molecular oxygen through chemical reactions such as water splitting and electrolysis, and improved OER electrocatalysis is the key to the advancement of a number of renewable energy technologies such as solar fuels, regenerative fuel cells and metal–air batteries.
Oxygen evolution in nature
Photosynthetic oxygen evolution is the fundamental process by which oxygen is generated in the earth's biosphere. The reaction is part of the light-dependent reactions of photosynthesis in cyanobacteria and the chloroplasts of green algae and plants. It utilizes the energy of light to split a water molecule into its protons and electrons for photosynthesis. Free oxygen, generated as a by-product of this reaction, is released into the atmosphere.
Water oxidation is catalyzed by a manganese-containing cofactor contained in photosystem II, known as the oxygen-evolving complex (OEC) or the water-splitting complex. Manganese is an important cofactor, and calcium and chloride are also required for the reaction to occur. The stoichiometry of this reaction is as follows:
2H2O ⟶ 4e− + 4H+ + O2
The protons are released into the thylakoid lumen, thus contributing to the generation of a proton gradient across the thylakoid membrane. This proton gradient is the driving force for adenosine triphosphate (ATP) synthesis via photophosphorylation and the coupling of the absorption of light energy and the oxidation of water for the creation of chemical energy during photosynthesis.
History of discovery
It was not until the end of the 18th century that Joseph Priestley accidentally discovered the ability of plants to "restore" air that had been "injured" by the burning of a candle. He followed up on the experiment by showing that air "restored" by vegetation was "not at all inconvenient to a mouse." He was later awarded a medal for his discoveries that "...no vegetable grows in vain... but cleanses and purifies our atmosphere." Priestley's experiments were further evaluated by Jan Ingenhousz, a Dutch physician, who then showed that the "restoration" of air only worked while in the presence of light and green plant parts.
Water electrolysis
Together with hydrogen (H2), oxygen is evolved by the electrolysis of water. The point of water electrolysis is to store energy in the form of hydrogen gas, a clean-burning fuel. The "oxygen evolution reaction (OER) is the major bottleneck [to water electrolysis] due to the sluggish kinetics of this four-electron transfer reaction." All practical catalysts are heterogeneous.
Electrons (e−) are transferred from the cathode to protons to form hydrogen gas. The half reaction, balanced with acid, is:
2 H+ + 2e− → H2
At the positively charged anode, an oxidation reaction occurs, generating oxygen gas and releasing electrons to the anode to complete the circuit:
2 H2O → O2 + 4 H+ + 4e−
Combining the two half reactions yields the overall decomposition of water into oxygen and hydrogen:
Overall reaction:
2 H2O → 2 H2 + O2
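As an illustration of the stoichiometry above (a hedged sketch, not drawn from the article's sources), Faraday's law relates the charge passed through an electrolysis cell to the amount of oxygen evolved at the anode; the function name and the assumption of 100% current efficiency are ours:

```python
# Illustrative only: estimating oxygen evolved during water electrolysis
# from Faraday's law, assuming 100% current efficiency.
F = 96485.0  # Faraday constant, coulombs per mole of electrons

def oxygen_from_charge(current_amps, time_seconds):
    """Return (moles, grams) of O2 evolved at the anode.

    The anode half reaction 2 H2O -> O2 + 4 H+ + 4 e- transfers four moles
    of electrons per mole of O2.
    """
    charge = current_amps * time_seconds        # total charge in coulombs
    moles_electrons = charge / F                # moles of electrons passed
    moles_o2 = moles_electrons / 4.0            # 4 e- per O2 molecule
    return moles_o2, moles_o2 * 32.00           # O2 molar mass ~ 32 g/mol

# Example: 2 A for one hour evolves roughly 0.019 mol (about 0.6 g) of O2.
print(oxygen_from_charge(2.0, 3600))
```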
Chemical oxygen generation
Although some metal oxides eventually release O2 when heated, these conversions generally require high temperatures. A few compounds release O2 at mild temperatures. Chemical oxygen generators consist of chemical compounds that release O2 when stimulated, usually by heat. They are used in submarines and commercial aircraft to provide emergency oxygen. Oxygen is generated by the high-temperature decomposition of sodium chlorate:
2 NaClO3 → 2 NaCl + 3 O2
Potassium permanganate also releases oxygen upon heating, but the yield is modest.
2 KMnO4 → MnO2 + K2MnO4 + O2
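For illustration only (the helper function and example figures are assumptions, not from the article), the balanced sodium chlorate equation above determines how much chlorate a chemical oxygen generator must decompose to deliver a given volume of oxygen:

```python
# Illustrative only: mass of NaClO3 needed for a target O2 volume, using
# 2 NaClO3 -> 2 NaCl + 3 O2, i.e. 1.5 mol O2 per mol NaClO3.
M_NACLO3 = 22.99 + 35.45 + 3 * 16.00   # ~106.4 g/mol
MOLAR_VOLUME = 24.0                     # L/mol, ideal gas near room temperature

def chlorate_needed(oxygen_litres):
    """Grams of NaClO3 required to release a given volume of O2 (idealised)."""
    moles_o2 = oxygen_litres / MOLAR_VOLUME
    moles_chlorate = moles_o2 / 1.5     # stoichiometric ratio from the equation
    return moles_chlorate * M_NACLO3

# Supplying about 100 L of O2 requires on the order of 300 g of NaClO3.
print(round(chlorate_needed(100.0), 1))
```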
See also
Geological history of oxygen
Great Oxygenation Event
Neoproterozoic oxygenation event
Silurian-Devonian Terrestrial Revolution
Oxygen cycle
References
External links
Plant Physiology Online, 4th edition: Topic 7.7 - Oxygen Evolution
Oxygen evolution - Lecture notes by Antony Crofts, UIUC
Evolution of the atmosphere – Lecture notes, Regents of the University of Michigan
How to make oxygen and hydrogen from water using electrolysis
Photosynthesis
Breathing gases
Oxygen
Biological evolution | Oxygen evolution | Chemistry,Biology | 1,174 |
12,604,168 | https://en.wikipedia.org/wiki/Continual%20power%20system | A continual power system is a system for reliably supplying uninterrupted power. Examples of a continual power system include uninterruptible power supplies and emergency power systems. The need for continual power systems has risen because more and more essential services depend on consistent power, such as lighting, computing, and communications.
Continual power systems are used in part because energy providers' roles and responsibilities for guaranteeing an uninterrupted supply are not rigorously defined.
The key to reliable power systems is avoiding power disturbances, such as deviations of the voltage or current from an ideal single-frequency sine wave of constant amplitude and frequency.
In a study conducted in 2011 with Flemish households, researchers found that a relatively small share were willing to accept lower reliability in return for a small bill discount.
Flywheel
An example of a continual power system is the flywheel, which is common at colocation sites. These systems consist of an electric motor, a flywheel, a generator, and a diesel engine. In normal operation the electric motor, supplied from the grid, turns the flywheel, which in turn drives the generator. If the grid supply fails, the flywheel keeps the generator turning while the diesel engine starts. A flywheel is also an effective way of governing a Flywheel Energy Storage System (FESS) for wind power smoothing, typically operating at around 89–93% of its mean state of charge.
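As a rough sketch (all numbers below are assumed for the example, not taken from the article), the energy such a flywheel stores is E = ½Iω², which sets how long it can carry a given load while the standby diesel engine starts:

```python
# Illustrative only: flywheel ride-through time during a grid outage.
import math

def flywheel_energy_joules(mass_kg, radius_m, rpm):
    """Kinetic energy of a solid-disc flywheel (moment of inertia I = 1/2 m r^2)."""
    inertia = 0.5 * mass_kg * radius_m ** 2
    omega = rpm * 2.0 * math.pi / 60.0       # convert rev/min to rad/s
    return 0.5 * inertia * omega ** 2

def ride_through_seconds(energy_joules, load_kw, usable_fraction=0.9):
    """Seconds of support, assuming only part of the stored energy is usable."""
    return energy_joules * usable_fraction / (load_kw * 1000.0)

e = flywheel_energy_joules(mass_kg=600, radius_m=0.5, rpm=3000)
print(round(e / 1e6, 2), "MJ stored,", round(ride_through_seconds(e, load_kw=200), 1), "s at 200 kW")
```

With these assumed figures the flywheel bridges roughly 15 to 20 seconds, broadly in line with the time a standby diesel engine typically needs to start and take the load.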
Turbines
A turbine is a set of blades that are forced to turn by an external energy source. When the blades start turning, the shaft to which they are connected starts to spin, and the generator then creates electricity. Examples of external forces that can power turbines include wind, water, steam, and gas. Turbines can be used in creating a continual power system because as long as the blades turn, power is created.
Microbial fuel cells
Microbial fuel cells create energy when bacteria break down organic material. This produces a charge that is transferred to the cell's anode. Human saliva, which contains abundant organic material, can be used to power a tiny microbial fuel cell, producing sufficient energy to run on-chip applications. This approach could be used in applications such as biomedical devices and cell phones.
A study evaluated microbial fuel cells for generating electricity while treating wastewater. Over a five-month period, a sucrose-based solution continuously generated electricity at a power density of 170 mW/m2. Power density grew with increasing chemical oxygen demand loading up to 2.0 g COD/day, with no further increase beyond that point. This shows that while such a system can provide electricity continuously, it has its limitations.
References
Electric power | Continual power system | Physics,Engineering | 550 |
48,813,282 | https://en.wikipedia.org/wiki/Color%20clock | The color clock, or color timer, is a part of the video circuitry of computer graphics hardware that works with analog color television systems. The clock runs at a rate matched to the color standard in use, typically NTSC or PAL, ensuring that the data read from computer memory to create the on-screen image stays in sync with the display. The speed of the color clock limits the product of the resolution and the number of colors; the slow color clocks of many early games consoles and home computers therefore resulted in limited color palettes at the highest resolutions.
References
Computer graphics
Television technology | Color clock | Technology | 124 |
23,434,758 | https://en.wikipedia.org/wiki/C9H11NO2 |
The molecular formula C9H11NO2 (molar mass: 165.18 g/mol, exact mass: 165.078979) may refer to:
Benzocaine
Ethenzamide
Methylenedioxyphenethylamine
Metolcarb
Norsalsolinol
3,4-Methylenedioxy-N-methylbenzylamine, closely related to isosafrole.
Phenylalanine
D-Phenylalanine
References | C9H11NO2 | Chemistry | 116 |
494,745 | https://en.wikipedia.org/wiki/Petrus%20Apianus | Petrus Apianus (April 16, 1495 – April 21, 1552), also known as Peter Apian, Peter Bennewitz, and Peter Bienewitz, was a German humanist, known for his works in mathematics, astronomy and cartography. His work on "cosmography", the field that dealt with the earth and its position in the universe, was presented in his most famous publications, Astronomicum Caesareum (1540) and Cosmographicus liber (1524). His books were extremely influential in his time, with the numerous editions in multiple languages being published until 1609. The lunar crater Apianus and asteroid 19139 Apian are named in his honour.
Life and work
Apianus was born as Peter Bienewitz (or Bennewitz) in Leisnig in Saxony; his father, Martin, was a shoemaker. The family was relatively well off, belonging to the middle-class citizenry of Leisnig. Apianus was educated at the Latin school in Rochlitz. From 1516–1519 he studied at the University of Leipzig; during this time, he Latinized his name to Apianus (lat. apis means "bee"; "Biene" is the German word for bee).
In 1519, Apianus moved to Vienna and continued his studies at the University of Vienna, which was considered one of the leading universities in geography and mathematics at the time and where Georg Tannstetter taught. When the plague broke out in Vienna in 1521, he completed his studies with a B.A. and moved to Regensburg and then to Landshut. At Landshut, he produced his Cosmographicus Liber (1524), a highly respected work on astronomy and navigation which was to see more than 40 reprints in four languages (Latin; French, 1544; Dutch, 1545; Spanish, 1548) and that remained popular until the end of the 16th century. Later editions were produced by Gemma Frisius.
In 1527, Peter Apianus was called to the University of Ingolstadt as a mathematician and printer. His print shop started small. Among the first books he printed were the writings of Johann Eck, Martin Luther's antagonist. The print shop remained active into the early 1540s and became well known for its high-quality editions of geographic and cartographic works. It is thought that he used stereotype printing techniques on woodblocks. The printer's logo included the motto Industria superat vires in Greek, Hebrew, and Latin around the figure of a boy.
Through his work, Apianus became a favourite of Emperor Charles V, who had praised Cosmographicus liber at the Imperial Diet of 1530 and granted him a printing monopoly in 1532 and 1534. In 1535, the emperor made Apianus an armiger, i.e. granted him the right to display a coat of arms. In 1540, Apianus printed the Astronomicum Caesareum, dedicated to Charles V. Charles promised him a truly royal sum (3,000 golden guilders), appointed him his court mathematician, and made him a Reichsritter (a free imperial knight) and in 1544 even an Imperial Count Palatine. All this furthered Apianus's reputation as an eminent scientist. Astronomicum Caesareum is noted for its visual appeal. Printed and bound decoratively, with about 100 known copies, it included several volvelles that allowed users to calculate dates, the positions of constellations and so on. Apianus noted that it took a month to produce some of the plates. Thirty-five octagonal paper-cut instruments were included, with woodcuts thought to have been made by Hans Brosamer, who may have trained under Lucas Cranach, Sr. in Wittenberg. It also incorporated star and constellation names from the work of the Arab astronomer Azophi (Abd al-Rahman al-Sufi). Apianus is also remembered for publishing the only known depiction of the Bedouin constellations in 1533. On this map Ursa Minor is an old woman and three maidens, Draco is four camels, and Cepheus is illustrated as a shepherd with sheep and a dog.
Despite many calls from other universities, including Leipzig, Padua, Tübingen, and Vienna, Apianus remained in Ingolstadt until his death, though he neglected his teaching duties. Apianus's work included mathematics – in 1527 he published a variation of Pascal's triangle, and in 1534 a table of sines – as well as astronomy. In 1531, he observed Halley's Comet and noted that a comet's tail always points away from the sun. Girolamo Fracastoro also detected this in 1531, but Apianus's publication was the first to also include graphics. He designed sundials, published manuals for astronomical instruments and crafted volvelles ("Apian wheels"), measuring instruments useful for calculating time and distance for astronomical and astrological applications.
Apianus married Katharina Mosner, the daughter of a councilman of Landshut, in 1526. They had fourteen children together – five girls and nine sons. One of their children was Philipp Apian (1531–1589), who preserved the legacy of his father, in addition to his own research.
Works
(also called Cosmographia)
Ein newe und wolgegründete underweisung aller Kauffmanns Rechnung in dreyen Büchern, mit schönen Regeln und fragstücken begriffen, Ingolstadt 1527. A handbook of commercial arithmetic; depicted in the painting The Ambassadors by Hans Holbein the Younger.
Cosmographiae introductio, cum quibusdam Geometriae ac Astronomiae principiis ad eam rem necessariis, Ingolstadt 1529.
Ein kurtzer bericht der Observation unnd urtels des jüngst erschinnen Cometen..., Ingolstadt 1532. On his comet observations.
Quadrans Apiani astronomicus, Ingolstadt 1532. On quadrants.
Horoscopion Apiani..., Ingolstadt 1533. On sundials.
Instrument Buch..., Ingolstadt 1533. A scientific book on astronomical instruments in German.
. On trigonometry, contains sine tables.
Footnotes
References
Further reading
Röttel, K. (Ed.): Peter Apian: Astronomie, Kosmographie und Mathematik am Beginn der Neuzeit, Polygon-Verlag 1995; . In German.
Peter and Philipp Apian, in German.
Ralf Kern. Wissenschaftliche Instrumente in ihrer Zeit. Volume 1: Vom Astrolab zum mathematischen Besteck. Cologne, 2010.
External links
Petrus Apianus.
Astronomicum Caesareum at the library of the ETH Zurich.
Astronomicum Caesareum at Rare Book Room.
Astronomicum Caesareum, Ingolstadt 1540 da www.atlascoelestis.com
Electronic facsimile-editions of the rare book collection at the Vienna Institute of Astronomy
Online Galleries, History of Science Collections, University of Oklahoma Libraries High resolution images of works by and/or portraits of Petrus Apianus in .jpg and .tiff format.
Horoscopion Apiani Generale…, Ingolstadt 1533 da www.atlascoelestis.com
Cosmographiae Introductio, 1537 from the Collections at the Library of Congress
Cosmographia, 1544 (1st edition was 1524)
1495 births
1552 deaths
People from Leisnig
16th-century German astronomers
German Renaissance humanists
16th-century German mathematicians
German scientific instrument makers
16th-century cartographers
University of Vienna alumni
Academic staff of the University of Ingolstadt
16th-century German writers
16th-century German male writers
Astronomical instrument makers | Petrus Apianus | Astronomy | 1,667 |
1,252,438 | https://en.wikipedia.org/wiki/Web%20modeling | Web modeling (aka model-driven Web development) is a branch of Web engineering that addresses the specific issues related to design and development of large-scale Web applications. In particular, it focuses on the design notations and visual languages that can be used for the realization of robust, well-structured, usable and maintainable Web applications.
Models
Designing a data-intensive website amounts to specifying its characteristics in terms of various orthogonal abstractions. The main models that are involved in complex Web application design are: data structure, content composition, navigation paths, and presentation model. Several languages and notations have been devised for Web application modeling. Among them:
RMM
OOHDM
ARANEUS
STRUDEL
TIRAMISU
HDM — W2000
the Interaction Flow Modeling Language (IFML), adopted by the Object Management Group (OMG) in March 2013
WebML
Hera
UML Web Application Extension
UML-based Web Engineering (UWE)
ACE
WebArchitect
OO-H
One of the main discussion venues for this discipline is the Model-Driven Web Engineering Workshop (MDWE) held yearly in conjunction with the International Conference on Web Engineering (ICWE) conference.
References
External links
Most Common Website Performance Issues
ADA Website Compliance & Web Accessibility
Web design | Web modeling | Engineering | 259 |
3,676,061 | https://en.wikipedia.org/wiki/Inorganic%20Crystal%20Structure%20Database | Inorganic Crystal Structure Database (ICSD) is a chemical database founded in 1978 by Günter Bergerhoff at the University of Bonn in Germany and I. D. Brown at McMaster University in Canada. It is now produced by FIZ Karlsruhe in Europe and the U.S. National Institute of Standards and Technology. It seeks to contain information on all inorganic crystal structures published since 1913, including pure elements, minerals, metals, and intermetallic compounds (with atomic coordinates). ICSD contains over 210,000 entries and is updated twice a year.
A Windows-based PC version has been developed in co-operation with the National Institute of Standards and Technology (NIST), and a PHP-MySQL web based version in co-operation with the Institut Laue–Langevin (ILL) Grenoble.
See also
Crystallographic database
References
External links
ICSD Fiz
ICSD NIST
Inorganic chemistry
Chemical databases
Crystallographic databases | Inorganic Crystal Structure Database | Chemistry,Materials_science | 194 |
604,238 | https://en.wikipedia.org/wiki/Data%20Protection%20Act%201998 | The Data Protection Act 1998 (c. 29) (DPA) was an act of Parliament of the United Kingdom designed to protect personal data stored on computers or in an organised paper filing system. It enacted provisions from the European Union (EU) Data Protection Directive 1995 on the protection, processing, and movement of data.
Under the 1998 DPA, individuals had legal rights to control information about themselves. Most of the Act did not apply to domestic use, such as keeping a personal address book. Anyone holding personal data for other purposes was legally obliged to comply with this Act, subject to some exemptions. The Act defined eight data protection principles to ensure that information was processed lawfully.
It was superseded by the Data Protection Act 2018 (DPA 2018) on 23 May 2018. The DPA 2018 supplements the EU General Data Protection Regulation (GDPR), which came into effect on 25 May 2018. The GDPR regulates the collection, storage, and use of personal data significantly more strictly.
Background
The 1998 act replaced the Data Protection Act 1984 (c. 35) and the Access to Personal Files Act 1987 (c. 37). Additionally, the 1998 act implemented the EU Data Protection Directive 1995.
The Privacy and Electronic Communications (EC Directive) Regulations 2003 altered the consent requirement for most electronic marketing to "positive consent" such as an opt-in box. Exemptions remain for the marketing of "similar products and services" to existing customers and enquirers, which can still be permitted on an opt-out basis.
The Jersey data protection law, the Data Protection (Jersey) Law 2005 was modelled on the United Kingdom's law.
Contents
Scope of protection
Section 1 of DPA 1998 defined "personal data" as any data that could have been used to identify a living individual. Anonymised or aggregated data was less regulated by the Act, provided the anonymisation or aggregation had not been done reversibly. Individuals could have been identified by various means including name and address, telephone number, or email address. The Act applied only to data which was held, or was intended to be held, on computers ("equipment operating automatically in response to instructions given for that purpose"), or held in a "relevant filing system".
In some cases, paper records could have been classified as a relevant filing system, such as an address book or a salesperson's diary used to support commercial activities.
The Freedom of Information Act 2000 modified the act for public bodies and authorities, and the Durant case modified the interpretation of the act by providing case law and precedent.
A person who had their data processed had the following rights:
under section 7, to view the data an organisation held on them for a reasonable fee: the maximum fee was £2 for requests to credit reference agencies, £50 for health and education record requests, and £10 per individual otherwise,
under section 14, to request that incorrect information be corrected. If the company ignored the request, a court could have ordered the data to be corrected or destroyed, and in some cases compensation could have been awarded.
under section 10, to require that their data was not used in any way that potentially could have caused damage or distress.
under section 11, to require that their data was not used for direct marketing.
Data protection principles
Schedule 1 listed eight "data protection principles":
Personal data shall be processed fairly and lawfully and, in particular, shall not be processed unless:
at least one of the conditions in Schedule 2 is met, and
in the case of sensitive personal data, at least one of the conditions in Schedule 3 is also met.
Personal data shall be obtained only for one or more specified and lawful purposes, and shall not be further processed in any manner incompatible with that purpose or those purposes.
Personal data shall be adequate, relevant and not excessive in relation to the purpose or purposes for which they are processed.
Personal data shall be accurate and, where necessary, kept up to date.
Personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes.
About the rights of individuals e.g. personal data shall be processed in accordance with the rights of data subjects (individuals).
Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data.
Personal data shall not be transferred to a country or territory outside the European Economic Area unless that country or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data.
Broadly speaking, these eight principles were similar to the six principles set out in the GDPR of 2016.
Conditions relevant to the first principle
Personal data should only be processed fairly and lawfully. In order for data to be classed as 'fairly processed', at least one of these six conditions had to be applicable to that data (Schedule 2).
The data subject (the person whose data is stored) has consented ("given their permission") to the processing;
Processing is necessary for the performance of, or commencing, a contract;
Processing is required under a legal obligation (other than one stated in the contract);
Processing is necessary to protect the vital interests of the data subject;
Processing is necessary to carry out any public functions;
Processing is necessary in order to pursue the legitimate interests of the "data controller" or "third parties" (unless it could unjustifiably prejudice the interests of the data subject).
Consent
Except under the exceptions mentioned below, the individual had to consent to the collection of their personal information and its use in the purpose(s) in question. The European Data Protection Directive defined consent as “…any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed", meaning the individual could have signified agreement other than in writing. However, non-communication should not have been interpreted as consent.
Additionally, consent should have been appropriate to the age and capacity of the individual and other circumstances of the case. If an organisation "intends to continue to hold or use personal data after the relationship with the individual ends, then the consent should cover this." When consent was given, it was not assumed to last forever, though in most cases, consent lasted for as long as the personal data needed to be processed, and individuals may have been able to withdraw their consent, depending on the nature of the consent and the circumstances in which the personal information was collected and used.
The Data Protection Act also specified that sensitive personal data must have been processed according to a stricter set of conditions, in particular, any consent must have been explicit.
Exceptions
The Act was structured such that all processing of personal data was covered by the act while providing a number of exceptions in Part IV. Notable exceptions were:
Section 28 – National security. Any processing for the purpose of safeguarding national security is exempt from all the data protection principles, as well as Part II (subject access rights), Part III (notification), Part V (enforcement), and Section 55 (Unlawful obtaining of personal data).
Section 29 – Crime and taxation. Data processed for the prevention or detection of crime, the apprehension or prosecution of offenders, or the assessment or collection of taxes are exempt from the first data protection principle.
Section 36 – Domestic purposes. Processing by an individual only for the purposes of that individual's personal, family or household affairs is exempt from all the data protection principles, as well as Part II (subject access rights) and Part III (notification).
Police and court powers
The Act granted or acknowledged various police and court powers.
Section 29 – Consent of the data subject was not required when processing personal data to prevent or detect crime, apprehend or prosecute offenders, the assessment and collection of taxes and duties and discharge a statutory function.
Section 35 – Disclosures required by law or made in connection with legal proceedings. This included obeying court orders and other laws, and were part of legal proceedings.
Offences
The Act detailed a number of civil and criminal offences for which data controllers may have been liable if a data controller failed to gain appropriate consent from a data subject. However, consent was not specifically defined in the Act and so was a common law matter.
Section 21(1) made it an offence to process personal information without registration.
Section 21(2) made it an offence to fail to comply with the notification regulations made by the Secretary of State (proposed by the Information Commissioner under section 25 of the Act).
Section 55 made the acquisition of personal data unlawful, and made the acquisition of unauthorised access to personal data an offence for people (other parties), such as hackers and impersonators, outside the organisation.
Section 56 made it a criminal offence to require an individual to make a Subject Access Request relating to cautions or convictions for the purposes of recruitment, continued employment, or the provision of services. This section was enforced on 10 March 2015.
Complexity
The UK Data Protection Act was a large Act that had a reputation for complexity. While the basic principles were honored for protecting privacy, interpreting the act was not always simple. Many companies, organisations, and individuals seemed very unsure of the aims, content, and principles of the Act. Some refused to provide even very basic, publicly available material, quoting the Act as a restriction. The Act also impacted the way in which organisations conducted business in terms of who should have been contacted for marketing purposes, not only by telephone and direct mail, but also electronically. This has led to the development of permission-based marketing strategies.
Definition of personal data
The definition of personal data was data relating to a living individual who can be identified
from that data; or
from that data plus other information that was in the possession, or likely to come into the possession, of the data controller.
Sensitive personal data concerned the subject's race, ethnicity, politics, religion, trade union status, health, sexual history, or criminal record.
Subject access requests
The Information Commissioner's Office website stated regarding subject access requests: "You have the right to find out if an organisation is using or storing your personal data. This is called the right of access. You exercise this right by asking for a copy of the data, which is commonly known as making a 'subject access request.'"
Before the General Data Protection Regulation (GDPR) came into force on 25 May 2018, organisations could have charged a specified fee for responding to a SAR of up to £10 for most requests. Following GDPR: "A copy of your personal data should be provided free. An organisation may charge for additional copies. It can only charge a fee if it thinks the request is 'manifestly unfounded or excessive'. If so, it may ask for a reasonable fee for administrative costs associated with the request."
Information Commissioner
Compliance with the Act was regulated and enforced by an independent authority, the Information Commissioner's Office, which maintained guidance relating to the Act.
EU’s Article 29 Working Party
In January 2017, the Information Commissioner's Office invited public comments on the EU's Article 29 Working Party's proposed changes to data protection law and the anticipated introduction of extensions to the interpretation of the Act, the Guide to the General Data Protection Regulation.
See also
Data Protection Act, 2012 (Ghana)
Computer Misuse Act 1990
Data privacy
Data Protection Directive (EU)
Freedom of Information Act 2000
Gaskin v United Kingdom
List of UK government data losses
Privacy and Electronic Communications (EC Directive) Regulations 2003
General Data Protection Regulation – a 2016 EU regulation on data protection
Smith v Lloyds TSB Bank plc
References
External links
Information Commissioner's Office
The Department for Constitutional Affairs
Council of Europe – ETS no. 108 – Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (1981) – basis for Data Protection Act 1984
Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data – basis for Data Protection Act 1998
UK legislation
Data laws of the United Kingdom
Data protection
Information privacy
United Kingdom Acts of Parliament 1998 | Data Protection Act 1998 | Engineering | 2,463 |
41,442,019 | https://en.wikipedia.org/wiki/Bayesian%20operational%20modal%20analysis | Bayesian operational modal analysis (BAYOMA) adopts a Bayesian system identification approach for operational modal analysis (OMA). Operational modal analysis aims at identifying the modal properties (natural frequencies, damping ratios, mode shapes, etc.) of a constructed structure using only its (output) vibration response (e.g., velocity, acceleration) measured under operating conditions. The (input) excitations to the structure are not measured but are assumed to be 'ambient' ('broadband random'). In a Bayesian context, the set of modal parameters are viewed as uncertain parameters or random variables whose probability distribution is updated from the prior distribution (before data) to the posterior distribution (after data). The peak(s) of the posterior distribution represents the most probable value(s) (MPV) suggested by the data, while the spread of the distribution around the MPV reflects the remaining uncertainty of the parameters.
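In symbols, the updating described above can be summarised as follows. This is a generic sketch of Bayesian identification, where θ collects the modal parameters and D denotes the measured data; the specific likelihood used in BAYOMA depends on the chosen data model (e.g., the FFT of the ambient response) and is not reproduced here:

```latex
\begin{aligned}
p(\theta \mid D) &= \frac{p(D \mid \theta)\, p(\theta)}{p(D)}
  \;\propto\; p(D \mid \theta)\, p(\theta), \\
\hat{\theta} &= \arg\max_{\theta}\; p(\theta \mid D)
  \quad \text{(most probable value, MPV)}, \\
\hat{C} &\approx \left[\, -\nabla_{\theta}^{2} \ln p(\theta \mid D)\,\big|_{\theta = \hat{\theta}} \right]^{-1}
  \quad \text{(posterior covariance under a Gaussian approximation)}.
\end{aligned}
```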
Pros and cons
In the absence of (input) loading information, the identified modal properties from OMA often have significantly larger uncertainty (or variability) than their counterparts identified using free vibration or forced vibration (known input) tests. Quantifying and calculating the identification uncertainty of the modal parameters become relevant.
The advantage of a Bayesian approach for OMA is that it provides a fundamental means via the Bayes' Theorem to process the information in the data for making statistical inference on the modal properties in a manner consistent with modeling assumptions and probability logic.
The potential disadvantage of a Bayesian approach is that the theoretical formulation can be more involved and less intuitive than its non-Bayesian counterparts. Algorithms are needed for efficient computation of the statistics (e.g., mean and variance) of the modal parameters from the posterior distribution. Unlike non-Bayesian methods, the algorithms are often implicit and iterative; for example, optimization algorithms may be needed to determine the most probable value, and these may not converge for poor-quality data.
Methods
Bayesian formulations have been developed for OMA in the time domain and in the frequency domain using the spectral density matrix and fast Fourier transform (FFT) of ambient vibration data. Based on the formulation for FFT data, fast algorithms have been developed for computing the posterior statistics of modal parameters. Recent developments based on EM algorithm show promise for simpler algorithms and reduced coding effort. The fundamental precision limit of OMA has been investigated and presented as a set of uncertainty laws which can be used for planning ambient vibration tests.
Connection with maximum likelihood method
The Bayesian method and the maximum likelihood method (non-Bayesian) are based on different philosophical perspectives, but they are mathematically connected (see, e.g., Section 9.6 of the references). For example,
Assuming a uniform prior, the most probable value (MPV) of parameters in a Bayesian method is equal to the location where the likelihood function is maximized, which is the estimate in Maximum Likelihood Method
Under a Gaussian approximation of the posterior distribution of the parameters, their covariance matrix is equal to the inverse of the Hessian of the negative log-likelihood function at the MPV. Generally, this covariance depends on the data. However, if one assumes (hypothetically; non-Bayesian) that the data is indeed distributed as the likelihood function, then for large data size it can be shown that the covariance matrix is asymptotically equal to the inverse of the Fisher information matrix (FIM) of the parameters (which has a non-Bayesian origin). This coincides with the Cramér–Rao bound in classical statistics, which gives the lower bound (in the sense of matrix inequality) of the ensemble variance of any unbiased estimator. Such a lower bound can be reached by the maximum-likelihood estimator for large data size.
In the above context, for large data size the asymptotic covariance matrix of modal parameters depends on the 'true' parameter values (a non-Bayesian concept), often in an implicit manner. It turns out that by applying further assumptions such as small damping and high signal-to-noise ratio, the covariance matrix has mathematically manageable asymptotic form, which provides insights on the achievable precision limit of OMA and can be used to guide ambient vibration test planning. This is collectively referred as 'uncertainty law'.
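As a purely illustrative numerical sketch (the quadratic objective below is a toy stand-in, not the actual BAYOMA likelihood), the MPV and the posterior covariance discussed above can be obtained by minimising the negative log-posterior and inverting a finite-difference Hessian at the minimum:

```python
# Toy example only: MPV by optimisation, covariance from the inverted Hessian.
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta):
    # Stand-in for -ln p(theta | data); in BAYOMA this would be constructed
    # from the FFT or spectral density of the measured ambient response.
    f, zeta = theta
    return 0.5 * ((f - 1.25) / 0.01) ** 2 + 0.5 * ((zeta - 0.01) / 0.002) ** 2

def hessian(fun, x, h=1e-5):
    """Central finite-difference Hessian of a scalar function of a vector."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (fun(x + e_i + e_j) - fun(x + e_i - e_j)
                       - fun(x - e_i + e_j) + fun(x - e_i - e_j)) / (4.0 * h * h)
    return H

mpv = minimize(neg_log_posterior, x0=np.array([1.2, 0.02])).x
cov = np.linalg.inv(hessian(neg_log_posterior, mpv))
print("MPV:", mpv, "posterior std:", np.sqrt(np.diag(cov)))
```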
See also
Operational modal analysis
Bayesian inference
Ambient vibrations
Microtremor
Modal analysis
Modal testing
Notes
See monographs on non-Bayesian OMA and Bayesian OMA
See OMA datasets
See Jaynes and Cox for Bayesian inference in general.
See Beck for Bayesian inference in structural dynamics (relevant for OMA)
The uncertainty of the modal parameters in OMA can also be quantified and calculated in a non-Bayesian manner. See Pintelon et al.
References
Wave mechanics | Bayesian operational modal analysis | Physics | 1,005 |
434,161 | https://en.wikipedia.org/wiki/Nephelinite | Nephelinite is a fine-grained or aphanitic igneous rock made up almost entirely of nepheline and clinopyroxene (variety augite). If olivine is present, the rock may be classified as an olivine nephelinite. Nephelinite is dark in color and may resemble basalt in hand specimen. However, basalt consists mostly of clinopyroxene (augite) and calcic plagioclase.
Basalt, alkali basalt, basanite, tephritic nephelinite, and nephelinite differ partly in the relative proportions of plagioclase and nepheline. Alkali basalt may contain minor nepheline and does contain nepheline in its CIPW normative mineralogy. A critical ratio in the classification of these rocks is the ratio nepheline/(nepheline plus plagioclase). Basanite has a value of this ratio between 0.1 and 0.6 and also contains more than 10% olivine. Tephritic nephelinite has a value between 0.6 and 0.9. Nephelinite has a value greater than 0.9. Le Maitre (2002) defines and discusses these and other criteria in the classification of igneous rocks.
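As a hedged illustration of the thresholds quoted above (a simplification of the full classification scheme in Le Maitre (2002); the function and the example values are hypothetical), the ratio can be applied as follows:

```python
# Illustrative only: placing a rock along the basalt-nephelinite spectrum using
# the nepheline/(nepheline + plagioclase) ratio and olivine content quoted above.
def classify(nepheline_pct, plagioclase_pct, olivine_pct):
    ratio = nepheline_pct / (nepheline_pct + plagioclase_pct)
    if ratio > 0.9:
        return "nephelinite"
    if ratio > 0.6:
        return "tephritic nephelinite"
    if ratio > 0.1 and olivine_pct > 10:
        return "basanite"
    return "basalt or alkali basalt (other criteria needed)"

print(classify(nepheline_pct=40, plagioclase_pct=2, olivine_pct=15))   # nephelinite
print(classify(nepheline_pct=20, plagioclase_pct=40, olivine_pct=15))  # basanite
```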
Nephelinite is an example of a silica-undersaturated igneous rock. The degree of silica saturation can be evaluated with normative mineralogy calculated from chemical analyses, or with actual mineralogy for completely crystallized igneous rocks with equilibrated assemblages. Silica-oversaturated rocks contain quartz (or another silica polymorph). Silica-undersaturated mafic igneous rocks contain magnesian olivine but not magnesian orthopyroxene, and/or a feldspathoid. Silica-saturated igneous rocks fall in between these two classes.
Silica-undersaturated, mafic igneous rocks are much less abundant than silica-saturated and oversaturated basalts. Genesis of the less common mafic rocks such as nephelinite is usually ascribed to more than one of the following three causes:
relatively high pressure of melting;
relatively low degree of fractional melting in a mantle source;
relatively high dissolved carbon dioxide in the melt.
Nephelinites and similar rocks typically contain relatively high concentrations of elements such as the light rare earths, as consistent with a low degree of melting of mantle peridotite at depths sufficient to stabilize garnet. Nephelinites are also associated with carbonatite in some occurrences, consistent with source rocks relatively rich in carbon dioxide.
Nephelinite is found on ocean islands such as Oʻahu, although the rock type is very rare in the Hawaiian Islands. It is found in a variety of continental settings. An example is the Hamada nephelinite lava flow in southwest Japan which occurred in the late Miocene age. Nephelinite is also associated with the highly alkalic volcanism of the Ol Doinyo Lengai volcanic field in Tanzania. Nyiragongo, another African volcano known for its semipermanent lava lake activity, erupts lava made of melilite nephelinite. The unusual chemical makeup of this igneous rock may be a factor in the unusual fluidity of its lavas.
Olivine nephelinite flows also occur in the Wells Gray-Clearwater volcanic field in east-central British Columbia and at Volcano Mountain in central Yukon Territory. Melilite olivine nephelinite intrusives of Cretaceous age are found in the area around Uvalde, Texas.
References
Roger W. Le Maitre (Editor), Igneous Rocks: A Classification and Glossary of Terms. (Recommendations of the International Union of Geological Sciences Subcommission of the Systematics of Igneous Rocks). Cambridge University Press (2002).
Hamada nephelinite, SW Japan
Oldoinyo Lengai volcano, Eastern Rift Valley, North Tanzania
Daniel P. Miggins, Charles D. Blome, and David V. Smith. Preliminary Argon40/Argon39 geochronology of igneous intrusions from Uvalde County, Texas: Defining a more precise eruption history for the southern Balcones Volcanic Province. U.S. Geological Survey Open-File Report 2004-1031. In usgs.gov
Bibliography
External links
Nefelinita in - Atlas de rocas ígneas
Igneous petrology
Ultramafic rocks
Volcanic rocks | Nephelinite | Chemistry | 958 |
34,500,414 | https://en.wikipedia.org/wiki/Udacity | Udacity, Inc. is an American for-profit educational organization founded by Sebastian Thrun, David Stavens, and Mike Sokolsky offering massive open online courses.
According to Thrun, the origin of the name Udacity comes from the company's desire to be "audacious for you, the student". While it originally focused on offering university-style courses, it now focuses more on vocational courses for professionals.
Accenture agreed to acquire the company in March 2024.
History
Udacity is the outgrowth of free computer science classes offered in 2011 through Stanford University. Thrun has stated he hopes half a million students will enroll, after an enrollment of 160,000 students in the predecessor course at Stanford, Introduction to Artificial Intelligence, and 90,000 students had enrolled in the initial two classes . Udacity was announced at the 2012 Digital Life Design conference. Udacity is funded by venture capital firm, Charles River Ventures, and $200,000 of Thrun's personal money. In October 2012, the venture capital firm Andreessen Horowitz led the investment of another $15 million in Udacity. In November 2013, Thrun announced in a Fast Company article that Udacity had a "lousy product" and that the service was pivoting to focus more on vocational courses for professionals and "nanodegrees."
In 2014, the Georgia Institute of Technology launched the first "massive online open degree" in computer science by partnering with Udacity and AT&T; a complete master's degree through that program costs students $7,000.
In October 2017, Udacity along with Unity, launched ‘Learn ARKit’ program which could help developers improve their AR application building skills. In the same month, Google partnered with Udacity to launch a new scholarship initiative for aspiring Web and Android application developers.
While not yet profitable as of February 2018, Udacity was valued at over $1 billion having raised $163 million from noted investors included Andreessen Horowitz, Drive Capital, and Alphabet's venture capital arm, GV.
In March 2024, Accenture announced its acquisition of Udacity, which would help support its AI-powered LearnVantage suite, to equip clients with the resources to reskill and upskill their workforce.
Courses
Free courses
The first two courses on Udacity started on 20 February 2012, entitled "CS 101: Building a Search Engine", taught by David Evans from the University of Virginia, and "CS 373: Programming a Robotic Car" taught by Thrun. Both courses use Python.
Four more courses began on 16 April 2012, encompassing a range of ability and subject matter, with teachers including Steve Huffman and Peter Norvig. Five new courses were announced on 31 May 2012, and marked the first time Udacity offered courses outside the domain of computer science. Four of these courses launched at the start of the third "hexamester", on 25 June 2012. One course, Logic & Discrete Mathematics: Foundations of Computing, was delayed for several weeks before an email announcement was sent out on 14 August stating that the course would not be launched, although no further explanation was provided.
On 23 August 2012, a new course in entrepreneurship, EP245 taught by retired serial entrepreneur Steve Blank, was announced. Four new specialized CS courses were announced as part of collaboration with Google, Nvidia, Microsoft, Autodesk, Cadence Design Systems, and Wolfram Research on 18 October 2012, to be launched in early 2013. On 28 November 2012, Thrun's original AI-class from 2011 was relaunched as a course at Udacity, CS271.
University credit courses
Udacity announced a partnership with San Jose State University (SJSU) on 15 January 2013 to pilot three new courses—two algebra courses and an introductory statistics course (ST095)--available for college credit at SJSU for the Spring 2013 semester and offered entirely online. 300 SJSU students had the opportunity to enroll for 3 units of college credit at a fixed cost of $150. Additionally, like other MOOCs, anyone could enroll anytime for free.
This first pilot resulted in pass rates below the traditional in-person SJSU class for all three courses. One hypothesis was that many of the students who had enrolled online had already taken and failed the traditional course, and therefore were likely to fail again. The pilot was repeated in the summer semester with an increased enrollment cap of 1000. In addition, the pilot was expanded to include two new courses, Intro to Programming (CS046) and General Psychology (PS001). This time, pass rates for the statistics, college algebra, and programming courses exceeded those of the traditional face-to-face course.
Despite this, the partnership was suspended on 18 July 2013.
Nanodegree
In June 2014, Udacity and AT&T announced the "Nanodegree" program, designed to teach programming skills needed to qualify for an entry-level IT position at AT&T. The coursework is said to take less than a year to complete, and cost about US$200/month. AT&T said it will offer paid internships to some graduates of the program. “We can’t turn you into a Nobel laureate,” Mr. Thrun said to a learner. “But what we can do is something like upskilling — you’re a smart person, but the skills you have are inadequate for the current job market, or don’t let you get the job you aspire to have. We can help you get those skills.”
A cybersecurity nanodegree was announced at the RSA Conference in April 2018. As of the beginning of 2022, Udacity offered 78 nanodegrees.
Course format
Each course consists of several units comprising video lectures with closed captioning, in conjunction with integrated quizzes to help students understand concepts and reinforce ideas, as well as follow-up homework, which promotes a "learn by doing" model. Programming classes use the Python language; programming assignments are graded by automated grading programs on the Udacity servers.
Enrollment
Over the first several months of Udacity's existence, enrollment for each class was cut off on the due date of the first homework assignment, and the courses were re-offered each hexamester. Since August 2012, all courses have been "open enrollment"; students can enroll in one or more courses at any time after a course is launched. All course lectures and problem sets are available upon enrollment and can then be completed at the student's preferred pace.
Udacity had students in 203 countries in the summer of 2012, with the greatest number of students in the United States (42 percent), India (7 percent), Britain (5 percent), and Germany (4 percent). Udacity students for CS101 range from 13-year-olds to 80-year-olds. Advanced 13-year-olds are able to complete multiple, higher-level computer science courses on Udacity.
Certification
Udacity used to issue certificates of completion of individual courses, but since May 2014 has stopped offering free non-identity-verified certificates. In addition, beginning 24 August 2012, through partnership with electronic testing company Pearson VUE, students of CS101 can elect to take an additional proctored 75-minute final exam for a fee of $89 in an effort to allow Udacity classes to "count towards a credential that is recognized by employers".
Further plans announced for certification options would include a "secured online examination" as a less expensive alternative to the in-person proctored exams.
Colorado State University's Global Campus began offering transfer credit for the introductory computer science course (CS101) for Udacity students that take the final examination through a secure testing facility.
In 2015, Udacity started the Nanodegree program, a paid credential program. Udacity also offers Nanodegree Plus, which is somewhat more expensive but guarantees a job: if Udacity fails to provide one, the course fee is returned. However, it plans to cancel the program.
Awards
In November 2012, founder Sebastian Thrun won the Smithsonian American Ingenuity in Education Award for his work with Udacity.
Spin-off company
In April 2017, Udacity announced a spin-off venture called Voyage Auto, a self-driving car taxi company to compete with the likes of the Uber ride-hailing service. The company has been testing its project, based on production consumer vehicles, on low-speed private roads in a retirement community in San Jose, California. In 2018, Voyage announced a ride-hailing partnership with The Villages, Florida, another retirement community. In March 2021, Voyage was acquired by Cruise.
See also
FutureLearn
Coursera
Eliademy
edX
Khan Academy
LinkedIn Learning
Saylor Academy
TechChange
NIIT
Udemy
References
External links
2011 establishments in California
Computer science education
Educational technology companies of the United States
Education companies established in 2011
Internet properties established in 2011
American educational websites
Open educational resources
2024 mergers and acquisitions | Udacity | Technology | 1,879 |
35,871,521 | https://en.wikipedia.org/wiki/Muselet | A muselet () is a wire cage that fits over the cork of a bottle of champagne, sparkling wine or beer to prevent the cork from emerging under the pressure of the carbonated contents. It derives its name from the French museler, to muzzle. The muselet often has a metal cap incorporated in the design which may show the drink maker's emblem. They are normally covered by a metal foil envelope. Muselets are also known as wirehoods or Champagne wires.
History
When champagne was first produced the pressure of the sparkling wine was maintained by wooden plugs sealed with oil-cloth and wax. This method proved inconsistent either from leaking or blowing out of the stopper and a method of restraining the corks using cord was developed. In 1844, Adolphe Jacquesson invented the more secure method involving steel wire, however the early muselets were not easy to install and proved somewhat inconvenient to open. Further developments led to the modern muselet which is made of steel wire twisted to add strength and with a small loop of wire twisted into the lower ring which can be untwisted to release the pressure of the muselet and give access to the cork.
In 1952 Cortellazzi of Marmirolo became the first firm to produce muselets (or wirehoods) in Italy, building on Jacquesson's invention, which had resolved once and for all the central problem of bottling sparkling wines: the loss of valuable effervescence through the cork. The Cortellazzi brothers (Otello and Evangelista), owners of an artisan shop that had started out in ironworking, later specialised in the manufacture of wirehoods for Spumante sparkling wine bottles thanks to their invention of the first machine to produce these components from a single metal wire.
Modern muselets
Traditionally, muselets require six half-turns to open.
Muselets are now machine-made in millions. A modern development has seen the production of personalized caps within the muselet, which display the emblems or name of the manufacturer. These may vary in colour and design from year to year and between different manufacturers. This has stimulated a market for the collection of these caps.
References
External links
Wine packaging and storage
Bottles
Packaging
Seals (mechanical) | Muselet | Physics | 458 |
20,850,247 | https://en.wikipedia.org/wiki/Xie%20Xuejin | Xie Xuejin (; 21 May 1923 – 24 February 2017) was a Chinese geochemist who won the AAG Gold Medal in 2007. Xie was considered as the Father of Geochemical Mapping in China.
Biography
Xie was born 21 May 1923 in Beijing, with his ancestral home in Shanghai. He was the son of the geologist Xie Jiarong. From 1941 to 1945, Xie studied physics and chemistry at the College of Sciences of Zhejiang University. Xie also studied chemistry at Chongqing University.
After graduation, Xie mainly worked for the Institute of Geophysical and Geochemical Exploration of the Chinese Academy of Sciences (CAS). He was elected as an academician of CAS in 1980. Xie also taught as a professor at Changchun Geological College (later merged into Jilin University), Jilin Province, China.
Research activities
During the 1970s and 1980s, Xie proposed and led the national geochemical mapping project and the China Regional Geochemistry-National Reconnaissance (RGNR) Project. He also provided technical guidance for these programs and demonstrated their value early on by mapping gold deposits across China's territory (see also Gold mining in China).
The RGNR project lasted for at least 24 years and surveyed more than six million km2 of China's mainland surface. The information from this mega-program also contributed to about 80% of the new mineral discoveries made in China in the past two decades.
In the 1990s, Xie also contributed to the international standardization for methodology of world geochemical mapping.
Xie further developed new approaches for locating buried giant ore deposits and proposed several deep-penetrating geochemical techniques and methods.
Academic assignments
Xie has served many academic positions including:
Leader, the Geochemical Exploration Research Division of the Institute of Geophysical and Geochemical Exploration, CAS.
Deputy director, the Institute of Geophysical and Geochemical Exploration, CAS.
Honorary director, the Institute of Geophysical and Geochemical Exploration, CAS.
Member of the executive committee of the IGCP Board, UNESCO.
Associate editor and the member of the editorial board of the Journal of Geochemical Exploration
Member of the editorial board of the Geochemistry: Exploration, Environment, Analysis
Member of the central executive committee and the chairman of the Analytical Technology Committee of the Global Geochemical Mapping Working Group, IUGS.
Honors and awards
Xie received many honors and awards both from China and international organizations, especially the prestigious AAG Gold Medal from the Association of Applied Geochemists in June, 2007 at the 23rd International Geochemical Exploration Symposium banquet in Oviedo, Spain, for "his outstanding scientific achievements in exploration geochemistry".
Monograph
Geochemistry: Proceedings of the 30th International Geological Congress, by Xie Xuejing (Editor); Page: 288; Publisher: Brill Academic Publishers (1 October 1997); Language: English; , .
References
External links
The Ho Leung Ho Lee Foundation: Biography of XIE Xuejin
Association of Applied Geochemists: Professor Xie Xuejing, the AAG Gold Medal awardee (Including photos)
News from China - International Seminar on Regional Exploration Geochemistry
AAG Gold Medal 2007 (Including photo)
1923 births
2017 deaths
Academic staff of Jilin University
Chemists from Beijing
Chinese geochemists
Chongqing University alumni
Educators from Beijing
Members of the Chinese Academy of Sciences
Zhejiang University alumni
20th-century Chinese chemists | Xie Xuejin | Chemistry | 690 |
7,206,776 | https://en.wikipedia.org/wiki/Origanum%20%C3%97%20hybridum | Origanum × hybridum, synonym Origanum × pulchellum, is an ornamental plant of hybrid origin. Its two parents are O. dictamnus and O. sipyleum. It is known as the showy marjoram or the showy oregano.
References
External links
DFT Digital Library - Vascular Plant Images: Origanum pulchellum
hybridum
Hybrid plants | Origanum × hybridum | Biology | 84 |
12,677,802 | https://en.wikipedia.org/wiki/7-cubic%20honeycomb | The 7-cubic honeycomb or hepteractic honeycomb is the only regular space-filling tessellation (or honeycomb) in Euclidean 7-space.
It is analogous to the square tiling of the plane and to the cubic honeycomb of 3-space.
There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,3^5,4}. Another form has two alternating 7-cube facets (like a checkerboard) with Schläfli symbol {4,3^4,3^{1,1}}. The lowest-symmetry Wythoff construction has 128 types of facets around each vertex and a prismatic product Schläfli symbol {∞}^(7).
Related honeycombs
The [4,3^5,4] Coxeter group generates 255 permutations of uniform tessellations, 135 with unique symmetry and 134 with unique geometry. The expanded 7-cubic honeycomb is geometrically identical to the 7-cubic honeycomb.
The 7-cubic honeycomb can be alternated into the 7-demicubic honeycomb, replacing the 7-cubes with 7-demicubes, and the alternated gaps are filled by 7-orthoplex facets.
Quadritruncated 7-cubic honeycomb
A quadritruncated 7-cubic honeycomb contains all tritruncated 7-orthoplex facets and is the Voronoi tessellation of the D7* lattice. Facets can be identically colored from a doubled ×2 [[4,3^5,4]] symmetry, alternately colored from [4,3^5,4] symmetry, three colors from [4,3^4,3^{1,1}] symmetry, and four colors from [3^{1,1},3^3,3^{1,1}] symmetry.
See also
List of regular polytopes
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
Honeycombs (geometry)
8-polytopes
Regular tessellations | 7-cubic honeycomb | Physics,Chemistry,Materials_science | 532 |
27,101,042 | https://en.wikipedia.org/wiki/Asimadoline | Asimadoline (EMD-61753) is an experimental drug which acts as a peripherally selective κ-opioid receptor (KOR) agonist. Because of its low penetration across the blood–brain barrier, asimadoline lacks the psychotomimetic effects of centrally acting KOR agonists, and consequently was thought to have potential for medical use. It has been studied as a possible treatment for irritable bowel syndrome, with reasonable efficacy seen in clinical trials, but it has never been approved or marketed.
See also
Eluxadoline
Fedotozine
Nalfurafine
Trimebutine
References
Synthetic opioids
Kappa-opioid receptor agonists
Pyrrolidines
Acetamides
Peripherally selective drugs
Abandoned drugs | Asimadoline | Chemistry | 159 |
64,527,214 | https://en.wikipedia.org/wiki/SDC%20335.579-0.292 | SDC 335.579-0.292 is a dark nebula in the constellation of Norma. It is about 7.8 light-years (2.4 parsecs) in size. Its distance is poorly known, but it is thought to be about 10,000 light-years (3.25 kiloparsecs) away.
SDC 335.579-0.292 is a site where stars are forming. It is one of the most massive such star-forming regions known, with a total mass of over 5,500 solar masses. Inside, there are two massive star-forming cores, one of which has an estimated mass of 545 solar masses. It is thought to be a potential precursor to massive OB associations and massive star clusters, like the famous Trapezium Cluster. It is thought to have a lifetime of barely a million years.
References
Dark nebulae
Star-forming regions
Norma (constellation) | SDC 335.579-0.292 | Astronomy | 191 |
57,326,415 | https://en.wikipedia.org/wiki/List%20of%20unsolved%20problems%20in%20astronomy | This article is a list of notable unsolved problems in astronomy. Problems may be theoretical or experimental. Theoretical problems result from inability of current theories to explain observed phenomena or experimental results. Experimental problems result from inability to test or investigate a proposed theory. Other problems involve unique events or occurrences that have not repeated themselves with unclear causes.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
The rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities as to why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heating problem:
Why is the Sun's corona so much hotter than the Sun's surface?
Why is the magnetic reconnection effect many orders of magnitude faster than predicted by standard models?
Space weather prediction:
How does the Sun produce strong southward-pointing magnetic fields in solar coronal mass ejections that lead to geomagnetic storms? How can we predict solar and geomagnetic super-storms?
What is the origin of the stellar mass spectrum? That is, why do astronomers observe the same distribution of stellar masses—the initial mass function—apparently regardless of the initial conditions?
Supernova: What is the mechanism by which an implosion of a dying star becomes an explosion?
p-nuclei: What astrophysical process is responsible for the nucleogenesis of these rare isotopes?
Fast radio bursts (FRBs): What causes these transient radio pulses from distant galaxies, lasting a few milliseconds each? Why do some FRBs repeat at unpredictable intervals but many others do not? Several models have been proposed but no one theory has become widely accepted.
The Oh-My-God particle and other ultra-high-energy cosmic rays: What physical processes create cosmic rays whose energy exceeds the GZK cutoff?
Nature of KIC 8462852, commonly known as Tabby's Star: What is the origin of the unusual luminosity changes of this star?
Galactic astronomy and astrophysics
Galaxy rotation problem: Is dark matter (solely) responsible for differences in observed and theoretical speed of stars revolving around the center of galaxies?
Age-metallicity relation in the Galactic disk: Is there a universal age-metallicity relation (AMR) in the Galactic disk (both "thin" and "thick" parts of the disk)? In the local (primarily thin) disk of the Milky Way, there appears to be no evidence of a strong AMR. A sample of 229 nearby "thick" disk stars has been used to investigate the existence of an age-metallicity relation in the Galactic thick disk and indicates that there is an age-metallicity relation present in the thick disk. Stellar ages from asteroseismology confirm the lack of any strong age-metallicity relation in the Galactic disc.
Ultraluminous X-ray sources (ULXs): What powers X-ray sources that are not associated with active galactic nuclei but exceed the Eddington limit of a neutron star or stellar black hole? Are they due to intermediate-mass black holes? Some ULXs are periodic, suggesting non-isotropic emission from a neutron star. Does this apply to all ULXs? How could such a system form and remain stable?
What is the origin of the Galactic Center GeV excess? Is it due to the annihilation of dark matter particles or a new population of millisecond pulsars?
The infrared/TeV crisis: Lack of attenuation of very energetic gamma rays from extragalactic sources.
Black holes
Gravitational singularities: Does general relativity break down in the interior of a black hole due to quantum effects, torsion, or other phenomena?
No-hair theorem:
Do black holes have an internal structure? If so, how might the internal structure be probed?
Supermassive black holes:
What is the origin of the M–sigma relation between supermassive black hole mass and galaxy velocity dispersion?
The formation of high-redshift quasars:
How do the most distant quasars grow their supermassive black holes up to 10^10 solar masses so early in the history of the universe (with redshift greater than 6 to 7)?
Black hole information paradox and black hole radiation:
Do black holes produce thermal radiation, as expected on theoretical grounds?
If so—meaning black holes can evaporate away—what happens to the information stored in them? This appears to be an issue because the unitarity of quantum mechanics does not allow for the destruction of information. Does the radiation stop at some point for black hole remnants?
Firewalls: Do firewalls exist around black holes?
Final parsec problem: Supermassive black holes appear to have merged, and what appears to be a pair in this intermediate range has been observed, in PKS 1302-102. However, theory predicts that when supermassive black holes reach a separation of about one parsec, it may take billions of years to orbit closely enough to merge—greater than the age of the universe.
Naked singularity: Is the cosmic censorship hypothesis correct? Does a naked singularity exist?
Cosmology
Cosmological principle:
Is the universe homogeneous and isotropic at sufficiently large scales, as claimed by the cosmological principle and assumed by all models that use the Friedmann–Lemaître–Robertson–Walker (FLRW) metric, including the current version of the ΛCDM model, or is the universe inhomogeneous or anisotropic?
Is the CMB dipole purely kinematic, or does it signal anisotropy of the universe, resulting in the breakdown of the FLRW metric and the cosmological principle?
Is the Hubble tension evidence that the cosmological principle is false?
If the cosmological principle is correct, is the FLRW metric the correct metric describing the universe?
Are the observations interpreted as the accelerating expansion of the universe correctly interpreted, or are they instead evidence that the cosmological principle is false?
Copernican principle: Are cosmological observations made from Earth representative of observations from the other positions in the universe?
Dark matter:
What is the identity and composition of dark matter?
Is dark matter a particle? If so, is it a WIMP, an axion, the lightest superpartner (LSP), or something else?
Do the phenomena attributed to dark matter point to an extension of gravity instead of some other type of matter?
Dark energy:
What causes the observed accelerating expansion of the universe (the de Sitter phase)?
Are the observations showing the accelerating expansion of the universe correctly interpreted, or are they evidence that the cosmological principle is false?
Why is the energy density of the dark energy component of the same magnitude as the density of matter at present when the two evolve quite differently over time? Could this observation be a coincidence of timing?
Is dark energy a pure cosmological constant or are models of quintessence such as phantom energy applicable?
Do early dark energy models resolve the Hubble tension?
Baryon asymmetry: Why is there far more matter than antimatter in the observable universe?
Cosmological constant problem:
Why does the zero-point energy of the vacuum not cause a large cosmological constant?
Size and shape of the universe:
The diameter of the observable universe is approximately 93 billion light-years; what is the size of the whole universe? Is it infinite?
What is the 3-manifold of comoving space, i.e. of a comoving spatial section of the universe, informally called the "shape" of the universe?
Neither the curvature nor the topology is presently known, though the curvature is known to be "close" to zero on observable scales. The cosmic inflation hypothesis suggests that the shape of the universe may be unmeasurable. Since 2003, Jean-Pierre Luminet, et al., and other groups have suggested that the shape of the universe may be the Poincaré dodecahedral space. Is the shape unmeasurable, the Poincaré space, or another 3-manifold?
Cosmic inflation:
Is the theory of cosmic inflation in the very early universe correct? If so, what are the details of this epoch?
What is the hypothetical scalar field that gave rise to this cosmic inflation?
If inflation happened at a single point, is it self-sustaining through inflation of quantum-mechanical fluctuations and thus ongoing in some extremely distant place?
Horizon problem:
Why is the distant universe so homogeneous when the Big Bang theory seems to predict larger measurable anisotropies of the night sky than those observed?
Cosmological inflation is generally accepted as the solution, but are other possible explanations such as a variable speed of light more appropriate?
Hubble tension: If ΛCDM is correct, why are measurements of the Hubble constant failing to converge?
Axis of evil: Some large features of the microwave sky at distances of over 13 billion light-years appear to be aligned with both the motion and orientation of the solar system. Is this due to systematic errors in processing, contamination of results by local effects, or an unexplained violation of the Copernican principle?
Why is there something rather than nothing? Origin and fate of the universe:
How did the conditions for anything to exist arise?
Is there potentially an infinite amount of unknown astronomical phenomena throughout our entire universe?
Is the universe heading toward a Big Freeze, a Big Rip, a Big Crunch, or a Big Bounce, or is it part of an infinitely recurring cyclic model?
Multiverse:
Is there a multiverse and is such a concept relevant? Are such ideas scientifically-testable or will they forever remain in the realm of pseudoscience? Are such metaphysical questions interpretable in the fields of cosmology, astronomy, physics, or any other scientific discipline?
Are metaphysical approaches such as the anthropic principle necessary to explain unsolved questions such as the cosmological constant problem?
Extraterrestrial life
Is there other life in the Universe? Especially:
Is there other intelligent life?
Is there potentially an infinite amount of extraterrestrial genera throughout our universe? If so, what is the explanation for the Fermi paradox?
Nature of Wow! signal:
Was this singular event a result of any extraterrestrial phenomenon? If so, what was its origin?
See also
Lists of unsolved problems
List of unsolved problems in physics
References
Astronomy-related lists
Astronomy | List of unsolved problems in astronomy | Physics,Astronomy | 2,402 |
35,566,863 | https://en.wikipedia.org/wiki/Angle-sensitive%20pixel | An angle-sensitive pixel (ASP) is a CMOS sensor with a sensitivity to incoming light that is sinusoidal in incident angle.
Principles of operation
ASPs are typically composed of two gratings (a diffraction grating and an analyzer grating) above a single photodiode. ASPs exploit the moire effect and the Talbot effect to gain their sinusoidal light sensitivity. According to the moire effect, if light acted as a particle, at certain incident angles the gaps in the diffraction and analyzer gratings line up, while at other incident angles light passed by the diffraction grating is blocked by the analyzer grating. The amount of light reaching the photodiode would be proportional to a sinusoidal function of incident angle, as the two gratings come in and out of phase with each other with shifting incident angle. The wave nature of light becomes important at small scales such as those in ASPs, meaning a pure-moire model of ASP function is insufficient. However, at half-integer multiples of the Talbot depth, the periodicity of the diffraction grating is recapitulated, and the moire effect is rescued. By building ASPs where the vertical separation between the gratings is approximately equal to a half-integer multiple of the Talbot depth, the sinusoidal sensitivity with incident angle is observed.
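The sinusoidal angle dependence described above can be summarized with a simple response model. The sketch below is illustrative only: the parameter names (b for angular sensitivity, α for the phase set by the lateral offset between the gratings, m for modulation depth) and their default values are assumptions for demonstration, not figures from any particular ASP design.

```python
import numpy as np

def asp_response(theta_deg, b=15.0, alpha=0.0, m=0.8, i0=1.0):
    """Idealized angle-sensitive pixel response to light of unit intensity.

    theta_deg : incident angle in degrees
    b         : angular sensitivity (how quickly the response oscillates with angle)
    alpha     : phase offset set by the lateral shift between the two gratings
    m         : modulation depth (0 = no angle sensitivity, 1 = full modulation)
    i0        : baseline response at zero modulation
    """
    theta = np.deg2rad(theta_deg)
    return i0 * (1.0 + m * np.cos(b * theta + alpha))

# Example: sample the response over +/- 30 degrees of incidence
angles = np.linspace(-30, 30, 7)
print([round(float(v), 3) for v in asp_response(angles)])
```

Pixels with different assumed values of b and α can then be combined to recover both the intensity and the incident angle of the light falling on a small region of the sensor.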
Applications
ASPs can be used in miniature imaging devices. They do not require any focusing elements to achieve sinusoidal incident angle sensitivity, meaning that they can be deployed without a lens to image the near field, or the far field using a Fourier-complete planar Fourier capture array. They can also be used in conjunction with a lens, in which case they perform a depth-sensitive, physics-based wavelet transform of the far-away scene, allowing single-lens 3D photography similar to that of the Lytro camera.
See also
Planar Fourier capture array
References
Image sensors
Integrated circuits | Angle-sensitive pixel | Technology,Engineering | 409 |
78,205,092 | https://en.wikipedia.org/wiki/Loewner%20energy | In complex analysis, the Loewner energy is an invariant of a domain in the complex plane, or equivalently an invariant of the boundary of the domain, a simple closed curve.
According to the uniformization theorem, every domain has a conformal mapping to one of three uniform Riemann surfaces: an open unit disk, the complex plane, or the Riemann sphere. In 1923 work on the Bieberbach conjecture, Charles Loewner showed that (a suitable normalization of) this uniform mapping can be described as the solution to the Loewner differential equation, which depends on a certain real-valued function, the driving function, defined on the boundary of the domain. The Loewner energy was originally defined by Yilin Wang and (independently) by Peter Friz and Atul Shekhar as the Dirichlet energy of this driving function. In later work, Wang found an equivalent definition of the Loewner energy as the Dirichlet energy of the logarithmic derivative of the conformal mapping itself.
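Written out, the original definition takes the following form. This is a minimal sketch assuming the driving function is denoted W(t) and the curve is parametrized by capacity; the notation is a common convention rather than that of a specific source, and the energy is taken to be infinite when W is not absolutely continuous.

```latex
% Loewner energy of a chord \gamma, expressed as the Dirichlet energy of its
% driving function W(t) (assumed absolutely continuous; otherwise I(\gamma)=\infty).
I(\gamma) \;=\; \frac{1}{2}\int_{0}^{\infty}\left(\frac{\mathrm{d}W(t)}{\mathrm{d}t}\right)^{2}\,\mathrm{d}t
```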
This energy is bounded when the boundary of the domain is a Weil–Petersson curve, a kind of quasicircle obeying an additional smoothness condition.
References
Complex analysis | Loewner energy | Mathematics | 246 |
2,404,385 | https://en.wikipedia.org/wiki/Friis%20transmission%20equation | The Friis transmission formula is used in telecommunications engineering, equating the power at the terminals of a receive antenna to the product of the power density of the incident wave and the effective aperture of the receiving antenna under idealized conditions, given another antenna some distance away transmitting a known amount of power. The formula was presented first by Danish-American radio engineer Harald T. Friis in 1946. The formula is sometimes referenced as the Friis transmission equation.
Friis' original formula
Friis' original idea behind his transmission formula was to dispense with the usage of directivity or gain when describing antenna performance. In their place is the descriptor of antenna capture area as one of two important parts of the transmission formula that characterizes the behavior of a free-space radio circuit.
This leads to his published form of his transmission formula:
P_r / P_t = (A_r A_t) / (d² λ²)
where:
P_t is the power fed into the transmitting antenna input terminals;
P_r is the power available at receiving antenna output terminals;
A_r is the effective aperture area of the receiving antenna;
A_t is the effective aperture area of the transmitting antenna;
d is the distance between antennas;
λ is the wavelength of the radio frequency;
P_t and P_r are in the same units of power;
A_r, A_t, d, and λ are in the same units.
The distance d must be large enough to ensure a plane wave front at the receive antenna, sufficiently approximated by d ≥ 2a²/λ, where a is the largest linear dimension of either of the antennas.
Friis stated the advantage of this formula over other formulations is the lack of numerical coefficients to remember, but does require the expression of transmitting antenna performance in terms of power flow per unit area instead of field strength and the expression of receiving antenna performance by its effective area rather than by its power gain or radiation resistance.
Contemporary formula
Few follow Friis' advice on using antenna effective area to characterize antenna performance over the contemporary use of directivity and gain metrics. Replacing the effective antenna areas with their gain counterparts yields
P_r / P_t = G_t G_r (λ / (4πd))²
where G_t and G_r are the antenna gains (with respect to an isotropic radiator) of the transmitting and receiving antennas respectively, λ is the wavelength representing the effective aperture area of the receiving antenna, and d is the distance separating the antennas.
To use the equation as written, the antenna gains are unitless values, and the units for wavelength (λ) and distance (d) must be the same.
To calculate using decibels, the equation becomes:
P_r = P_t + G_t + G_r + 20 log10(λ / (4πd))
where:
P_t is the power delivered to the terminals of an isotropic transmit antenna, expressed in dB.
P_r is the available power at the receive antenna terminals, equal to the product of the power density of the incident wave and the effective aperture area of the receiving antenna (proportional to λ²), in dB.
G_t is the gain of the transmitting antenna in the direction of the receiving antenna, in dB.
G_r is the gain of the receiving antenna in the direction of the transmitting antenna, in dB.
The simple form applies under the following conditions:
d ≫ λ, so that each antenna is in the far field of the other.
The antennas are correctly aligned and have the same polarization.
The antennas are in unobstructed free space, with no multipath propagation.
The bandwidth is narrow enough that a single value for the wavelength can be used to represent the whole transmission.
Directivities are both for isotropic radiators (dBi).
Powers are both presented in the same units: either both dBm or both dBW.
The ideal conditions are almost never achieved in ordinary terrestrial communications, due to obstructions, reflections from buildings, and most importantly reflections from the ground. One situation where the equation is reasonably accurate is in satellite communications when there is negligible atmospheric absorption; another situation is in anechoic chambers specifically designed to minimize reflections.
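As a concrete illustration of the contemporary formula and its decibel form, the sketch below evaluates both under the idealized conditions listed above. The function names and the example frequency, gains, and distance are illustrative assumptions, not part of Friis' formulation.

```python
import math

def friis_received_power_w(p_t_w, g_t, g_r, wavelength_m, distance_m):
    """Received power (watts) from the contemporary Friis formula.

    p_t_w        : power delivered to the transmitting antenna, in watts
    g_t, g_r     : linear (unitless) gains of the transmit and receive antennas
    wavelength_m : wavelength, in metres
    distance_m   : separation between the antennas, in metres (far field assumed)
    """
    return p_t_w * g_t * g_r * (wavelength_m / (4 * math.pi * distance_m)) ** 2

def friis_received_power_dbm(p_t_dbm, g_t_dbi, g_r_dbi, wavelength_m, distance_m):
    """Same calculation in decibel form: Pr = Pt + Gt + Gr + 20*log10(lambda/(4*pi*d))."""
    return p_t_dbm + g_t_dbi + g_r_dbi + 20 * math.log10(wavelength_m / (4 * math.pi * distance_m))

# Example: 1 W (30 dBm) transmitted at roughly 2.4 GHz (wavelength ~0.125 m),
# 6 dBi antennas (linear gain 4), 100 m apart.
print(friis_received_power_w(1.0, 4.0, 4.0, 0.125, 100.0))     # ~1.6e-7 W
print(friis_received_power_dbm(30.0, 6.0, 6.0, 0.125, 100.0))  # ~-38 dBm
```

Both forms give the same result, since the decibel version is just the logarithm of the linear one.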
Derivation
There are several methods to derive the Friis transmission equation. In addition to the usual derivation from antenna theory, the basic equation also can be derived from principles of radiometry and scalar diffraction in a manner that emphasizes physical understanding. Another derivation is to take the far-field limit of the near-field transmission integral.
See also
Link budget
Radio propagation model
References
Further reading
Harald T. Friis, "A Note on a Simple Transmission Formula," Proceedings of the I.R.E. and Waves and Electrons, May, 1946, pp 254–256.
John D. Kraus, "Antennas," 2nd Ed., McGraw-Hill, 1988.
Kraus and Fleisch, "Electromagnetics," 5th Ed., McGraw-Hill, 1999.
D.M. Pozar, "Microwave Engineering." 2nd Ed., Wiley, 1998.
External links
Derivation of Friis Transmission Formula
Friis Transmission Equation Calculator
Another Friis Transmission Equation Calculator
Seminar Notes by Laasonen
Antennas
Radio frequency propagation model | Friis transmission equation | Engineering | 956 |
1,816,147 | https://en.wikipedia.org/wiki/The%20Invisible%20Enemy%20%28Doctor%20Who%29 | The Invisible Enemy is the second serial of the 15th season of the British science fiction television series Doctor Who, which was first broadcast in four weekly parts on BBC1 from 1 to 22 October 1977. The serial introduced the robot dog K9, voiced by John Leeson. In the serial, an intelligent virus intends to spread across the universe after finding a suitable spawning location on the moon Titan.
Plot
Some human space travellers are cruising near the outer planets of the solar system with their ship on autopilot. The TARDIS is travelling through the same region. The crews of both ships are infected by a sentient virus, which chooses The Doctor to be the host of its "mind," the Nucleus of the Swarm. The Nucleus declares Leela a reject and orders her killed. The Doctor manages to break free of his infection and tells Leela how to get the TARDIS to the nearest medical centre. At the medical station, the Doctor's doctor, Professor Marius, introduces the group to K9, a robotic dog he made to replace the real dog he had to leave on Earth.
Leela and the Doctor decide to create clones of themselves, which will then be shrunk and inserted into the Doctor. There they will destroy the Nucleus and escape through a tear duct. In the meantime, Leela and K9 fight off the infected staff of the hospital. The plan goes awry, allowing the Nucleus to escape and become human sized. The Nucleus and the infected staff leave for Titan Base so the Nucleus can spawn.
The Doctor realises he is cured since Leela's clone introduced her immunity factor into his bloodstream. He replicates it and gives it to Prof. Marius. The Doctor, Leela, and K9 proceed to Titan Base in the TARDIS.
They fight off the infected humans, but are again without sufficient weaponry to destroy the Nucleus, or its many children, which are about to hatch as "macro-sized" beings, like the newly macro-sized Nucleus. The Doctor jams the door they are behind and rigs a gun to fire into a cloud of oxygen gas he is releasing and escapes. As intended, when the Swarm finally forces open the door, the blaster fires, igniting the oxygen in Titan's methane atmosphere and destroying the Swarm and the base.
When they return to the hospital, they thank Prof. Marius for the use of K9, who has ably assisted them. Prof. Marius offers K9 to the Doctor, as he is due to return to Earth, and the Doctor and Leela leave with their new companion in the TARDIS.
Production
Working titles for this story included The Enemy Within, The Invader Within and The Invisible Invader. It was not decided until late in the production that K9 was to be a new companion. The decision to use it in multiple serials was made partly to offset the expense that had gone into making the prop.
The Invisible Enemy was filmed and recorded in April 1977. In one scene there is an obvious crack in a wall before it is fired at by K9; the crack was originally concealed, but the scene was reshot with little time left to repair the join.
Cast notes
Michael Sheard (Lowe) makes his fourth of six appearances in Doctor Who, having made previous appearances in The Ark (1966), The Mind of Evil (1971) and Pyramids of Mars (1975). Brian Grellis previously played Sheprah in Revenge of the Cybermen (1975) and would later appear as the Megaphone Man in Snakedance (1983). Frederick Jaeger (Marius) also played Jano in The Savages (1966) and Professor Sorenson in Planet of Evil (1975).
Broadcast and reception
The story was repeated on BBC1 on consecutive Thursdays from 13 July to 3 August 1978, achieving ratings of 4.9, 5.5, 5.1 and 6.8 million viewers, respectively.
Reviewing the serial for The Times newspaper on the Monday following the second episode's transmission, critic Stanley Reynolds gave the story a generally negative reception. He also pointed out that in ITV regions where the series was competing with Man from Atlantis in the Saturday early-evening slot, it was now losing the ratings war.
More recent reviews have also not been positive. Paul Cornell, Martin Day and Keith Topping wrote of the serial in The Discontinuity Guide (1995), "An ambitious project which has the look of a grand folly due to budget constraints and the tongue-in-cheek script... K9 makes a quite impressive debut, though, as with many aspects of The Invisible Enemy, the ideas are better than the realisation." In The Television Companion (1998), David J. Howe and Stephen James Walker called it one of the "weakest" Fourth Doctor stories, mostly consisting of "clichéd and undemanding action-adventure material". They also noted the inconsistent visual effects.
In 2010, Mark Braxton of Radio Times awarded it two stars out of five, contrasting it with the Philip Hinchcliffe era and describing it as "a kidified, Poundland Star Wars". He felt "many of the effects are excellent" but observed a "precarious juxtaposition" between good and bad effects and "the ambition of the serial as a whole". He praised the story as a "romping yarn" which "brings out the best in veteran designer Barry Newbery", but criticised "unbelievably incompetent" action scenes, as well as "harsh lighting" and "pristine white sets". He also commented on Louise Jameson as looking "unsurprisingly ill at ease" despite giving "her usual 100 per cent". DVD Talk's John Sinnott disliked the way K9 was used too conveniently and found the plot too similar to Fantastic Voyage (1966), but less well done. He praised the visual effects of the inside of the Doctor's head, but criticised the other sets.
Commercial releases
In print
A novelisation of this serial, written by Terrance Dicks, was published by Target Books in March 1979.
Home media
The story was released on VHS in September 2002. The DVD was released on 16 June 2008 with the spin-off "K-9 and Company" in a double pack called "K9 Tales". This serial was scheduled to be released as part of the Doctor Who DVD Files in issue 133 on 5 February 2014. In March 2024, the story was released again in an upgraded format for Blu-ray, with new special effects, being included with the other stories from Season 15 in the Doctor Who - The Collection Box Set.
References
External links
Target novelisation
On Target — Doctor Who and the Invisible Enemy
Fourth Doctor serials
Television episodes about cloning
Fiction about size change
Fiction set on Titan (moon)
Doctor Who serials novelised by Terrance Dicks
1977 British television episodes
Television episodes written by Bob Baker (scriptwriter)
Fiction set in the 5th millennium
Television episodes set in hospitals | The Invisible Enemy (Doctor Who) | Physics,Mathematics | 1,424 |
62,877,206 | https://en.wikipedia.org/wiki/Auricular%20splint | An auricular splint (AS) or ear splint is a custom-made medical device that is used to maintain auricular projection and dimensions following second stage auricular reconstruction. The AS is made from ethylene-vinyl acetate (EVA), which is typically used to make custom-made mouthguards and was developed by a team from Great Ormond Street Hospital in the United Kingdom.
History
The auricle is typically reconstructed using autogenous cartilage, which is the most reliable material for producing the best results with the least complications. Cartilage from the knee and contralateral auricular cartilage from the concha have also been reported but costal cartilage is typically used as it is the only donor site that provides sufficient tissue to fabricate the complete auricular framework. The four main elements to consider when assessing the final reconstructed auricle are:
The symmetry of size of the auricle
The projection of the auricle
The adequacy of the temporoauricular sulcus (the depression behind the auricle next to the head)
The contour of the different subunits of the reconstructed auricle
In order to prevent compression during sleep and to prevent the grafted skin from contracting, the use of a Foley catheter, Reston Foam, silicone foam, polysiloxane and dental impression compound has been described. The auricular splint was developed with the aim of overcoming the drawbacks associated with these methods.
Technique
The auricular splint (AS) is easy to fit and remove, self-retaining, lightweight and easy to camouflage due to its transparency. The AS is made from ethylene-vinyl acetate (EVA), which is inert, non-toxic and non-absorbent, sufficiently elastic to allow it to be fitted and removed but sufficiently rigid to avoid breakage.
The concept was first presented at the 2nd Congress of the International Society for Auricular Reconstruction in Beijing, China in September 2017 and published in the Annals of Plastic Surgery the following year.
The first stage involves taking an impression of the reconstructed auricle with Soft Putty Elastomer, which is cast in dental stone to make a model of the reconstructed auricle. The splint is made by thermoforming a 4mm sheet of transparent ethylene-vinyl acetate (EVA) over the stone model. The edges of the splint are trimmed and polished using the outline on the model as a guide.
The splint has been found to maintain auricular projection and other key dimensions up to the six-month post-operative follow-up.
References
2017 in science
2017 introductions
Congenital disorders of ears
Ear
Ear procedures
Ear surgery
Medical devices
Plastic surgery | Auricular splint | Biology | 571 |
26,872,956 | https://en.wikipedia.org/wiki/Trope%20%28mathematics%29 | In geometry, trope is an archaic term for a singular (meaning special) tangent space of a variety, often a quartic surface. The term may have been introduced by , who defined it as "the reciprocal term to node". It is not easy to give a precise definition, because the term is used mainly in older books and papers on algebraic geometry, whose definitions are vague and different, and use archaic terminology. The term trope is used in the theory of quartic surfaces in projective space, where it is sometimes defined as a tangent space meeting the quartic surface in a conic; for example Kummer's surface has 16 tropes.
, describes a trope as a tangent plane where the envelope of nearby tangent planes forms a conic, rather than a plane pencil which we would expect for a generic point. The tangent plane would be tangent to the quartic along the conic, implying that the Gauss map would have a singular point.
See also
Glossary of classical algebraic geometry
References
See page 202 for an early use of the term "trope".
Algebraic geometry | Trope (mathematics) | Mathematics | 228 |
4,352,963 | https://en.wikipedia.org/wiki/List%20of%20Balzan%20Prize%20recipients | This is a list of recipients of the Balzan Prize, one of the world's most prestigious academic awards. The International Balzan Prize Foundation awards four annual monetary prizes to people or organizations who have made outstanding achievements in the humanities, natural sciences, culture, and peace on an international level. The Prizes are awarded in four subject areas: "two in literature, the moral sciences and the arts" and "two in the physical, mathematical and natural sciences and medicine." The special Prize for Humanity, Peace and Fraternity is presented at intervals of every three years or longer.
1960s–1970s
1961
Nobel Foundation (Sweden) --- Humanity, peace and brotherhood among peoples
1962
Andrey Kolmogorov (Soviet Union) --- Mathematics
Karl von Frisch (Austria) --- Biology
Paul Hindemith (Germany) --- Music
Samuel Eliot Morison (United States) --- History
Pope John XXIII (Vatican) --- Humanity, peace and brotherhood among peoples
1978
Mother Teresa of Calcutta (India) --- Humanity, peace and brotherhood among peoples
1979
Ernest Labrousse (France) and Giuseppe Tucci (Italy) --- History
Jean Piaget (Switzerland) --- Social and political sciences
Torbjörn Caspersson (Sweden) --- Biology
1980s
1980
Enrico Bombieri (Italy) --- Mathematics
Hassan Fathy (Egypt) --- Architecture and town planning
Jorge Luis Borges (Argentina) --- Philology, linguistics and literary criticism
1981
Dan McKenzie (United Kingdom), Drummond Matthews (United Kingdom) and Frederick Vine (United Kingdom) --- Geology and geophysics
Josef Pieper (Germany) --- Philosophy
Paul Reuter (France) --- International public law
1982
Jean-Baptiste Duroselle (France) --- Social sciences
Kenneth Vivian Thimann (United Kingdom / United States) --- Pure and applied botany
Massimo Pallottino (Italy) --- Sciences of antiquity
1983
Edward Shils (United States) --- Sociology
Ernst Mayr (Germany / United States) --- Zoology
Francesco Gabrieli (Italy) --- Oriental studies
1984
Jan Hendrik Oort (Netherlands) --- Astrophysics
Jean Starobinski (Switzerland) --- History and criticism of the literatures
Sewall Wright (United States) --- Genetics
1985
Ernst H. J. Gombrich (Austria / United Kingdom) --- History of western art
Jean-Pierre Serre (France) --- Mathematics
1986
(France) --- Basic human rights
Otto Neugebauer (Austria / United States) --- History of science
Roger Revelle (United States) --- Oceanography / climatology
United Nations High Commissioner for Refugees (UNHCR) --- Humanity, peace and brotherhood among peoples
1987
Jerome Seymour Bruner (United States) --- Human psychology
Phillip V. Tobias (South Africa) --- Physical anthropology
Richard W. Southern (United Kingdom) --- Medieval history
1988
Michael Evenari (Israel) and Otto Ludwig Lange (Germany) --- Applied botany (incl. ecological aspects)
René Étiemble (France) --- Comparative literature
Shmuel Noah Eisenstadt (Israel) --- Sociology
1989
Emmanuel Lévinas (France / Lithuania) --- Philosophy
Leo Pardi (Italy) --- Ethology
Martin John Rees (United Kingdom) --- High energy astrophysics
1990s
1990
James Freeman Gilbert (United States) --- Geophysics (solid earth)
Pierre Lalive d'Epinay (Switzerland) --- Private international law
Walter Burkert (Germany) --- Study of the ancient world (Mediterranean area)
1991
György Ligeti (Hungary / Austria) --- Music
John Maynard Smith (United Kingdom) --- Genetics and evolution
Vitorino Magalhães Godinho (Portugal) --- History: The emergence of Europe in the 15th and 16th centuries
Abbé Pierre (Henri Grouès) (France) --- Humanity, peace and brotherhood among peoples
1992
Armand Borel (Switzerland) --- Mathematics
(Gambia) --- Preventive medicine
Giovanni Macchia (Italy) --- History and criticism of the literatures
1993
Jean Leclant (France) --- Art and archaeology of the ancient world
Lothar Gall (Germany) --- History: societies of the 19th and 20th centuries
Wolfgang H. Berger (Germany / United States) --- Paleontology with special reference to oceanography
1994
Fred Hoyle (United Kingdom) and Martin Schwarzschild (Germany / United States) --- Astrophysics (evolution of stars)
Norberto Bobbio (Italy) --- Law and political science (governments and democracy)
(France) --- Biology (cell structure with special reference to the nervous system)
1995
Alan J. Heeger (United States) --- Science of new non-biological materials
Carlo M. Cipolla (Italy) --- Economic history
Yves Bonnefoy (France) --- Art history and art criticism (as applied to European art from the Middle Ages to our times)
1996
(Germany) --- History: medieval cultures
Arnt Eliassen (Norway) --- Meteorology
Stanley Hoffmann (Austria / United States / France) --- Political sciences: contemporary international relations
International Committee of the Red Cross (ICRC) --- Humanity, peace and brotherhood among peoples
1997
Charles Coulston Gillispie (United States) --- History and philosophy of science
Stanley Jeyaraja Tambiah (Sri Lanka / United States) --- Social sciences: social anthropology
Thomas Wilson Meade (United Kingdom) --- Epidemiology
1998
Andrzej Walicki (Poland / United States) --- History: the cultural and social history of the Slavonic world from the reign of Catherine the Great to the Russian revolutions of 1917
Harmon Craig (United States) --- Geochemistry
Robert McCredie May (United Kingdom / Australia) --- Biodiversity
1999
John Elliott (United Kingdom) --- History 1500-1800
Luigi Luca Cavalli-Sforza (Italy / United States) --- Science of human origins
Mikhail Gromov (Russia / France) --- Mathematics
Paul Ricœur (France) --- Philosophy
2000s
2000
Ilkka Hanski (Finland) --- Ecological sciences
Martin Litchfield West (United Kingdom) --- Classical antiquity
Michael Stolleis (Germany) --- Legal history since 1500
Michel G.E. Mayor (Switzerland) --- Instrumentation and techniques in astronomy and astrophysics
Abdul Sattar Edhi (Pakistan) --- Humanity, peace and brotherhood among peoples
2001
Claude Lorius (France) --- Climatology
James Sloss Ackerman (United States) --- History of architecture (including town planning and landscape design)
Jean-Pierre Changeux (France) --- Cognitive neurosciences
Marc Fumaroli (France) --- Literary history and criticism (post 1500)
2002
Anthony Grafton (United States) --- History of the humanities
Dominique Schnapper (France) --- Sociology
Walter J. Gehring (Switzerland) --- Developmental biology
Xavier Le Pichon (France) --- Geology
2003
Eric Hobsbawm (United Kingdom) --- European history since 1900
Reinhard Genzel (Germany) --- Infrared astronomy
Serge Moscovici (France) --- Social psychology
Wen-Hsiung Li (Taiwan / United States) --- Genetics and evolution
2004
Andrew Colin Renfrew (United Kingdom) --- Prehistoric Archaeology
Michael Marmot (United Kingdom) --- Epidemiology
Nikki R. Keddie (United States) --- The Islamic world from the end of the 19th to the end of the 20th century
Pierre Deligne (Belgium) --- Mathematics
Community of Sant'Egidio --- Humanity, peace and brotherhood among peoples
2005
Lothar Ledderose (Germany) --- History of the art of Asia
Peter Hall (United Kingdom) --- The social and cultural history of cities since the beginning of the 16th century
Peter R. Grant (United Kingdom) and Rosemary Grant (United States) --- Population biology
Russell J. Hemley (United States) and Ho-kwang (David) Mao (China) --- Mineral physics
2006
Ludwig Finscher (Germany) --- History of western music since 1600
Quentin Skinner (United Kingdom) --- Political thought: history and theory
Andrew Lange (United States) and (Italy) --- Observational astronomy and astrophysics
Elliott M. Meyerowitz (United States) and Christopher R. Somerville (Canada) --- Plant molecular genetics
2007
Sumio Iijima (Japan) --- Nanoscience
Bruce A. Beutler (United States) and Jules A. Hoffmann (France) --- Innate Immunity
Michel Zink (France) --- European Literature (1000 - 1500)
Rosalyn Higgins (United Kingdom) --- International Law since 1945
Karlheinz Böhm (Austria) --- Humanity, peace and brotherhood among peoples
2008
Maurizio Calvesi (Italy) --- The Visual Arts since 1700
Thomas Nagel (Serbia / United States) --- Moral Philosophy
Ian H. Frazer (Australia) --- Preventive Medicine, including Vaccination
Wallace S. Broecker (United States) --- Science of Climate Change
2009
Terence Cave (United Kingdom) --- Literature since 1500
Michael Grätzel (Germany / Switzerland) --- Science of New Materials
Brenda Milner (United Kingdom / Canada) --- Cognitive Neurosciences
Paolo Rossi Monti (Italy) --- History of Science
2010s
2010
(Germany) --- History of theatre in all its aspects
Carlo Ginzburg (Italy) --- European History (1400 - 1700)
Jacob Palis (Brazil) --- Mathematics (pure or applied)
Shinya Yamanaka (Japan) --- Stem Cells: Biology and potential applications
2011
Peter Brown (Ireland) --- Ancient History (The Graeco-Roman World)
Bronislaw Baczko (Poland) --- Enlightenment Studies
Russell Scott Lande (United States / United Kingdom) --- Theoretical Biology or Bioinformatics
Joseph Ivor Silk (United States / United Kingdom) --- The Early Universe (From the Planck Time to the First Galaxies)
2012
Ronald Dworkin (United States) --- Jurisprudence
Reinhard Strohm (Germany) --- Musicology
Kurt Lambeck (Australia) --- Solid Earth Sciences, with emphasis on interdisciplinary research
David Baulcombe (United Kingdom) --- Epigenetics
2013
André Vauchez (France) --- Medieval History
Manuel Castells (Spain) --- Sociology
Alain Aspect (France) --- Quantum Information Processing and Communication
Pascale Cossart (France) --- Infectious diseases: basic and clinical aspects
2014
Mario Torelli (Italy) --- Classical Archaeology
Ian Hacking (Canada) --- Epistemology and Philosophy of Mind
G. David Tilman (United States) --- Basic and/or applied Plant Ecology
Dennis Sullivan (United States) --- Mathematics (pure or applied)
Vivre en Famille (France) --- Humanity, peace and brotherhood among peoples
2015
Hans Belting (Germany) --- History of European Art (1300-1700)
Joel Mokyr (Netherlands / United States / Israel) --- Economic History
Francis Halzen (Belgium / United States) --- Astroparticle Physics including neutrino and gamma-ray observation
David Karl (United States) --- Oceanography
2016
Piero Boitani (Italy) --- Comparative Literature
Reinhard Jahn (Germany) --- Molecular and Cellular Neuroscience, including neurodegenerative and developmental aspects
Federico Capasso (Italy) --- Applied Photonics
Robert Keohane (United States) --- International Relations: History and Theory
2017
Aleida Assmann (Germany) and Jan Assmann (Germany) --- Collective Memory
Bina Agarwal (India / United Kingdom) --- Gender Studies
Robert D. Schreiber (United States) and James P. Allison (United States) --- Immunological Approaches in Cancer Therapy
Michaël Gillon (Belgium) --- The Sun's Planetary System and Exoplanets
2018
Éva Kondorosi (Hungary / France) --- Chemical Ecology
Detlef Lohse (Germany) --- Fluid Dynamics
Jürgen Osterhammel (Germany) --- Global History
Marilyn Strathern (United Kingdom) --- Social Anthropology
Terre des hommes Foundation (Switzerland) --- Humanity, Peace and Fraternity among Peoples
2019
Jacques Aumont (France) --- Film Studies
Michael Cook (United States / United Kingdom) --- Islamic Studies
Luigi Ambrosio (Italy) --- Theory of Partial Differential Equations
Erika von Mutius, , and (all Germany) --- Pathophysiology of respiration: from basic sciences to the bedside
2020s
2020
Susan Trumbore (US / Germany) --- Earth System Dynamics
Jean-Marie Tarascon (France) --- Environmental Challenges: Materials Science for Renewable Energy
Joan Martinez Alier (Spain) --- Environmental Challenges: Responses from the Social Sciences and the Humanities
Antônio Augusto Cançado Trindade (Brazil) --- Human Rights
2021
Saul Friedländer (France / US) --- Holocaust and Genocide Studies
Jeffrey I. Gordon (US) --- Microbiome in Health and Disease
Alessandra Buonanno (Italy / US) and Thibault Damour (France) --- Gravitation: physical and astrophysical aspects
Giorgio Buccellati and Marilyn Kelly-Buccellati (Italy / USA) --- Art and Archaeology of the Ancient Near East
2022
Robert Langer (US) --- Biomaterials for Nanomedicine and Tissue Engineering
Martha Nussbaum (US) --- Moral Philosophy
Dorthe Dahl-Jensen (Denmark) and Hans Oerlemans (Netherlands) --- Glaciation and Ice-Sheet Dynamics
Philip Bohlman (US) --- Ethnomusicology
2023
David Damrosch (US) --- World Literature
Jean-Jacques Hublin (France) --- Evolution of Humankind: Paleoanthropology
Eske Willerslev (Denmark) --- Evolution of Humankind: Ancient DNA and Human Evolution
Heino Falcke (Germany) --- High resolution images: from planetary to cosmic objects
2024
John Braithwaite (Australia) --- Restorative Justice
Lorraine Daston (US / Germany) --- History of Modern and Contemporary Science
Michael N. Hall (US / Switzerland) --- Biological Mechanisms of Ageing
Omar M. Yaghi (US) --- Nanoporous Materials for Environmental Applications
References
External links
Science-related lists
Science and technology awards
Lists of award winners
Awards established in 1961 | List of Balzan Prize recipients | Technology | 3,154 |
9,423,860 | https://en.wikipedia.org/wiki/Theory-theory | The theory-theory (or theory theory) is a scientific theory relating to the human development of understanding about the outside world. This theory asserts that individuals hold a basic or 'naïve' theory of psychology ("folk psychology") to infer the mental states of others, such as their beliefs, desires or emotions. This information is used to understand the intentions behind that person's actions or predict future behavior. The term 'perspective taking' is sometimes used to describe how one makes inferences about another person's inner state using theoretical knowledge about the other's situation.
This approach has become popular with psychologists as it gives a basis from which to explore human social understanding. Beginning in the mid-1980s, several influential developmental psychologists began advocating the theory theory: the view that humans learn through a process of theory revision closely resembling the way scientists propose and revise theories. Children observe the world, and in doing so, gather data about the world's true structure. As more data accumulates, children can revise their naive theories accordingly. Children can also use these theories about the world's causal structure to make predictions, and possibly even test them out. This concept is described as the 'Child Scientist' theory, proposing that a series of personal scientific revolutions are required for the development of theories about the outside world, including the social world.
In recent years, proponents of Bayesian learning have begun describing the theory theory in a precise, mathematical way.
The concept of Bayesian learning is rooted in the assumption that children and adults learn through a process of theory revision; that is, they hold prior beliefs about the world but, when receiving conflicting data, may revise these beliefs depending upon their strength.
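As an illustration of this idea, the sketch below applies Bayes' rule to a single hypothetical observation that conflicts with a naive theory. The prior and likelihood values are invented for illustration and do not come from any developmental study.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Revise the probability that a theory is correct after one observation.

    prior               : prior probability that the naive theory is correct
    likelihood_if_true  : probability of the observation if the theory is correct
    likelihood_if_false : probability of the observation if the theory is wrong
    """
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# A strongly held theory (prior 0.9) survives one piece of conflicting data better
# than a weakly held one (prior 0.5), illustrating revision "depending upon strength".
print(round(bayes_update(0.9, 0.2, 0.8), 3))  # 0.692
print(round(bayes_update(0.5, 0.2, 0.8), 3))  # 0.2
```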
Child development
Theory-theory states that children naturally attempt to construct theories to explain their observations. As all humans do, children seek to find explanations that help them understand their surroundings. They learn through their own experiences as well as through their observations of others' actions and behaviors.
Through their growth and development, children will continue to form intuitive theories; revising and altering them as they come across new results and observations. Several developmentalists have conducted research of the progression of their theories, mapping out when children start to form theories about certain subjects, such as the biological and physical world, social behaviors, and others' thoughts and minds ("theory of mind"), although controversy remains over when these shifts in theory-formation occur.
As part of their investigative process, children often ask questions, frequently posing "Why?" to adults, not seeking a technical and scientific explanation but instead seeking to investigate the relation of the concept in question to themselves, as part of their egocentric view. In a study where Mexican-American mothers were interviewed over a two-week period about the types of questions their preschool children ask, researchers discovered that the children asked their parents more about biology and social behaviors rather than nonliving objects and artifacts. In their questions, the children were mostly ambiguous, unclear if they desired an explanation of purpose or cause. Although parents will usually answer with a causal explanation, some children found the answers and explanations inadequate for their understanding, and as a result, they begin to create their own theories, particularly evident in children's understanding of religion.
This theory also plays a part in Vygotsky's social learning theory, also called modeling. Vygotsky claims that humans, as social beings, learn and develop by observing others' behavior and imitating them. In this process of social learning, prior to imitation, children will first pose inquiries and investigate why adults act and behave in a particular way. Afterwards, if the adult succeeds at the task, the child will likely copy the adult, but if the adult fails, the child will choose not to follow the example.
Comparison with other theories
Theory of mind (ToM)
Theory-theory is closely related to theory of mind (ToM), which concerns mental states of people, but differs from ToM in that the full scope of theory-theory also concerns mechanical devices or other objects, beyond just thinking about people and their viewpoints.
Simulation theory
In the scientific debate in mind reading, theory-theory is often contrasted with simulation theory, an alternative theory which suggests simulation or cognitive empathy is integral to our understanding of others.
References
Cognitive psychology
Child development | Theory-theory | Biology | 869 |
578,631 | https://en.wikipedia.org/wiki/High-bandwidth%20Digital%20Content%20Protection | High-bandwidth Digital Content Protection (HDCP) is a form of digital copy protection developed by Intel Corporation to prevent copying of digital audio and video content as it travels across connections. Types of connections include DisplayPort (DP), Digital Visual Interface (DVI), and High-Definition Multimedia Interface (HDMI), as well as less popular or now deprecated protocols like Gigabit Video Interface (GVIF) and Unified Display Interface (UDI).
The system is meant to stop HDCP-encrypted content from being played on unauthorized devices or devices which have been modified to copy HDCP content. Before sending data, a transmitting device checks that the receiver is authorized to receive it. If so, the transmitter encrypts the data to prevent eavesdropping as it flows to the receiver.
In order to make a device that plays HDCP-enabled content, the manufacturer must obtain a license for the patent from Intel subsidiary Digital Content Protection LLC, pay an annual fee, and submit to various conditions. For example, the device cannot be designed to copy; it must "frustrate attempts to defeat the content protection requirements"; it must not transmit high definition protected video to non-HDCP receivers; and DVD-Audio works can be played only at CD-audio quality by non-HDCP digital audio outputs (analog audio outputs have no quality limits). If the device has a feature like Intel Management Engine disabled, HDCP will not work.
Cryptanalysis researchers demonstrated flaws in HDCP as early as 2001. In September 2010, an HDCP master key that allows for the generation of valid device keys was released to the public, rendering the key revocation feature of HDCP useless. Intel has confirmed that the crack is real, and believes the master key was reverse engineered rather than leaked. In practical terms, the impact of the crack has been described as "the digital equivalent of pointing a video camera at the TV", and of limited importance for consumers because the encryption of high-definition discs has been attacked directly, with the loss of interactive features like menus. Intel threatened to sue anyone producing an unlicensed device.
Specification
HDCP uses three systems:
Authentication prevents non-licensed devices from receiving content.
Encryption of the data sent over DisplayPort, DVI, HDMI, GVIF, or UDI interfaces prevents eavesdropping of information and man-in-the-middle attacks.
Key revocation prevents devices that have been compromised and cloned from receiving data.
Each HDCP-capable device has a unique set of 40 56-bit keys. Failure to keep them secret violates the license agreement. For each set of values, a special private key called a KSV (Key Selection Vector) is created. Each KSV consists of 40 bits (one bit for each HDCP key), with 20 bits set to 0 and 20 bits set to 1.
During authentication, the parties exchange their KSVs under a procedure called Blom's scheme. Each device adds its own secret keys together (using unsigned addition modulo 2^56) according to a KSV received from another device. Depending on the order of the bits set to 1 in the KSV, a corresponding secret key is used or ignored in the addition. The generation of keys and KSVs gives both devices the same 56-bit number, which is later used to encrypt data.
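The following toy sketch illustrates why this exchange gives both devices the same number: if every device's 40 keys are derived from one secret symmetric matrix, then summing the keys selected by the other party's KSV evaluates a symmetric bilinear form, so both sides compute the same value. The matrix construction shown here is an illustrative Blom-style stand-in, not the actual key-issuance procedure used by Digital Content Protection LLC.

```python
import random

KEY_BITS = 56
NUM_KEYS = 40
MOD = 1 << KEY_BITS  # unsigned addition modulo 2**56

rng = random.Random(0)

def make_ksv():
    """A KSV is a 40-bit value with exactly 20 bits set to 1 and 20 set to 0."""
    ones = rng.sample(range(NUM_KEYS), 20)
    return [1 if i in ones else 0 for i in range(NUM_KEYS)]

# Toy stand-in for the licensing authority's secret symmetric master matrix.
master = [[rng.randrange(MOD) for _ in range(NUM_KEYS)] for _ in range(NUM_KEYS)]
for i in range(NUM_KEYS):
    for j in range(i):
        master[i][j] = master[j][i]

def issue_device_keys(ksv):
    """Each of a device's 40 secret keys is a row of the master matrix dotted with its KSV."""
    return [sum(master[i][j] * ksv[j] for j in range(NUM_KEYS)) % MOD
            for i in range(NUM_KEYS)]

def shared_secret(own_keys, other_ksv):
    """Add own secret keys selected by the other device's KSV, modulo 2**56."""
    return sum(k for k, bit in zip(own_keys, other_ksv) if bit) % MOD

ksv_a, ksv_b = make_ksv(), make_ksv()
keys_a, keys_b = issue_device_keys(ksv_a), issue_device_keys(ksv_b)
assert shared_secret(keys_a, ksv_b) == shared_secret(keys_b, ksv_a)
```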
Encryption is done by a stream cipher. Each decoded pixel is encrypted by applying an XOR operation with a 24-bit number produced by a generator. The HDCP specifications ensure constant updating of keys after each encoded frame.
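A minimal sketch of the per-pixel XOR step is shown below; the pixel and keystream values are arbitrary, and the actual HDCP cipher that generates the 24-bit keystream words is not modelled.

```python
def xor_pixel_stream(pixels, keystream):
    """Toy illustration: each 24-bit decoded pixel is XORed with a 24-bit keystream word.

    pixels    : iterable of 24-bit ints (decoded RGB pixels)
    keystream : iterable of 24-bit ints produced by the cipher's output generator
    """
    return [(p ^ k) & 0xFFFFFF for p, k in zip(pixels, keystream)]

# XOR is its own inverse: applying the same keystream again recovers the pixels.
pixels = [0x112233, 0xABCDEF]
ks = [0x0F0F0F, 0x123456]
assert xor_pixel_stream(xor_pixel_stream(pixels, ks), ks) == pixels
```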
If a particular set of keys is compromised, their corresponding KSV is added to a revocation list burned onto new discs in the DVD and Blu-ray formats. (The lists are signed with a DSA digital signature, which is meant to keep malicious users from revoking legitimate devices.) During authentication, the transmitting device looks for the receiver's KSV on the list, and if it is there, will not send the decrypted work to the revoked device.
Uses
HDCP devices are generally divided into three categories:
Source The source sends the content to be displayed. Examples include set-top boxes, DVD, HD DVD and Blu-ray Disc players, and computer video cards. A source has only an HDCP/HDMI transmitter.
Sink The sink renders the content for display so it can be viewed. Examples include TVs and digital projectors. A sink has one or more HDCP/HDMI receivers.
Repeater A repeater accepts content, decrypts it, then re-encrypts and retransmits the data. It may perform some signal processing, such as upconverting video into a higher-resolution format, or splitting out the audio portion of the signal. Repeaters have HDMI inputs and outputs. Examples include home theater audio-visual receivers that separate and amplify the audio signal, while re-transmitting the video for display on a TV. A repeater could also simply send the input data stream to multiple outputs for simultaneous display on several screens.
Each device may contain one or more HDCP transmitters and/or receivers. (A single transmitter or receiver chip may combine HDCP and HDMI functionality.)
In the United States, the Federal Communications Commission (FCC) approved HDCP as a "Digital Output Protection Technology" on 4 August 2004. The FCC's Broadcast flag regulations, which were struck down by the United States Court of Appeals for the District of Columbia Circuit, would have required DRM technologies on all digital outputs from HDTV signal demodulators. Congress is still considering legislation that would implement something similar to the Broadcast Flag. The HDCP standard is more restrictive than the FCC's Digital Output Protection Technology requirement. HDCP bans compliant products from converting HDCP-restricted content to full-resolution analog form, presumably in an attempt to reduce the size of the analog hole.
On 19 January 2005, the European Information, Communications, and Consumer Electronics Technology Industry Associations (EICTA) announced that HDCP is a required component of the European "HD ready" label.
Microsoft Windows Vista and Windows 7 both use HDCP in computer graphics cards and monitors.
Circumvention
HDCP strippers decrypt the HDCP stream and transmit an unencrypted HDMI video signal so it will work in a non-HDCP display. It is currently unclear whether such devices would remain working if the HDCP licensing body issued key-revocation lists, which may be installed via new media (e.g. newer Blu-ray Discs) played-back by another device (e.g. a Blu-ray Disc player) connected to it.
Cryptanalysis
In 2001, Scott Crosby of Carnegie Mellon University wrote a paper with Ian Goldberg, Robert Johnson, Dawn Song, and David Wagner called "A Cryptanalysis of the High-bandwidth Digital Content Protection System", and presented it at ACM-CCS8 DRM Workshop on 5 November.
The authors concluded that HDCP's linear key exchange is a fundamental weakness, and discussed ways to:
Eavesdrop on any data.
Clone any device with only its public key.
Avoid any blacklist on devices.
Create new device key vectors.
In aggregate, usurp the authority completely.
They also said the Blom's scheme key swap could be broken by a so-called conspiracy attack: obtaining the keys of at least 40 devices and reconstructing the secret symmetrical master matrix that was used to compute them.
Around the same time, Niels Ferguson independently claimed to have broken the HDCP scheme, but he did not publish his research, citing legal concerns arising from the controversial Digital Millennium Copyright Act.
In November 2011 Professor Tim Güneysu of Ruhr-Universität Bochum revealed he had broken the HDCP 1.3 encryption standard.
Master key release
On 14 September 2010, Engadget reported the release of a possible genuine HDCP master key which can create device keys that can authenticate with other HDCP compliant devices without obtaining valid keys from The Digital Content Protection LLC. This master key would neutralize the key revocation feature of HDCP, because new keys can be created when old ones are revoked. Since the master key is known, it follows that an unlicensed HDCP decoding device could simply use the master key to dynamically generate new keys on the fly, making revocation impossible. It was not immediately clear who discovered the key or how they discovered it, though the discovery was announced via a Twitter update which linked to a Pastebin snippet containing the key and instructions on how to use it. Engadget said the attacker may have used the method proposed by Crosby in 2001 to retrieve the master key, although they cited a different researcher. On 16 September, Intel confirmed that the code had been cracked. Intel has threatened legal action against anyone producing hardware to circumvent the HDCP, possibly under the Digital Millennium Copyright Act.
HDCP v2.2, v2.1 and v2.0 breach
In August 2012 version 2.1 was proved to be broken. The attack used the fact that the pairing process sends the Km key obfuscated with an XOR. That makes the encryptor (receiver) unaware of whether it encrypts or decrypts the key. Further, the input parameters for the XOR and the AES above it are fixed from the receiver side, meaning the transmitter can enforce repeating the same operation. Such a setting allows an attacker to monitor the pairing protocol, repeat it with a small change and extract the Km key. The small change is to pick the "random" key to be the encrypted key from the previous flow. Now, the attacker runs the protocol and in its pairing message it gets E(E(Km)). Since E() is based on XOR it undoes itself, thus exposing the Km of the legitimate device.
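The essence of this replay can be sketched in a few lines of Python. This is an illustrative toy only: the real HDCP 2.x pairing wraps Km in AES and RSA layers that are omitted here, and the key size and message framing are invented for the example. It merely shows why a masking function built on XOR undoes itself, so that feeding an observed ciphertext back in as the "random" key recovers Km.

```python
# Toy model of the HDCP 2.1 pairing weakness described above (assumptions:
# 16-byte keys, masking reduced to a bare XOR with fixed receiver-side material).
import secrets

def obfuscate(key: bytes, data: bytes) -> bytes:
    # XOR masking: applying it twice with the same key returns the original
    # data, i.e. E(E(x)) == x.
    return bytes(a ^ b for a, b in zip(key, data))

pad = secrets.token_bytes(16)   # receiver-side inputs held fixed across runs (assumption)
km = secrets.token_bytes(16)    # the legitimate device's secret Km

e_km = obfuscate(pad, km)       # flow 1: attacker records the masked key from a genuine pairing
leaked = obfuscate(pad, e_km)   # flow 2: attacker replays E(Km) as the "random" key, receiving E(E(Km)) = Km

assert leaked == km
print("Recovered Km:", leaked.hex())
```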
V2.2 was released to fix that weakness by adding randomness provided by the receiver side. However the transmitter in V2.2 must not support receivers of V2.1 or V2.0 in order to avoid this attack. Hence a new erratum was released to redefine the field called "Type" to prevent backward compatibility with versions below 2.2. The "Type" flag should be requested by the content's usage rules (i.e. via the DRM or CAS that opened the content).
In August 2015, version 2.2 was rumored to have been broken. An episode of AMC's series Breaking Bad was leaked to the Internet in UHD format; its metadata indicated it was an HDMI capture, meaning it had been captured through an HDMI interface with the HDCP 2.2 protection removed.
On 4 November 2015, Chinese company LegendSky Tech Co., already known for their other HDCP rippers/splitters under the HDFury brand, released the HDFury Integral, a device that can remove HDCP 2.2 from HDCP-enabled UHD works. On 31 December 2015, Warner Bros and Digital Content Protection, LLC (DCP, the owners of HDCP) filed a lawsuit against LegendSky. Nevertheless, the lawsuit was ultimately dropped after LegendSky argued that the device did not "strip" HDCP content protection but rather downgraded it to an older version, a measure which is explicitly permitted in DCP's licensing manual.
Problems
HDCP can cause problems for users who want to connect multiple screens to a device; for example, a bar with several televisions connected to one satellite receiver or when a user has a closed laptop and uses an external display as the only monitor. HDCP devices can create multiple keys, allowing each screen to operate, but the number varies from device to device; e.g., a Dish or Sky satellite receiver can generate 16 keys. The technology sometimes causes handshaking problems where devices cannot establish a connection, especially with older high-definition displays.
Edward Felten wrote "the main practical effect of HDCP has been to create one more way in which your electronics could fail to work properly with your TV," and concluded in the aftermath of the master key fiasco that HDCP has been "less a security system than a tool for shaping the consumer electronics market."
Additional issues arise when interactive media (i.e. video games) suffer from control latency, because it requires additional processing for encoding/decoding. Various everyday usage situations, such as live streaming or capture of game play, are also adversely affected.
There is also the problem that all Apple laptop products, presumably in order to reduce switching time, automatically enable HDCP encryption on the HDMI / Mini DisplayPort / USB-C connector port whenever they are confronted with an HDCP-compliant sink device. This is a problem if the user wishes to use recording or videoconferencing facilities further down the chain, because those devices most often do not decrypt HDCP-enabled content (since HDCP is meant to prevent direct copying of content, and such devices could conceivably do exactly that). This applies even if the output is not content that requires HDCP, such as a PowerPoint presentation or merely the device's UI. Some sink devices can disable their HDCP reporting entirely, which prevents this issue from blocking content to videoconferencing or recording equipment; however, many source devices will then refuse to play HDCP content while such a sink is connected.
When an HDCP 2.2 source device is connected through compatible distribution equipment to a video wall made of multiple legacy displays, the ability to display an image cannot be guaranteed.
Versions
HDCP v2.x
The 2.x version of HDCP is not a continuation of HDCPv1, and is rather a completely different link protection. Version 2.x employs industry-standard encryption algorithms, such as 128-bit AES with 3072 or 1024-bit RSA public key and 256-bit HMAC-SHA256 hash function. While all of the HDCP v1.x specifications support backward compatibility to previous versions of the specification, HDCPv2 devices may interface with HDCPv1 hardware only by natively supporting HDCPv1, or by using a dedicated converter device. This means that HDCPv2 is only applicable to new technologies. It has been selected for the WirelessHD and Miracast (formerly WiFi Display) standards.
HDCP 2.x features a new authentication protocol, and a locality check to ensure the receiver is relatively close (it must respond to the locality check within 7 ms on a normal DVI/HDMI link). Version 2.1 of the specification was cryptanalyzed and found to have several flaws, including the ability to recover the session key.
There are still a few commonalities between HDCP v2 and v1.
Both are under DCP LLC authority.
They share the same license agreement, compliance rules and robustness rules.
They share the same revocation system and same device ID formats.
See also
HDCP repeater bit
Digital Transmission Content Protection
Digital rights management
Encrypted Media Extensions
Defective by Design
Trusted Computing
References
External links
Audiovisual introductions in 2000
Computer-related introductions in 2000
Broken stream ciphers
Copy protection
High-definition television
Intel products
Digital rights management standards | High-bandwidth Digital Content Protection | Technology | 3,157 |
14,485,522 | https://en.wikipedia.org/wiki/NGC%2087 | NGC 87 is a diffuse, highly disorganized barred irregular galaxy, part of Robert's Quartet, a group of four interacting galaxies.
One supernova has been observed in NGC 87: SN 1994Z (type II, mag. 14.6) was discovered by Alexander Wassilieff on 2 October 1994.
See also
Robert's Quartet
List of NGC objects (1–1000)
References
External links
NGC 87
http://www.astro.pef.zcu.cz/
Barred irregular galaxies
Phoenix (constellation)
Robert's Quartet | NGC 87 | Astronomy | 127 |
69,265,960 | https://en.wikipedia.org/wiki/Ferrari%20Indy%20V8%20engine | Ferrari made a turbocharged, 2.65-liter, V-8, Indy racing engine, dubbed the Tipo 034, designed and purpose-built for competitive use in the CART PPG Indy Car World Series, but, although tested and unveiled to the press in 1986, never raced.
Technical
For an engine that was supposedly only a bargaining tool, the 637 was well-engineered and carefully thought out. The Type 034 engine was a turbocharged, 32-valve, 90-degree, 2.65-liter V8, as per the CART regulations, which used upward mounted exhausts. It had no intercooler, and ran of turbo boost pressure.
Applications
Ferrari 637
References
Engines by model
Ferrari engines
IndyCar Series
Champ Car
V8 engines
Ferrari in motorsport
Ferrari | Ferrari Indy V8 engine | Technology | 160 |
15,952,810 | https://en.wikipedia.org/wiki/Inductive%20output%20tube | The inductive output tube (IOT) or klystrode is a variety of linear-beam vacuum tube, similar to a klystron, used as a power amplifier for high frequency radio waves. It evolved in the 1980s to meet increasing efficiency requirements for high-power RF amplifiers in radio transmitters. The primary commercial use of IOTs is in UHF television transmitters, where they have mostly replaced klystrons because of their higher efficiencies (35% to 40%) and smaller size. IOTs are also used in particle accelerators. They are capable of producing power output up to about 30 kW continuous and 7 MW pulsed and power gains of 20–23 dB at frequencies up to about a gigahertz.
History
The inductive output tube (IOT) was invented in 1938 by Andrew V. Haeff. A patent was later issued for the IOT to Andrew V. Haeff and assigned to the Radio Corporation of America (RCA). During the 1939 New York World's Fair the IOT was used in the transmission of the first television images from the Empire State Building to the fair grounds. RCA sold a small IOT commercially for a short time, under the type number 825. It was soon made obsolete by newer developments, and the technology lay more or less dormant for years.
The inductive output tube has re-emerged within the last twenty years after having been discovered to possess particularly suitable characteristics (broadband linearity) for the transmission of digital television and high-definition digital television.
In research undertaken prior to the transition from analog to digital television broadcasting, it was discovered that electromagnetic interference from lightning, high-voltage AC power transmission, AC rectifiers, and ballasts used in fluorescent lighting greatly affected low-band VHF channels (in North America, channels 2, 3, 4, 5, and 6), making them difficult or impossible to use for digital television. These low-numbered channels were often the first television broadcasters in a given city, and were often large, vital operations which had no choice but to relocate to UHF. In so doing, it made modern digital television predominantly a UHF medium, and IOTs have become the output tube of choice for the power output section of those transmitters.
The power output of the modern 21st century IOTs is orders of magnitude higher than the first IOTs produced by the RCA in 1940–1941 but the fundamental principle of operation basically remains the same. IOTs since the 1970s have been designed with electromagnetic modeling computer software that has greatly improved their electrodynamic performance.
How it works
The IOT is a linear beam vacuum tube. As in the cathode-ray tube found in old televisions, electrons are produced by a heated negative electrode or cathode and accelerated by a high positive voltage in a structure called an electron gun at one end, forming a beam traveling down the tube. At the other end of the tube the beam does not produce a glowing phosphor picture as in a CRT, but passes through a resonant cavity which extracts its energy, then strikes a positive electrode and is absorbed.
IOTs have been described as a cross between a klystron and a tetrode, hence Eimac's trade name for them, Klystrode. They have an electron gun like a klystron, but with a control grid in front of it like a triode, with a very close spacing of around 0.1 mm. The high frequency RF voltage on the grid allows the electrons through in bunches. High voltage DC on a cylindrical anode accelerates the modulated electron beam through a small drift tube like a klystron. This drift tube prevents backflow of electromagnetic radiation. The bunched electron beam passes through the hollow anode into a resonant cavity, similar to the output cavity of a klystron, and strikes a collector electrode. As in a klystron, each bunch passes into the cavity at a time when the electric field decelerates it, transforming the kinetic energy of the beam into potential energy of the RF field, amplifying the signal. The oscillating electromagnetic energy in the cavity is extracted by a coaxial transmission line. An axial magnetic field prevents space charge spreading of the beam. The collector electrode is at a lower potential than the anode (depressed collector) which recovers some of the energy from the beam, increasing efficiency.
Two differences from the klystron give it a lower cost and higher efficiency. First, the klystron uses velocity modulation to create bunching; its beam current is constant. It requires a drift tube several feet in length to allow the electrons to bunch. In contrast the IOT uses current modulation like an ordinary triode; most of the bunching is done by the grid, so the tube can be much shorter, making it less expensive to build and mount, and less bulky. Secondly, since the klystron has beam current throughout the RF cycle, it can only operate as an inefficient class-A amplifier, while the grid of the IOT allows more versatile operating modes. The grid can be biased so the beam current can be cut off during part of the cycle, enabling it to operate in the more efficient class B or AB mode.
The highest frequency achievable in an IOT is limited by the grid-to-cathode spacing. The electrons must be accelerated off the cathode and pass the grid before the RF electric field reverses direction. The upper limit on frequency is approximately . The gain of the IOT is 20–23 dB versus 35–40 dB for a klystron. The lower gain is usually not a problem because at 20 dB the requirements for drive power (1% of output power) are within the capabilities of economical solid state UHF amplifiers.
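As a rough illustration of the drive-power point (not taken from the source; the example figures are assumed), power gain in decibels converts to a linear power ratio as P_out/P_in = 10^(gain/10), so the required drive power follows directly:

```python
# Minimal sketch: relate amplifier power gain in dB to the required drive power.
def drive_power(output_watts: float, gain_db: float) -> float:
    # Drive (input) power needed for a given output power and gain in dB.
    return output_watts / (10 ** (gain_db / 10))

# A 30 kW IOT with 20 dB gain needs about 300 W of drive, i.e. 1% of the output,
# which is within reach of solid-state UHF amplifiers.
print(drive_power(30_000, 20))
# A 35 dB klystron producing the same output would need under 10 W of drive.
print(round(drive_power(30_000, 35), 1))
```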
Recent advances
The latest versions of IOTs achieve even higher efficiencies (60%-70%) through the use of a Multistage Depressed Collector (MSDC). One manufacturer's version is called the Constant Efficiency Amplifier (CEA), while another manufacturer markets their version as the ESCIOT (Energy Saving Collector IOT). The initial design difficulties of MSDCIOTs were overcome through the use of recirculating high dielectric transformer oil as a combined coolant and insulation medium to prevent arcing and erosion between the closely spaced collector stages and to provide reliable low-maintenance collector cooling for the life of the tube. Earlier MSDC versions had to be air cooled (limited power) or used de-ionized water that had to be filtered, regularly exchanged and provided no freezing or corrosion protection.
Disadvantages
Thermal radiation from the cathode heats the grid. As a result, low-work-function cathode material evaporates and condenses on the grid. This eventually leads to a short between cathode and grid, as the material accreting on the grid narrows the gap between it and the cathode. In addition, the emissive cathode material on the grid causes a negative grid current (reverse electron flow from the grid to the cathode). This can swamp the grid power supply if this reverse current gets too high, changing the grid (bias) voltage and, consequently, the operating point of the tube. Today's IOTs are equipped with coated cathodes that work at relatively low operating temperatures, and hence have slower evaporation rates, minimizing this effect.
Like most linear beam tubes having external tuning cavities, IOTs are vulnerable to arcing, and must be protected with arc detectors located in the output cavities that trigger a crowbar circuit based on a hydrogen thyratron or a triggered spark gap in the high-voltage supply. The purpose of the crowbar circuit is to instantly dump the massive electrical charge stored in the high voltage beam supply before this energy can damage the tube assembly during an uncontrolled cavity, collector or cathode arc.
See also
Free-electron laser
References
External links
http://www.bext.com/iot-an-old-dream-now-come-true/
http://www.ebu.ch/departments/technical/trev/trev_273-heppinstall.pdf
http://www.davidsarnoff.org/kil-chapter03.html
http://www.allaboutcircuits.com/vol_3/chpt_13/11.html
http://www.harris.com/view_pressrelease.asp?act=lookup&pr_id=2037
http://epaper.kek.jp/p95/ARTICLES/TAQ/TAQ02.PDF
Microwave technology
Television technology
Vacuum tubes | Inductive output tube | Physics,Technology | 1,827 |
3,434,894 | https://en.wikipedia.org/wiki/Surface-enhanced%20Raman%20spectroscopy | Surface-enhanced Raman spectroscopy or surface-enhanced Raman scattering (SERS) is a surface-sensitive technique that enhances Raman scattering by molecules adsorbed on rough metal surfaces or by nanostructures such as plasmonic-magnetic silica nanotubes. The enhancement factor can be as much as 1010 to 1011, which means the technique may detect single molecules.
History
SERS from pyridine adsorbed on electrochemically roughened silver was first observed by Martin Fleischmann, Patrick J. Hendra and A. James McQuillan at the Department of Chemistry at the University of Southampton, UK in 1973. This initial publication has been cited over 6000 times. The 40th Anniversary of the first observation of the SERS effect has been marked by the Royal Society of Chemistry by the award of a National Chemical Landmark plaque to the University of Southampton. In 1977, two groups independently noted that the concentration of scattering species could not account for the enhanced signal and each proposed a mechanism for the observed enhancement. Their theories are still accepted as explaining the SERS effect. Jeanmaire and Richard Van Duyne proposed an electromagnetic effect, while Albrecht and Creighton proposed a charge-transfer effect. Rufus Ritchie, of Oak Ridge National Laboratory's Health Sciences Research Division, predicted the existence of the surface plasmon.
Mechanisms
The exact mechanism of the enhancement effect of SERS is still a matter of debate in the literature. There are two primary theories and while their mechanisms differ substantially, distinguishing them experimentally has not been straightforward. The electromagnetic theory proposes the excitation of localized surface plasmons, while the chemical theory proposes the formation of charge-transfer complexes. The chemical theory is based on resonance Raman spectroscopy, in which the frequency coincidence (or resonance) of the incident photon energy and electron transition greatly enhances Raman scattering intensity. Research in 2015 on a more powerful extension of the SERS technique called SLIPSERS (Slippery Liquid-Infused Porous SERS) has further supported the EM theory.
Electromagnetic theory
The increase in intensity of the Raman signal for adsorbates on particular surfaces occurs because of an enhancement in the electric field provided by the surface. When the incident light in the experiment strikes the surface, localized surface plasmons are excited. The field enhancement is greatest when the plasmon frequency, ωp, is in resonance with the radiation (ω = ωp/√3 for spherical particles). In order for scattering to occur, the plasmon oscillations must be perpendicular to the surface; if they are in-plane with the surface, no scattering will occur. It is because of this requirement that roughened surfaces or arrangements of nanoparticles are typically employed in SERS experiments, as these surfaces provide an area on which these localized collective oscillations can occur. SERS enhancement can occur even when an excited molecule is relatively far from the surface that hosts the metallic nanoparticles enabling surface plasmon phenomena.
The light incident on the surface can excite a variety of phenomena in the surface, yet the complexity of this situation can be minimized by surfaces with features much smaller than the wavelength of the light, as only the dipolar contribution will be recognized by the system. The dipolar term contributes to the plasmon oscillations, which leads to the enhancement. The SERS effect is so pronounced because the field enhancement occurs twice. First, the field enhancement magnifies the intensity of incident light, which will excite the Raman modes of the molecule being studied, therefore increasing the signal of the Raman scattering. The Raman signal is then further magnified by the surface due to the same mechanism that excited the incident light, resulting in a greater increase in the total output. At each stage the electric field is enhanced as E², for a total enhancement of E⁴.
The enhancement is not equal for all frequencies. For those frequencies for which the Raman signal is only slightly shifted from the incident light, both the incident laser light and the Raman signal can be near resonance with the plasmon frequency, leading to the E⁴ enhancement. When the frequency shift is large, the incident light and the Raman signal cannot both be on resonance with ωp, thus the enhancement at both stages cannot be maximal.
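A back-of-the-envelope sketch (illustrative only, with assumed field-enhancement values) shows how this fourth-power scaling produces the very large factors quoted above, under the simplifying assumption that the same local-field enhancement g = |E_local|/|E_0| applies at both the incident and the Raman-shifted frequency:

```python
# Minimal sketch of electromagnetic SERS enhancement under the assumption that
# the local field enhancement g is the same at the excitation and Raman frequencies.
def sers_enhancement(g: float) -> float:
    # ~g^2 at excitation times ~g^2 at re-radiation gives an overall factor of g^4.
    return g ** 4

# A local field enhancement of ~30x at a nanoparticle "hot spot" already gives ~8e5.
print(f"{sers_enhancement(30):.1e}")
# g ~ 300 pushes the factor towards the 1e10 regime mentioned above.
print(f"{sers_enhancement(300):.1e}")
```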
The choice of surface metal is also dictated by the plasmon resonance frequency. Visible and near-infrared radiation (NIR) are used to excite Raman modes. Silver and gold are typical metals for SERS experiments because their plasmon resonance frequencies fall within these wavelength ranges, providing maximal enhancement for visible and NIR light. Copper's absorption spectrum also falls within the range acceptable for SERS experiments. Platinum and palladium nanostructures also display plasmon resonance within visible and NIR frequencies.
Chemical theory
Resonance Raman spectroscopy explains the huge enhancement of Raman scattering intensity. Intermolecular and intramolecular charge transfers significantly enhance Raman spectrum peaks. In particular, the enhancement is huge for species adsorbing the metal surface due to the high-intensity charge transfers from the metal surface with wide band to the adsorbing species. This resonance Raman enhancement is dominant in SERS for species on small nanoclusters with considerable band gaps, because surface plasmon appears only in metal surface with near-zero band gaps. This chemical mechanism probably occurs in concert with the electromagnetic mechanism for metal surface.
Surfaces
While SERS can be performed in colloidal solutions, today the most common method for performing SERS measurements is by depositing a liquid sample onto a silicon or glass surface with a nanostructured noble metal surface. While the first experiments were performed on electrochemically roughened silver, now surfaces are often prepared using a distribution of metal nanoparticles on the surface as well as using lithography or porous silicon as a support. Two dimensional silicon nanopillars decorated with silver have also been used to create SERS active substrates. The most common metals used for plasmonic surfaces in visible light SERS are silver and gold; however, aluminium has recently been explored as an alternative plasmonic material, because its plasmon band is in the UV region, contrary to silver and gold. Hence, there is great interest in using aluminium for UV SERS. It has, however, surprisingly also been shown to have a large enhancement in the infrared, which is not fully understood. In the current decade, it has been recognized that the cost of SERS substrates must be reduced in order to become a commonly used analytical chemistry measurement technique. To meet this need, plasmonic paper has experienced widespread attention in the field, with highly sensitive SERS substrates being formed through approaches such as soaking, in-situ synthesis, screen printing and inkjet printing.
The shape and size of the metal nanoparticles strongly affect the strength of the enhancement because these factors influence the ratio of absorption and scattering events. There is an ideal size for these particles, and an ideal surface thickness for each experiment. If concentration and particle size can be tuned better for each experiment this will go a long way in the cost reduction of substrates. Particles that are too large allow the excitation of multipoles, which are nonradiative. As only the dipole transition leads to Raman scattering, the higher-order transitions will cause a decrease in the overall efficiency of the enhancement. Particles that are too small lose their electrical conductance and cannot enhance the field. When the particle size approaches a few atoms, the definition of a plasmon does not hold, as there must be a large collection of electrons to oscillate together.
An ideal SERS substrate must possess high uniformity and high field enhancement. Such substrates can be fabricated on a wafer scale and label-free superresolution microscopy has also been demonstrated using the fluctuations of surface enhanced Raman scattering signal on such highly uniform, high-performance plasmonic metasurfaces.
Due to their unique physical and chemical properties, two-dimensional (2D) materials have gained significant attention as alternative substrates for surface-enhanced Raman spectroscopy (SERS). The use of 2D materials as SERS substrates offers several advantages over traditional metal substrates, including high sensitivity, reproducibility, and chemical stability.
Graphene is one of the most widely studied 2D materials for SERS applications. Graphene has a high surface area, high electron mobility, and excellent chemical stability, making it an attractive substrate for SERS. Graphene-based SERS sensors have also been shown to be highly reproducible and stable, making them attractive for real-world applications. In addition to graphene, other 2D materials, especially MXenes, have also been investigated for SERS applications. MXenes have a high surface area, good electrical conductivity, and chemical stability, making them attractive for SERS applications. As a result, MXene-based SERS sensors have been used to detect various analytes, including organic molecules, drugs and their metabolites.
As research and development continue, 2D materials-based SERS sensors will likely be more widely used in various industries, including environmental monitoring, healthcare, and food safety.
Applications
SERS substrates are used to detect the presence of low-abundance biomolecules, and can therefore detect proteins in bodily fluids. Early detection of pancreatic cancer biomarkers was accomplished using SERS-based immunoassay approach. A SERS-base multiplex protein biomarker detection platform in a microfluidic chip is used to detect several protein biomarkers to predict the type of disease and critical biomarkers and increase the chance of differentiating diseases with similar biomarkers like pancreatic cancer, ovarian cancer, and pancreatitis.
This technology has been utilized to detect urea and blood plasma label free in human serum and may become the next generation in cancer detection and screening.
The ability to analyze the composition of a mixture at a nanoscale makes the use of SERS substrates that are beneficial for environmental analysis, pharmaceuticals, material sciences, art and archaeological research, forensic science, drug and explosives detection, food quality analysis, and single algal cell detection.
SERS combined with plasmonic sensing can be used for high-sensitivity quantitative analysis of small molecules in human biofluids, the quantitative detection of biomolecular interaction, the detection of low-level cancer biomarkers via sandwich immunoassay platforms, the label-free characterization of exosomes, and the study of redox processes at a single-molecule level.
SERS is a powerful technique for determining structural information about molecular systems. It has found a wide range of applications in ultra-sensitive chemical sensing and environmental analyses.
A review of the present and future applications of SERS was published in 2020.
Selection rules
The term surface enhanced Raman spectroscopy implies that it provides the same information that traditional Raman spectroscopy does, simply with a greatly enhanced signal. While the spectra of most SERS experiments are similar to the non-surface enhanced spectra, there are often differences in the number of modes present. Additional modes not found in the traditional Raman spectrum can be present in the SERS spectrum, while other modes can disappear. The modes observed in any spectroscopic experiment are dictated by the symmetry of the molecules and are usually summarized by Selection rules. When molecules are adsorbed to a surface, the symmetry of the system can change, slightly modifying the symmetry of the molecule, which can lead to differences in mode selection.
One common way in which selection rules are modified arises from the fact that many molecules that have a center of symmetry lose that feature when adsorbed to a surface. The loss of a center of symmetry eliminates the requirements of the mutual exclusion rule, which dictates that modes can only be either Raman or infrared active. Thus modes that would normally appear only in the infrared spectrum of the free molecule can appear in the SERS spectrum.
A molecule's symmetry can be changed in different ways depending on the orientation in which the molecule is attached to the surface. In some experiments, it is possible to determine the orientation of adsorption to the surface from the SERS spectrum, as different modes will be present depending on how the symmetry is modified.
Remote SERS
Remote surface-enhanced Raman spectroscopy (SERS) consists of using metallic nanowaveguides supporting propagating surface plasmon polaritons (SPPs) to perform SERS at a location distant from that of the incident laser.
Propagating SPPs supported by nanowires have been used to demonstrate both remote excitation and remote detection of SERS. A silver nanowire has also been used to show remote excitation and detection using graphene as the Raman scatterer.
Applications
Different plasmonic systems have already been used to show Raman detection of biomolecules in vivo in cells and remote excitation of surface catalytic reactions.
Immunoassays
SERS-based immunoassays can be used for detection of low-abundance biomarkers. For example, antibodies and gold particles can be used to quantify proteins in serum with high sensitivity and specificity.
Oligonucleotide targeting
SERS can be used to target specific DNA and RNA sequences using a combination of gold and silver nanoparticles and Raman-active dyes, such as Cy3. Specific single nucleotide polymorphisms (SNP) can be identified using this technique. The gold nanoparticles facilitate the formation of a silver coating on the dye-labelled regions of DNA or RNA, allowing SERS to be performed. This has several potential applications: For example, Cao et al. report that gene sequences for HIV, Ebola, Hepatitis, and Bacillus Anthracis can be uniquely identified using this technique. Each spectrum was specific, which is advantageous over fluorescence detection; some fluorescent markers overlap and interfere with other gene markers. The advantage of this technique to identify gene sequences is that several Raman dyes are commercially available, which could lead to the development of non-overlapping probes for gene detection.
See also
Tip-enhanced Raman spectroscopy
References
Surface science
Raman scattering
Raman spectroscopy
Plasmonics | Surface-enhanced Raman spectroscopy | Physics,Chemistry,Materials_science | 2,921 |
77,548,966 | https://en.wikipedia.org/wiki/Aeroflot%20Flight%20F-77 | Aeroflot Flight F-77 was an An-24B operating from Moscow to Bugulma with an intermediate stop in Cheboksary that crashed near Bugulma on 2 March 1986, resulting in the deaths of all 38 occupants on board.
Aircraft
The An-24B with tail number 46423 (serial number 87304108) was manufactured by the Antonov factory on February 20, 1968. At the time of the accident, the airliner had accumulated a total of 31,570 flight hours and 23,765 landings.
Preceding circumstances
The aircraft was operating flight F-77 from Moscow to Bugulma with an intermediate stop in Cheboksary. It was piloted by a crew from the 61st Flight Detachment, consisting of Captain V. A. Pastukhov, co-pilot A. S. Cheprasov, and flight engineer A. B. Shtein. Flight attendant N. A. Baskakova was working in the cabin. At 02:02 Moscow time, the An-24 took off from Cheboksary Airport and, after climbing, leveled off at a cruising altitude of 4,500 meters. There were 34 passengers on board: 32 adults and 2 children.
According to the weather forecast available to the crew, Bugulma was expected to have overcast conditions with a cloud base at 120 meters and an upper boundary at 3,000 meters, fresh southeast winds (160° 5 m/s), heavy snowfall, mist, and visibility of 1,500 meters. Occasionally, fog was expected, reducing horizontal visibility to 800 meters and vertical visibility to 80 meters. The actual weather in Bugulma almost matched the forecast, with visibility even reaching 4,000 meters — more than twice the expected. This weather was within the meteorological minimum for the captain.
As the aircraft approached Bugulma, at 02:54 Moscow time (52 minutes into the flight), the crew, after receiving clearance from the dispatcher, disconnected the autopilot and began descending to the circuit altitude of 400 meters, which they reached 20 kilometers from Bugulma airport. Following the dispatcher's instructions, the approach was made with a right turn according to ILS with a landing course of 192°. At 16 kilometers from the runway threshold, the crew made the fourth turn and aligned with the final approach. Without deviation from the operating manual, the landing gear and flaps were deployed to 15°. The flight speed was 230 km/h, and the engine mode was initially set to 28-30° on the thrust lever position indicator. At 03:04 Moscow time (63 minutes into the flight), the crew extended the flaps to the landing position (38°) as per the manual. Due to the increased aerodynamic drag, the engine mode was increased to 40° on the thrust lever position indicator.
Accident
However, a second after increasing the mode, at a speed of 225 km/h, the left engine's automatic feathering system spontaneously activated, feathering the left propeller. This caused asymmetrical thrust, resulting in a right yawing moment, and the aircraft began to bank to the left, reaching a 20° bank angle within 5 seconds, and deviated to the left. The crew noticed the failure of the left power unit almost immediately and attempted to counter the left bank by deflecting the ailerons to 19° for a right bank and pressing the right rudder pedal forcefully to turn the rudder right. However, by pressing the right pedal, the pilots only neutralized the rudder, as the aircraft began slipping to the left. The forces applied to the pedal (15 kg) merely held the rudder in a neutral position, failing to counteract the yawing moment. However, through aileron deflection, the crew managed to reduce the left bank to 9°.
Due to the high sideslip angle, speed began to decrease, prompting the pilots to push the control yokes forward, attempting to increase speed by pointing the nose down. However, this measure was ineffective, so the crew moved the remaining operational right engine to takeoff mode, forgetting that, according to the manual, they should first level the aircraft out of the left bank and into a right one. As a result, the left bank increased, exceeding 50°, and the sideslip and pitch angles also increased. Aerodynamic drag increased by 1.5 times, causing speed to drop. The crew attempted to correct the bank with full aileron and rudder deflection, but these measures were too late. By this time, the airliner was flying at a speed of 155 km/h with a sideslip angle of 18-21° and had deviated 50° from the landing course (to 142°).
At a speed of 140 km/h, the An-24 stalled, and its bank angle rapidly reached 110°. Twenty-five seconds after the left engine shutdown, the aircraft, with a 40° nose-down angle and a 3° left bank, flying at a heading of 15°, hit the ground at a forward speed of 320 km/h and a vertical speed of 40 m/s, 8 kilometers from the runway threshold on an azimuth of 15° (500 meters from the runway centerline). The airliner was completely destroyed on impact, and the debris scattered over an area of 136 by 40 meters, but no fire ensued. All 38 people on board perished.
Causes
According to data from the flight recorder, when the crew increased the engine mode after extending the flaps at 03:04, the left engine's feathering pump activated, leading to the feathering of the left power unit. Thus, the engine shutdown and propeller feathering occurred not due to engine failure but because of an electrical signal, with no reverse thrust applied during the flight.
The commission determined that this electrical signal was caused by a malfunction in the left engine's automatic feathering sensor DAF-24, as the micro switch KV-9-1's contacts closed due to wear on its stop and contact spring. The KV-9-1 micro switch in actual operational conditions within DAF-24 was not reliable against vibration loads, and from 1981 to 1985, there had been 22 cases of such failures. On the crashed An-24 CCCP-46423, there were also two previous cases of automatic feathering of the propeller on the left engine: on January 28, 1985, in level flight at an altitude of 6,000 meters and on February 21, 1986 (nine days before the crash) on the ground during takeoff preparation. The cause in the latter case was not identified and rectified. During periodic inspections of the DAF-24, conducted every 300±30 hours, detecting all instances of KV-9-1 micro switch wear was impossible, and the failures were not eliminated even after the industry implemented special measures.
Regarding the crew's actions, simulation results indicated that if the crew had intervened in the yaw control within the first eight seconds of the emergency situation (engine shutdown) and countered the yawing moment by deflecting the rudder to 10°, while half-deflecting the ailerons, the aircraft would have banked right and maintained straight flight on the set descent trajectory. The recommended actions in the manual for the crew during engine failure on final approach were correct.
Based on the investigation results, the following conclusions were made:
The spontaneous shutdown of the left engine and feathering of the propeller blades occurred due to the failure of the DAF-24 automatic feathering sensor because of wear on the KV-9-1 micro switch components. The defect was structural.
The aircraft's transition to high sideslip angles and subsequent stall were caused by the following erroneous crew actions:
Not deflecting the rudder to counter yaw after engine failure and insufficient rudder deflection after increasing the right engine to takeoff mode without first creating a bank towards the operational engine;
Uncoordinated countering of the yawing moment after engine failure (using only ailerons);
Insufficient forward control yoke deflection to counteract the pitch-up moment from sideslip, resulting in a loss of speed.
The crew had the opportunity to timely deflect the rudder (both in terms of effort and time) to counter the yaw after engine failure and to recover the aircraft from the bank and sideslip, restoring the original speed and flight direction.
The aircraft's stability and controllability characteristics after engine failure allowed for recovery from the bank and sideslip, and for restoring the original flight speed.
Conclusion (translated): "At night, in clouds, on the final approach with fully extended flaps and landing gear, spontaneous feathering of the propeller and shutdown of the left power unit occurred. In this situation, the crew made piloting errors, leading to a loss of speed and a stall, followed by the aircraft's collision with the ground."
Notes
F-77
March 1986
Aviation accidents and incidents caused by engine failure
Accidents and incidents involving the Antonov An-24 | Aeroflot Flight F-77 | Technology | 1,848 |
532,123 | https://en.wikipedia.org/wiki/Anti-twister%20mechanism | The anti-twister or antitwister mechanism is a method of connecting a flexible link between two objects, one of which is rotating with respect to the other, in a way that prevents the link from becoming twisted. The link could be an electrical cable or a flexible conduit.
This mechanism is intended as an alternative to the usual method of supplying electric power to a rotating device, the use of slip rings. The slip rings are attached to one part of the machine, and a set of fine metal brushes are attached to the other part. The brushes are kept in sliding contact with the slip rings, providing an electrical path between the two parts while allowing the parts to rotate about each other.
However, this presents problems with smaller devices. Whereas with large devices minor fluctuations in the power provided through the brush mechanism are inconsequential, in the case of tiny electronic components, the brushing introduces unacceptable levels of noise in the stream of power supplied. Therefore, a smoother means of power delivery is needed.
A device designed and patented in 1971 by Dale A. Adams, and reported in The Amateur Scientist in December 1975, solves this problem with a rotating disk above a base from which a cable extends up, over, and onto the top of the disk. As the disk rotates, the plane of this cable is rotated at exactly half the rate of the disk, so the cable experiences no net twisting.
What makes the device possible is the peculiar connectivity of the space of 3D rotations, as discovered by P. A. M. Dirac and illustrated in his plate trick (also known as the string trick or belt trick). The double cover of the rotation group, Spin(3), can be represented by unit quaternions, also known as versors.
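The sign change behind the belt trick can be seen directly with unit quaternions. The following short Python sketch (using NumPy, with an arbitrarily chosen rotation axis for the example) shows that one full turn maps to −1 in Spin(3) while two full turns return to the identity:

```python
# Minimal sketch of the double cover behind the anti-twister mechanism:
# a 360° rotation corresponds to -1 in the unit quaternions, 720° to +1.
import numpy as np

def rotation_quaternion(axis, angle):
    # Unit quaternion (w, x, y, z) representing a rotation by `angle` about `axis`.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

one_turn = rotation_quaternion([0.0, 0.0, 1.0], 2 * np.pi)   # 360 degrees
two_turns = rotation_quaternion([0.0, 0.0, 1.0], 4 * np.pi)  # 720 degrees

# One full turn gives approximately (-1, 0, 0, 0): the same rotation as the
# identity, but the opposite point of the double cover.
print(np.round(one_turn, 6))
# Two full turns return to approximately (+1, 0, 0, 0), the identity quaternion.
print(np.round(two_turns, 6))
```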
See also
Quaternions and spatial rotation
Candle dance
References
External links
An anti-twist mechanism made with LEGO
Electrical generators
Mechanisms (engineering)
Spinors | Anti-twister mechanism | Physics,Technology,Engineering | 385 |
16,697,376 | https://en.wikipedia.org/wiki/Moco%20RNA%20motif | The Moco RNA motif is a conserved RNA structure that is presumed to be a riboswitch that binds molybdenum cofactor or the related tungsten cofactor. Genetic experiments support the hypothesis that the Moco RNA motif corresponds to a genetic control element that responds to changing concentrations of molybdenum or tungsten cofactor. As these cofactors are not available in purified form, in vitro binding assays cannot be performed. However, the genetic data, complex structure of the RNA and the failure to detect a protein involved in the regulation suggest that the Moco RNA motif corresponds to a class of riboswitches.
References
External links
Cis-regulatory RNA elements
Riboswitch | Moco RNA motif | Chemistry | 150 |
44,585 | https://en.wikipedia.org/wiki/Cyclotron | A cyclotron is a type of particle accelerator invented by Ernest Lawrence in 1929–1930 at the University of California, Berkeley, and patented in 1932. A cyclotron accelerates charged particles outwards from the center of a flat cylindrical vacuum chamber along a spiral path. The particles are held to a spiral trajectory by a static magnetic field and accelerated by a rapidly varying electric field. Lawrence was awarded the 1939 Nobel Prize in Physics for this invention.
The cyclotron was the first "cyclical" accelerator. The primary accelerators before the development of the cyclotron were electrostatic accelerators, such as the Cockcroft–Walton generator and the Van de Graaff generator. In these accelerators, particles would cross an accelerating electric field only once. Thus, the energy gained by the particles was limited by the maximum electrical potential that could be achieved across the accelerating region. This potential was in turn limited by electrostatic breakdown to a few million volts. In a cyclotron, by contrast, the particles encounter the accelerating region many times by following a spiral path, so the output energy can be many times the energy gained in a single accelerating step.
Cyclotrons were the most powerful particle accelerator technology until the 1950s, when they were surpassed by the synchrotron. Nonetheless, they are still widely used to produce particle beams for nuclear medicine and basic research. As of 2020, close to 1,500 cyclotrons were in use worldwide for the production of radionuclides for nuclear medicine. In addition, cyclotrons can be used for particle therapy, where particle beams are directly applied to patients.
History
Origins
In 1927, while a student at Kiel, German physicist Max Steenbeck was the first to formulate the concept of the cyclotron, but he was discouraged from pursuing the idea further. In late 1928 and early 1929, Hungarian physicist Leo Szilárd filed patent applications in Germany for the linear accelerator, cyclotron, and betatron. In these applications, Szilárd became the first person to discuss the resonance condition (what is now called the cyclotron frequency) for a circular accelerating apparatus. However, neither Steenbeck's ideas nor Szilard's patent applications were ever published and therefore did not contribute to the development of the cyclotron. Several months later, in the early summer of 1929, Ernest Lawrence independently conceived the cyclotron concept after reading a paper by Rolf Widerøe describing a drift tube accelerator. He published a paper in Science in 1930 (the first published description of the cyclotron concept), after a student of his built a crude model in April of that year. He patented the device in 1932.
To construct the first such device, Lawrence used large electromagnets recycled from obsolete arc converters provided by the Federal Telegraph Company. He was assisted by a graduate student, M. Stanley Livingston. Their first working cyclotron became operational on January 2, 1931. This machine had a diameter of , and accelerated protons to an energy up to 80 keV.
At the Radiation Laboratory on the campus of the University of California, Berkeley (now the Lawrence Berkeley National Laboratory), Lawrence and his collaborators went on to construct a series of cyclotrons which were the most powerful accelerators in the world at the time; a 4.8 MeV machine (1932), an 8 MeV machine (1937), and a 16 MeV machine (1939). Lawrence received the 1939 Nobel Prize in Physics for the invention and development of the cyclotron and for results obtained with it.
The first European cyclotron was constructed in 1934 in the Soviet Union by Mikhail Alekseevich Eremeev, at the Leningrad Physico-Technical Institute. It was a small design based on a prototype by Lawrence, with a 28 cm diameter capable of achieving 530 keV proton energies. Research quickly refocused around the construction of a larger MeV-level cyclotron, in the physics department of the V.G. Khlopin Radium Institute in Leningrad, headed by . This instrument was first proposed in 1932 by George Gamow and and was installed and became operative in March 1937 at 100 cm (39 in) diameter and 3.2 MeV proton energies.
The first Asian cyclotron was constructed at the Riken laboratory in Tokyo, by a team including Yoshio Nishina, Sukeo Watanabe, Tameichi Yasaki, and Ryokichi Sagane. Yasaki and Sagane had been sent to Berkeley Radiation Laboratory to work with Lawrence. The device had a 26 in diameter and the first beam was produced on April 2, 1937, at 2.9 MeV deuteron energies.
During World War II
Cyclotrons played a key role in the Manhattan Project. The published 1940 discovery of neptunium and the withheld 1941 discovery of plutonium both used bombardment in the Berkeley Radiation Laboratory's 60 in cyclotron. Furthermore, Lawrence invented the calutron (California University cyclotron), which was industrially developed at the Y-12 National Security Complex from 1942. This provided the bulk of the uranium enrichment process, taking low-enriched uranium (<5% uranium-235) from the S-50 and K-25 plants and electromagnetically enriching it to 84.5% uranium-235 (highly enriched uranium). This was the first production of HEU in history; it was shipped to Los Alamos and used in the Little Boy bomb dropped on Hiroshima, as well as in the Water Boiler and Dragon test reactors that preceded it.
In France, Frédéric Joliot-Curie constructed a large 7 MeV cyclotron at the Collège de France in Paris, achieving the first beam in March 1939. With the Nazi occupation of Paris in June 1940 and an incoming contingent of German scientists, Joliot ceased research into uranium fission, and obtained an understanding with his German former colleague Wolfgang Gentner that no research of military use would be carried out. In 1943 Gentner was recalled for weakness, and a new German contingent attempted to operate the cyclotron. However, it is likely that Joliot, a member of French Communist Party and in fact president of the National Front resistance movement, sabotaged the cyclotron to prevent its use to the Nazi German nuclear program.
One cyclotron was built within Nazi Germany, in Heidelberg, under the supervision of Walther Bothe and Wolfgang Gentner, with support from the Heereswaffenamt. At the end of 1938, Gentner was sent to Berkeley Radiation Laboratory and worked most closely with Emilio Segrè and Donald Cooksey, returning before the start of the war. Construction was slowed by the war and completed in January 1944, but difficulties in testing made it unusable until the war's end.
Post-war
By the late 1930s it had become clear that there was a practical limit on the beam energy that could be achieved with the traditional cyclotron design, due to the effects of special relativity. As particles reach relativistic speeds, their effective mass increases, which causes the resonant frequency for a given magnetic field to change. To address this issue and reach higher beam energies using cyclotrons, two primary approaches were taken, synchrocyclotrons (which hold the magnetic field constant, but decrease the accelerating frequency) and isochronous cyclotrons (which hold the accelerating frequency constant, but alter the magnetic field).
Lawrence's team built one of the first synchrocyclotrons in 1946. This machine eventually achieved a maximum beam energy of 350 MeV for protons. However, synchrocyclotrons suffer from low beam intensities (< 1 μA), and must be operated in a "pulsed" mode, further decreasing the available total beam. As such, they were quickly overtaken in popularity by isochronous cyclotrons.
The first isochronous cyclotron (other than classified prototypes) was built by F. Heyn and K.T. Khoe in Delft, the Netherlands, in 1956. Early isochronous cyclotrons were limited to energies of ~50 MeV per nucleon, but as manufacturing and design techniques gradually improved, the construction of "spiral-sector" cyclotrons allowed the acceleration and control of more powerful beams. Later developments included the use of more compact and power-efficient superconducting magnets and the separation of the magnets into discrete sectors, as opposed to a single large magnet.
Principle of operation
Cyclotron principle
In a particle accelerator, charged particles are accelerated by applying an electric field across a gap. The force on a particle crossing this gap is given by the Lorentz force law:
$$\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)$$
where q is the charge on the particle, E is the electric field, v is the particle velocity, and B is the magnetic flux density. It is not possible to accelerate particles using only a static magnetic field, as the magnetic force always acts perpendicularly to the direction of motion, and therefore can only change the direction of the particle, not the speed.
In practice, the magnitude of an unchanging electric field which can be applied across a gap is limited by the need to avoid electrostatic breakdown. As such, modern particle accelerators use alternating (radio frequency) electric fields for acceleration. Since an alternating field across a gap only provides an acceleration in the forward direction for a portion of its cycle, particles in RF accelerators travel in bunches, rather than a continuous stream. In a linear particle accelerator, in order for a bunch to "see" a forward voltage every time it crosses a gap, the gaps must be placed further and further apart, in order to compensate for the increasing speed of the particle.
A cyclotron, by contrast, uses a magnetic field to bend the particle trajectories into a spiral, thus allowing the same gap to be used many times to accelerate a single bunch. As the bunch spirals outward, the increasing distance between transits of the gap is exactly balanced by the increase in speed, so a bunch will reach the gap at the same point in the RF cycle every time.
The frequency at which a particle will orbit in a perpendicular magnetic field is known as the cyclotron frequency, and depends, in the non-relativistic case, solely on the charge and mass of the particle, and the strength of the magnetic field:
$$f = \frac{qB}{2\pi m}$$
where f is the (linear) frequency, q is the charge of the particle, B is the magnitude of the magnetic field that is perpendicular to the plane in which the particle is travelling, and m is the particle mass. The property that the frequency is independent of particle velocity is what allows a single, fixed gap to be used to accelerate a particle travelling in a spiral.
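For a concrete sense of scale, the cyclotron frequency can be evaluated numerically; the short Python sketch below (with an assumed example field strength) does this for a proton:

```python
# Minimal sketch: non-relativistic cyclotron frequency f = qB / (2*pi*m).
import math

ELEMENTARY_CHARGE = 1.602176634e-19   # C
PROTON_MASS = 1.67262192369e-27       # kg

def cyclotron_frequency(charge, mass, b_field):
    # Orbital (linear) frequency in Hz for a charge in a perpendicular magnetic field.
    return charge * b_field / (2 * math.pi * mass)

# A proton in a 1.5 T field orbits at roughly 23 MHz, independent of its speed
# in the non-relativistic limit, which is why a fixed RF frequency works.
print(f"{cyclotron_frequency(ELEMENTARY_CHARGE, PROTON_MASS, 1.5) / 1e6:.1f} MHz")
```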
Particle energy
Each time a particle crosses the accelerating gap in a cyclotron, it is given an accelerating force by the electric field across the gap, and the total particle energy gain can be calculated by multiplying the increase per crossing by the number of times the particle crosses the gap.
However, given the typically high number of revolutions, it is usually simpler to estimate the energy by combining the equation for frequency in circular motion:
$$f = \frac{v}{2\pi r}$$
with the cyclotron frequency equation to yield:
$$v = \frac{qBr}{m}$$
The kinetic energy for particles with speed v is therefore given by:
$$E = \frac{1}{2}mv^2 = \frac{q^2 B^2 r^2}{2m}$$
where r is the radius at which the energy is to be determined. The limit on the beam energy which can be produced by a given cyclotron thus depends on the maximum radius which can be reached by the magnetic field and the accelerating structures, and on the maximum strength of the magnetic field which can be achieved.
K-factor
In the nonrelativistic approximation, the maximum kinetic energy per atomic mass for a given cyclotron is given by:
$$\frac{T}{A} = \frac{\left(e B r_{\max}\right)^2}{2 m_{\mathrm{u}}} \left(\frac{Q}{A}\right)^2 = K \left(\frac{Q}{A}\right)^2$$
where e is the elementary charge, B is the strength of the magnet, r_max is the maximum radius of the beam, m_u is an atomic mass unit, Q is the charge of the beam particles, and A is the atomic mass of the beam particles. The value of K is known as the "K-factor", and is used to characterize the maximum kinetic beam energy of protons (quoted in MeV). It represents the theoretical maximum energy of protons (with Q and A equal to 1) accelerated in a given machine.
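For illustration (with assumed example values for the field strength and extraction radius, not figures from any particular machine), the K-factor and the corresponding maximum energy per nucleon can be evaluated directly:

```python
# Minimal sketch: K = (e*B*r_max)^2 / (2*m_u) expressed in MeV, and the
# resulting maximum energy per nucleon K*(Q/A)^2 for a chosen ion.
ELEMENTARY_CHARGE = 1.602176634e-19   # C
ATOMIC_MASS_UNIT = 1.66053906660e-27  # kg
MEV = 1.602176634e-13                 # J per MeV

def k_factor_mev(b_field_tesla, r_max_m):
    return (ELEMENTARY_CHARGE * b_field_tesla * r_max_m) ** 2 / (2 * ATOMIC_MASS_UNIT) / MEV

def max_energy_per_nucleon_mev(b_field_tesla, r_max_m, charge_q, mass_a):
    return k_factor_mev(b_field_tesla, r_max_m) * (charge_q / mass_a) ** 2

# Assumed example: a 1.5 T field and 1.0 m extraction radius give K of about
# 108-109 (i.e. roughly 108 MeV protons) and ~27 MeV/nucleon for fully stripped 12C6+.
print(round(k_factor_mev(1.5, 1.0), 1))
print(round(max_energy_per_nucleon_mev(1.5, 1.0, 6, 12), 1))
```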
Particle trajectory
While the trajectory followed by a particle in the cyclotron is conventionally referred to as a "spiral", it is more accurately described as a series of arcs of constant radius. The particle speed, and therefore orbital radius, only increases at the accelerating gaps. Away from those regions, the particle will orbit (to a first approximation) at a fixed radius.
Assuming a uniform energy gain per orbit (which is only valid in the non-relativistic case), the average orbit may be approximated by a simple spiral. If the energy gain per turn is given by ΔE, the particle energy after n turns will be:
$$E(n) = n\,\Delta E$$
Combining this with the non-relativistic equation for the kinetic energy of a particle in a cyclotron gives:
$$r(n) = \frac{\sqrt{2 m n\,\Delta E}}{qB}$$
This is the equation of a Fermat spiral.
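A small numerical sketch (with an assumed per-turn energy gain and field strength) makes the square-root growth of the radius visible:

```python
# Minimal sketch: radius after n turns for a fixed energy gain per turn,
# r(n) = sqrt(2*m*n*dE) / (q*B), illustrating the Fermat-spiral growth r ∝ sqrt(n).
import math

ELEMENTARY_CHARGE = 1.602176634e-19   # C
PROTON_MASS = 1.67262192369e-27       # kg

def orbit_radius(n_turns, delta_e_joules, mass, charge, b_field):
    return math.sqrt(2 * mass * n_turns * delta_e_joules) / (charge * b_field)

# Assumed example: a proton gaining 100 keV per turn in a 1.5 T field.
# Radius grows from ~3 cm after 1 turn to ~30 cm after 100 and ~61 cm after 400.
dE = 100e3 * ELEMENTARY_CHARGE
for n in (1, 100, 400):
    print(n, round(orbit_radius(n, dE, PROTON_MASS, ELEMENTARY_CHARGE, 1.5), 3), "m")
```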
Stability and focusing
As a particle bunch travels around a cyclotron, two effects tend to make its particles spread out. The first is simply the particles injected from the ion source having some initial spread of positions and velocities. This spread tends to get amplified over time, making the particles move away from the bunch center. The second is the mutual repulsion of the beam particles due to their electrostatic charges. Keeping the particles focused for acceleration requires confining the particles to the plane of acceleration (in-plane or "vertical" focusing), preventing them from moving inward or outward from their correct orbit ("horizontal" focusing), and keeping them synchronized with the accelerating RF field cycle (longitudinal focusing).
Transverse stability and focusing
The in-plane or "vertical" focusing is typically achieved by varying the magnetic field around the orbit, i.e. with azimuth. A cyclotron using this focusing method is thus called an azimuthally-varying field (AVF) cyclotron. The variation in field strength is provided by shaping the steel poles of the magnet into sectors which can have a shape reminiscent of a spiral and also have a larger area towards the outer edge of the cyclotron to improve the vertical focus of the particle beam. This solution for focusing the particle beam was proposed by L. H. Thomas in 1938 and almost all modern cyclotrons use azimuthally-varying fields.
The "horizontal" focusing happens as a natural result of cyclotron motion. Since for identical particles travelling perpendicularly to a constant magnetic field the trajectory curvature radius is only a function of their speed, all particles with the same speed will travel in circular orbits of the same radius, and a particle with a slightly incorrect trajectory will simply travel in a circle with a slightly offset center. Relative to a particle with a centered orbit, such a particle will appear to undergo a horizontal oscillation relative to the centered particle. This oscillation is stable for particles with a small deviation from the reference energy.
Longitudinal stability
The instantaneous level of synchronization between a particle and the RF field is expressed by the phase difference between the RF field and the particle. In the first harmonic mode (i.e. particles make one revolution per RF cycle) it is the difference between the instantaneous phase of the RF field and the instantaneous azimuth of the particle. Fastest acceleration is achieved when the phase difference equals 90° (modulo 360°). Poor synchronization, i.e. a phase difference far from this value, leads to the particle being accelerated slowly or even decelerated (outside of the 0–180° range).
As the time taken by a particle to complete an orbit depends only on the particle's type, the magnetic field (which may vary with the radius), and the Lorentz factor (see below), cyclotrons have no longitudinal focusing mechanism which would keep the particles synchronized to the RF field. The phase difference that the particle had at the moment of its injection into the cyclotron is preserved throughout the acceleration process, but errors from an imperfect match between the RF field frequency and the cyclotron frequency at a given radius accumulate on top of it. Failure of the particle to be injected with a phase difference within about ±20° of the optimum may make its acceleration too slow and its stay in the cyclotron too long. As a consequence, halfway through the process the phase difference escapes the 0–180° range, the acceleration turns into deceleration, and the particle fails to reach the target energy. Grouping of the particles into correctly synchronized bunches before their injection into the cyclotron thus greatly increases the injection efficiency.
Relativistic considerations
In the non-relativistic approximation, the cyclotron frequency does not depend upon the particle's speed or the radius of the particle's orbit. As the beam spirals outward, the rotation frequency stays constant, and the beam continues to accelerate as it travels a greater distance in the same time period. In contrast to this approximation, as particles approach the speed of light, the cyclotron frequency decreases due to the change in relativistic mass. This change is proportional to the particle's Lorentz factor.
The relativistic mass can be written as:

m = \frac{m_{0}}{\sqrt{1-\beta^{2}}} = \gamma m_{0}

where:
m_{0} is the particle rest mass,
\beta = v/c is the relative velocity, and
\gamma = \frac{1}{\sqrt{1-\beta^{2}}} is the Lorentz factor.

Substituting this into the equations for cyclotron frequency and angular frequency gives:

f = \frac{qB}{2\pi\gamma m_{0}}, \qquad \omega = 2\pi f = \frac{qB}{\gamma m_{0}}
The gyroradius for a particle moving in a static magnetic field is then given by:

r = \frac{\gamma m_{0}v}{qB}

Expressing the speed in this equation in terms of frequency and radius

v = 2\pi f r

yields the connection between the magnetic field strength, frequency, and radius:

B = \frac{2\pi\gamma m_{0}f}{q}
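The sketch below shows how the orbital frequency of a proton falls with kinetic energy according to f = qB/(2πγm₀); the 1.5 T field is an assumed example value.

import math

Q = 1.602176634e-19     # proton charge, C
M0 = 1.67262192e-27     # proton rest mass, kg
C = 299792458.0         # speed of light, m/s
B = 1.5                 # magnetic field, T (assumed)
MEV = 1.602176634e-13   # joules per MeV

def orbital_frequency(kinetic_energy_mev):
    # gamma = 1 + T / (m0*c^2); f = q*B / (2*pi*gamma*m0)
    gamma = 1.0 + kinetic_energy_mev * MEV / (M0 * C ** 2)
    return Q * B / (2 * math.pi * gamma * M0)

for t_mev in (0, 10, 250):
    print(f"T = {t_mev:3d} MeV: f = {orbital_frequency(t_mev) / 1e6:.2f} MHz")
# At 250 MeV the frequency is already about 21% below its non-relativistic value.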
Approaches to relativistic cyclotrons
Synchrocyclotron
Since the Lorentz factor \gamma increases as the particle reaches relativistic velocities, acceleration of relativistic particles requires modification of the cyclotron to ensure the particle crosses the gap at the same point in each RF cycle. If the frequency of the accelerating electric field is varied while the magnetic field is held constant, this leads to the synchrocyclotron.
In this type of cyclotron, the accelerating frequency is varied as a function of particle orbit radius such that:

f(r) = \frac{f_{0}}{\gamma(r)}
The decrease in accelerating frequency is tuned to match the increase in gamma for a constant magnetic field.
Isochronous cyclotron
If instead the magnetic field is varied with radius while the frequency of the accelerating field is held constant, this leads to the isochronous cyclotron.
Keeping the frequency constant allows isochronous cyclotrons to operate in a continuous mode, which makes them capable of producing much greater beam current than synchrocyclotrons. On the other hand, as precise matching of the orbital frequency to the accelerating field frequency is the responsibility of the magnetic field variation with radius, the variation must be precisely tuned.
Fixed-field alternating gradient accelerator (FFA)
An approach which combines static magnetic fields (as in the synchrocyclotron) and alternating gradient focusing (as in a synchrotron) is the fixed-field alternating gradient accelerator (FFA). In an isochronous cyclotron, the magnetic field is shaped by using precisely machined steel magnet poles. This variation provides a focusing effect as the particles cross the edges of the poles. In an FFA, separate magnets with alternating directions are used to focus the beam using the principle of strong focusing. The field of the focusing and bending magnets in an FFA is not varied over time, so the beam chamber must still be wide enough to accommodate a changing beam radius within the field of the focusing magnets as the beam accelerates.
Classifications
Cyclotron types
There are a number of basic types of cyclotron, including the classical cyclotron, the synchrocyclotron, the isochronous cyclotron, and the superconducting cyclotron.
Beam types
The particles for cyclotron beams are produced in ion sources of various types.
Target types
To make use of the cyclotron beam, it must be directed to a target.
Usage
Basic research
For several decades, cyclotrons were the best source of high-energy beams for nuclear physics experiments. With the advent of strong focusing synchrotrons, cyclotrons were supplanted as the accelerators capable of producing the highest energies. However, due to their compactness, and therefore lower expense compared to high energy synchrotrons, cyclotrons are still used to create beams for research where the primary consideration is not achieving the maximum possible energy. Cyclotron based nuclear physics experiments are used to measure basic properties of isotopes (particularly short lived radioactive isotopes) including half life, mass, interaction cross sections, and decay schemes.
Medical uses
Radioisotope production
Cyclotron beams can be used to bombard other atoms to produce short-lived isotopes with a variety of medical uses, including medical imaging and radiotherapy. Positron and gamma emitting isotopes, such as fluorine-18, carbon-11, and technetium-99m are used for PET and SPECT imaging. While cyclotron produced radioisotopes are widely used for diagnostic purposes, therapeutic uses are still largely in development. Proposed isotopes include astatine-211, palladium-103, rhenium-186, and bromine-77, among others.
Beam therapy
The first suggestion that energetic protons could be an effective treatment method was made by Robert R. Wilson in a paper published in 1946 while he was involved in the design of the Harvard Cyclotron Laboratory.
Beams from cyclotrons can be used in particle therapy to treat cancer. Ion beams from cyclotrons can be used, as in proton therapy, to penetrate the body and kill tumors by radiation damage, while minimizing damage to healthy tissue along their path.
As of 2020, there were approximately 80 facilities worldwide for radiotherapy using beams of protons and heavy ions, consisting of a mixture of cyclotrons and synchrotrons. Cyclotrons are primarily used for proton beams, while synchrotrons are used to produce heavier ions.
Advantages and limitations
The most obvious advantage of a cyclotron over a linear accelerator is that because the same accelerating gap is used many times, it is both more space efficient and more cost efficient; particles can be brought to higher energies in less space, and with less equipment. The compactness of the cyclotron reduces other costs as well, such as foundations, radiation shielding, and the enclosing building. Cyclotrons have a single electrical driver, which saves both equipment and power costs. Furthermore, cyclotrons are able to produce a continuous beam of particles at the target, so the average power passed from a particle beam into a target is relatively high compared to the pulsed beam of a synchrotron.
However, as discussed above, a constant frequency acceleration method is only possible when the accelerated particles are approximately obeying Newton's laws of motion. If the particles become fast enough that relativistic effects become important, the beam becomes out of phase with the oscillating electric field, and cannot receive any additional acceleration. The classical cyclotron (constant field and frequency) is therefore only capable of accelerating particles up to a few percent of the speed of light. Synchro-, isochronous, and other types of cyclotrons can overcome this limitation, with the tradeoff of increased complexity and cost.
An additional limitation of cyclotrons is due to space charge effects – the mutual repulsion of the particles in the beam. As the amount of particles (beam current) in a cyclotron beam is increased, the effects of electrostatic repulsion grow stronger until they disrupt the orbits of neighboring particles. This puts a functional limit on the beam intensity, or the number of particles which can be accelerated at one time, as distinct from their energy.
Notable examples
Superconducting cyclotron examples
A superconducting cyclotron uses superconducting magnets to achieve high magnetic field in a small diameter and with lower power requirements. These cyclotrons require a cryostat to house the magnet and cool it to superconducting temperatures. Some of these cyclotrons are being built for medical therapy.
Related technologies
The spiraling of electrons in a cylindrical vacuum chamber within a transverse magnetic field is also employed in the magnetron, a device for producing high frequency radio waves (microwaves). In the magnetron, electrons are bent into a circular path by a magnetic field, and their motion is used to excite resonant cavities, producing electromagnetic radiation.
A betatron uses the change in the magnetic field to accelerate electrons in a circular path. While static magnetic fields cannot provide acceleration, as the force always acts perpendicularly to the direction of particle motion, changing fields can be used to induce an electromotive force in the same manner as in a transformer. The betatron was developed in 1940, although the idea had been proposed substantially earlier.
A synchrotron is another type of particle accelerator that uses magnets to bend particles into a circular trajectory. Unlike in a cyclotron, the particle path in a synchrotron has a fixed radius. Particles in a synchrotron pass accelerating stations at increasing frequency as they get faster. To compensate for this frequency increase, both the frequency of the applied accelerating electric field and the magnetic field must be increased in tandem, leading to the "synchro" portion of the name.
In fiction
The United States Department of War famously asked for dailies of the Superman comic strip to be pulled in April 1945 for having Superman bombarded with the radiation from a cyclotron.
In the 1984 film Ghostbusters, a miniature cyclotron forms part of the proton pack used for catching ghosts.
See also
Cyclotron radiation – radiation produced by non-relativistic charged particles bent by a magnetic field
Fast neutron therapy – a type of beam therapy that may use accelerator produced beams
Microtron – an accelerator concept similar to the cyclotron which uses a linear accelerator type accelerating structure with a constant magnetic field.
Radiation reaction force – a braking force on beams that are bent in a magnetic field
Notes
References
Further reading
About a neighborhood cyclotron in Anchorage, Alaska.
An experiment done by Fred M. Niell, III his senior year of high school (1994–95) with which he won the overall grand prize in the ISEF.
External links
Current facilities
The 88-Inch Cyclotron at Lawrence Berkeley National Laboratory
PSI Proton Accelerator – the highest beam current cyclotron in the world.
The Superconducting Ring Cyclotron at the RIKEN Nishina Center for Accelerator Based Science – the highest energy cyclotron in the world
Rutgers Cyclotron – Students at Rutgers University built a 1 MeV cyclotron as an undergraduate project, which is now used for a senior-level undergraduate and a graduate lab course.
TRIUMF – the largest single-magnet cyclotron in the world.
Historic cyclotrons
Ernest Lawrence's Cyclotron A history of cyclotron development at the Berkeley Radiation Laboratory, now Lawrence Berkeley National Laboratory
National Superconducting Cyclotron Laboratory of the Michigan State University – Home of coupled K500 and K1200 superconducting cyclotrons; the K500, the first superconducting cyclotron, and the K1200, formerly the most powerful in the world.
1932 introductions
Accelerator physics
American inventions
Nuclear medicine
Particle accelerators | Cyclotron | Physics | 5,712 |
8,461,487 | https://en.wikipedia.org/wiki/CCL8 | Chemokine (C-C motif) ligand 8 (CCL8), also known as monocyte chemoattractant protein 2 (MCP2), is a protein that in humans is encoded by the CCL8 gene.
CCL8 is a small cytokine belonging to the CC chemokine family. The CCL8 protein is produced as a precursor containing 109 amino acids, which is cleaved to produce mature CCL8 containing 75 amino acids. The gene for CCL8 is encoded by 3 exons and is located within a large cluster of CC chemokines on chromosome 17q11.2 in humans. MCP-2 is chemotactic for and activates many different immune cells, including mast cells, eosinophils and basophils, (that are implicated in allergic responses), and monocytes, T cells, and NK cells that are involved in the inflammatory response. CCL8 elicits its effects by binding to several different cell surface receptors called chemokine receptors. These receptors include CCR1, CCR2B, CCR3 and CCR5.
CCL8 is a CC chemokine that utilizes multiple cellular receptors to attract and activate human leukocytes. CCL8 is a potent inhibitor of HIV-1 by virtue of its high-affinity binding to the receptor CCR5, one of the major co-receptors for HIV-1. In addition, CCL8 contributes to the growth of metastases in breast cancer cells; manipulation of this chemokine's activity influences tumor histology and promotes steps of the metastatic process. CCL8 is also involved in attracting macrophages to the decidua in labor.
References
External links
Further reading
Cytokines | CCL8 | Chemistry | 370 |
19,217,617 | https://en.wikipedia.org/wiki/Inocybe%20geophylla | Inocybe geophylla, commonly known as the earthy inocybe, common white inocybe or white fibercap, is a poisonous mushroom of the genus Inocybe. It is widespread and common in Europe and North America, appearing under both conifer and deciduous trees in summer and autumn. The fruiting body is a small all-white or cream mushroom with a fibrous silky umbonate cap and adnexed gills. An all-lilac variety lilacina is also common.
Taxonomy and naming
It was first described in 1799 as Agaricus geophyllus by English naturalist James Sowerby in his work Coloured Figures of English Fungi or Mushrooms. Christiaan Hendrik Persoon spelt it Agaricus geophilus in his 1801 work Synopsis methodica fungorum. Its specific epithet is derived from the Ancient Greek terms geo- "earth", and phyllon "leaf". It was given its current binomial name in 1871 by Paul Kummer.
A lilac form is known as var. lilacina; it was originally described as Agaricus geophyllus var. lilacinus by American mycologist Charles Horton Peck in 1872, who came across it in Bethlehem, New York. It was given its current name by Claude Casimir Gillet in 1876. It was classified as a separate species in 1918 by Calvin Henry Kauffman, who felt that it was consistently different and grew in different locales. A 2005 study of nuclear genes found that I. geophylla was closely related to I. fuscodisca, while I. lilacina came out as in a lineage with I. agglutinata and I. pudica.
Description
The cap is in diameter and white or cream-coloured with a silky texture, at first conical before flattening out to a more convex shape with a pronounced umbo (boss). The cap margins may split with age. The thin stipe is high and 0.3–0.6 cm thick and lacks a ring. It has a small bulb at the base, and often does not grow straight. The crowded gills are adnexed and cream early, before darkening to a brownish colour with the developing spores. The spore print is brown. The almond-shaped spores are smooth and measure around 9 × 5 μm. The faint smell has been likened to meal, damp earth, or even described as spermatic. The white or cream flesh has an acrid taste and does not change colour when cut or bruised.
Similar species
Larger mushrooms can be confused with members of the genus Tricholoma or the edible Calocybe gambosa, though these have a mealy smell and gills that remain white. In Israel, it is confused with edible mushrooms of the genus Tricholoma, particularly Tricholoma terreum, and Suillus granulatus, all of which grow in similar habitat. In North America it resembles mushrooms of the genus Camarophyllus.
The variety lilacina is similar in shape but tinted lilac all over, with an ochre-brown flush on the cap umbo and the base of the stem. It has a strong mealy or earthy odour. This variety could be mistaken for the edible amethyst deceiver (Laccaria amethystina), although the latter species has a fibrous stipe, a fruity smell and lacks the ochre-coloured umbo. It is a similar coloration to the wood blewit (Collybia nuda), although mushrooms of that species generally grow much larger.
I. pudica is also similar.
Distribution and habitat
Inocybe geophylla is common and widespread across Europe and North America. In western North America it is found under live oak, pine and Douglas fir. Both varieties are found in the Canadian Arctic regions of northern Manitoba and North West Territories, with the nominate form found in dryish tundra heath communities composed of American dwarf birch (Betula glandulosa), Arctic willow (Salix arctica), dwarf willow (S. herbacea), polar willow (S. polaris ssp. pseudopolaris), snow willow (Salix reticulata), bog bilberry (Vaccinium uliginosum var. alpinum), lingonberry (V. vitis-idaea var. minus), alpine bearberry (Arctostaphylos alpina), alpine bistort (Persicaria vivipara), Arctic bell-heather (Cassiope tetragona) and northern white mountain avens (Dryas integrifolia) and var. lilacina in moist mossy tundra heaths, alongside such plants as American dwarf birch, snow willow, Arctic bell-heather and northern white mountain avens. It is mycorrhizal, the fruiting bodies are found in deciduous and coniferous woodlands in summer and autumn. Within these locations, fruiting bodies may be found in grassy areas and near pathways, or often on rich, bare soil that has been disturbed at roadsides, and near ditches.
In Israel, I. geophylla grows under Palestine oak (Quercus calliprinos) and pines, with mushrooms still appearing in periods of little or no rain as they are mycorrhizal.
In Western Australia, Brandon Matheny and Neale Bougher (2005) pointed to collections of what was referred to as I. geophylla var. lilacina by some Australian taxonomists, as a misapplication of the name I. geophylla var. lilacina; the specimens have been reclassified as the species Inocybe violaceocaulis.
Toxicity
Like many fibrecaps, Inocybe geophylla contains muscarine. The symptoms are those of muscarine poisoning, namely, greatly increased salivation, perspiration (sweating), and lacrimation (tear flow) within 15–30 minutes of ingestion. With large doses, these symptoms may be followed by abdominal pain, severe nausea, diarrhea, blurred vision, and labored breathing. Intoxication generally subsides within two hours. Delirium does not occur.
The specific antidote is atropine. Inducing vomiting to remove mushroom contents is also prudent due to the speed of onset of symptoms. Death has not been recorded as a result of consuming this species. It is often ignored by mushroom hunters because of its small size.
References
geophylla
Fungi of Europe
Fungi of North America
Poisonous fungi
Fungi described in 1799
Taxa named by James Sowerby
Fungus species | Inocybe geophylla | Biology,Environmental_science | 1,370 |
67,816,840 | https://en.wikipedia.org/wiki/Mack%20Rides%20BigDipper | BigDipper is a type of steel roller coaster made by Mack Rides. The first installation of its kind, Lost Gravity, opened at Walibi Holland in March 2016.
Driving system
The trains run on steel rails. The short trains make it possible to negotiate very tight curve radii. The roller coasters are driven by a chain lift hill; after the lift, the car covers the rest of the course solely through gravity.
Trains
The trains consist of individual wagons with two rows each. In each row there are four seats side by side, with the two inner seats placed above the rail and the outer two beside the rail. The outer seats are also slightly offset upwards. This gives space for eight riders in one train.
Installations
References
External links
List of Installations on Roller Coaster Database
Roller coaster elements | Mack Rides BigDipper | Technology | 170 |
36,855,625 | https://en.wikipedia.org/wiki/T%20Cygni | T Cygni is a binary star system in the northern constellation of Cygnus. It is a faint system but visible to the naked eye with a combined apparent visual magnitude of 4.93. Based upon an annual parallax shift of , it is located 387 light years away. It is moving closer to the Earth with a heliocentric radial velocity of −24 km/s.
The primary, component A, is a variable star, most likely of the slow irregular type, which ranges in magnitude from 4.91 down to 4.96. It is a giant star with a stellar classification of K3 III, which indicates it has exhausted the hydrogen at its core and evolved away from the main sequence. The star has expanded to 28 times the radius of the Sun. It is radiating 241 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,285 K.
The secondary companion, component B, is a magnitude 10.03 star located at an angular separation of along a position angle of 120°, as of 2012. In 1877 it was separated by with nearly the same position angle (121°).
References
K-type giants
Slow irregular variables
Cygnus (constellation)
Durchmusterung objects
198134
102571
7956
Cygni, T | T Cygni | Astronomy | 267 |
6,173,472 | https://en.wikipedia.org/wiki/Spatial%20ETL | Spatial extract, transform, load (spatial ETL), also known as geospatial transformation and load (GTL), is a process for managing and manipulating geospatial data, for example map data. It is a type of extract, transform, load (ETL) process, with software tools and libraries specialised for geographical information.
A common use of spatial ETL is to convert geographical information from a data source into another format that can be more easily used, for example by importing it into GIS software. A tool may translate data directly from one format to another, or via an intermediate format. Intermediate formats are often used when data transformation must be carried out.
Origins and history
Although ETL tools for processing non-spatial data have existed for some time, ETL tools that can manage the unique characteristics of spatial data only emerged in the early 1990s.
Spatial ETL tools emerged in the GIS industry to enable interoperability (or the exchange of information) between the industry's diverse array of mapping applications and associated proprietary formats. However, spatial ETL tools are also becoming increasingly important in the realm of management information systems as a tool to help organizations integrate spatial data with their existing non-spatial databases, and also to leverage their spatial data assets to develop more competitive business strategies.
Traditionally, GIS applications have had the ability to read or import a limited number of spatial data formats, but with few specialist ETL transformation tools; the concept being to import data then carry out step-by-step transformation or analysis within the GIS application itself. Conversely, spatial ETL does not require the user to import or view the data, and generally carries out its tasks in a single predefined process.
With the push to achieve greater interoperability within the GIS industry, many existing GIS applications are now incorporating spatial ETL tools within their products; the ArcGIS Data Interoperability Extension being an example of this.
Transformation
The transformation phase of a spatial ETL process allows a variety of functions; some of these are similar to standard ETL, but some are unique to spatial data. Spatial data commonly consists of a geographic element and related attribute data; therefore spatial ETL transformations are often described as being either geometric transformations – transformation of the geographic element – or attribute transformations – transformations of the related attribute data.
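As a concrete illustration of a geometric transformation, the sketch below reprojects a point feature between two coordinate reference systems using the pyproj library; the coordinate systems and the sample point are illustrative assumptions rather than part of any particular workflow.

from pyproj import Transformer

# Reproject from WGS 84 geographic coordinates (EPSG:4326) to
# Web Mercator (EPSG:3857); always_xy=True keeps (longitude, latitude) ordering.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

features = [
    {"name": "sample_point", "lon": 151.2093, "lat": -33.8688},
]

for feature in features:
    x, y = transformer.transform(feature["lon"], feature["lat"])
    print(f"{feature['name']}: x = {x:.1f} m, y = {y:.1f} m")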
Common geospatial transformations
Reprojection: the ability to convert spatial data between one coordinate system and another.
Spatial transformations: the ability to model spatial interactions and calculate spatial predicates
Topological transformations: the ability to create topological relationships between disparate datasets
Resymbolisation: the ability to change the cartographic characteristics of a feature, such as colour or line-style
Geocoding: the ability to convert attributes of tabular data into spatial data
Additional features
Desirable features of a spatial ETL application are:
Data comparison: Ability to carry out change detection and perform incremental updates
Conflict management: Ability to manage conflicts between multiple users of the same data
Data dissemination: Ability to publish data via the internet or deliver by email regardless of source format
Semantic processing: Ability to understand the rules of different data formats to minimize user input whilst preserving meaning
Uses
Spatial ETL has a number of distinct uses:
Data cleansing: The removal of errors within a dataset
Data merging: The bringing together of multiple datasets into a common framework – conflation is a good example of this
Data verification: The comparison of multiple datasets for verification and quality assurance purposes
Data conversion: Conversion between different data formats.
Examples of spatial ETL tools
FME (Feature Manipulation Engine)
GDAL (Geospatial Data Abstraction Library)
See also
Business intelligence
Object–relational database
Spatial database
References
Geographic information systems
Extract, transform, load tools | Spatial ETL | Technology | 764 |
6,893,752 | https://en.wikipedia.org/wiki/Native%20contact | In protein folding, a native contact is a contact between the side chains of two amino acids that are not neighboring in the amino acid sequence (i.e., they are more than four residues apart in the primary sequence in order to remove trivial i to i+4 contacts along alpha helices) but are spatially close in the protein's native state tertiary structure. The fraction of native contacts reproduced in a particular structure is often used as a reaction coordinate for measuring the deviation from the native state of structures produced during molecular dynamics simulations or in benchmarks of protein structure prediction methods.
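A minimal sketch of computing the fraction of native contacts (often denoted Q) for a trial structure from C-alpha coordinates is shown below; the 8 Å distance cutoff is an illustrative assumption, while the sequence-separation rule follows the definition above.

import numpy as np

def contact_map(coords, cutoff=8.0, min_separation=5):
    # Boolean matrix of residue pairs closer than `cutoff` angstroms,
    # excluding pairs fewer than `min_separation` residues apart in sequence.
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    separation = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return (dists < cutoff) & (separation >= min_separation)

def fraction_native_contacts(native_coords, trial_coords, cutoff=8.0):
    # Fraction of native-state contacts that are also present in the trial structure.
    native = contact_map(native_coords, cutoff)
    trial = contact_map(trial_coords, cutoff)
    n_native = native.sum()
    return (native & trial).sum() / n_native if n_native else 0.0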
The contact order is a measure of the locality of a protein's native contacts; that is, the sequence distance between amino acids that form contacts. Proteins with low contact order are thought to fold faster and some may be candidates for downhill folding.
References
Protein structure | Native contact | Chemistry,Biology | 171 |
197,213 | https://en.wikipedia.org/wiki/Lead%20styphnate | Lead styphnate (lead 2,4,6-trinitroresorcinate, C6HN3O8Pb ), whose name is derived from styphnic acid, is an explosive used as a component in primer and detonator mixtures for less sensitive secondary explosives. Lead styphnate is only slightly soluble in water and methanol. Samples of lead styphnate vary in color from yellow to gold, orange, reddish-brown, to brown. Lead styphnate is known in various polymorphs, hydrates, and basic salts. Normal lead styphnate monohydrate, monobasic lead styphnate, tribasic lead styphnate dihydrate, and pentabasic lead styphnate dehydrate as well as α, β polymorphs of lead styphnate exist.
Lead styphnate forms six-sided crystals of the monohydrate and small rectangular crystals. Lead styphnate is particularly sensitive to fire and the discharge of static electricity. Long thin crystals are particularly sensitive. Lead styphnate does not react with other metals and is less sensitive to shock and friction than mercury fulminate or lead azide. It is stable in storage, even at elevated temperatures. As with other lead-containing compounds, lead styphnate is toxic owing to heavy metal poisoning.
Preparation
Lead styphnate (or, as it was then called, trinitro-orcinate) was discovered along with many other trinitroresorcinate salts by British chemist John Stenhouse in 1871, the synthesis route involving the action of trinitroresorcinol on lead acetate.
In 1919, Austrian chemist Edmund von Herz first established a preparation of anhydrous normal lead styphnate by the reaction of magnesium styphnate with lead acetate in the presence of nitric acid.
{C6HN3O8}Mg·H2O + Pb(CH3CO2)2 → {C6HN3O8}Pb·H2O + Mg(CH3CO2)2
Structure
Normal lead styphnate exists as α and β polymorphs, both being monoclinic crystals. The lead centres are seven-coordinate and are bridged via oxygen bridges. The water molecule is coordinated to the metal and is also hydrogen-bonded to the anion. Many of the Pb–O distances are short, indicating some degree of covalency. The styphnate ions lie in approximately parallel planes linked by Pb atoms.
Properties
Lead styphnate's heat of formation is −835 kJ mol⁻¹. The loss of water leads to the formation of a sensitive anhydrous material with a density of 2.9 g cm⁻³. The variation of colors remains unexplained. Lead styphnate has a detonation velocity of 5.2 km/s and an explosion temperature of 265–280 °C after five seconds.
Applications
Lead styphnate is mainly used in small arms ammunition for military and commercial applications. It serves as a primary explosive used in firearms primers, which will ignite upon a simple impact. It is similarly used in blank cartridges for powder-actuated nail guns. Lead styphnate is also used as primer in microthrusters for small satellite stationkeeping.
References
External links
National Pollutant Inventory - Lead and Lead Compounds Fact Sheet
Lead(II) compounds
Explosive chemicals
Nitrobenzene derivatives
Phenolates | Lead styphnate | Chemistry | 746 |
42,441,880 | https://en.wikipedia.org/wiki/Advisory%20circular |
Advisory circular (AC) refers to a type of publication offered by the Federal Aviation Administration (FAA) to "provide a single, uniform, agency-wide system … to deliver advisory (non-regulatory) material to the aviation community." Advisory circulars are now harmonized with soft law Acceptable Means of Compliance (AMC) publications of EASA, which are nearly identical in content. The FAA's Advisory Circular System is defined in FAA Order 1320.46D.
By writing advisory circulars, the FAA can provide guidance for compliance with airworthiness regulations, pilot certifications, operational standards, training standards, and any other rules within Title 14 of the Code of Federal Regulations (Aeronautics and Space), also known as 14 CFR or the FARs. The FAA also uses advisory circulars to officially recognize "acceptable means, but not the only means," of accomplishing or showing compliance with airworthiness regulations. Advisory circulars may also contain explanations, clarifications, best practices, or other information of use to the aviation community.
Usage
Advisory circulars can recognize industry standards from SAE (ARP), RTCA (DO), and others. With harmonization of technical content and guidance between EASA and the FAA, later advisory circulars also identify corresponding EUROCAE (ED) publications.
Some advisory circulars are only a few pages long and do little more than reference a recommended standard; for example, AC 20-152 referencing DO-254. Others, like AC 20-115C/D, are considerably longer; in this case including guidance on how to transition from DO-178 revision B to C while AC 20-152A adds several new objectives to an otherwise unchanged DO-254.
Relation to regulations
Generally informative in nature, Advisory Circulars are neither binding nor regulatory; yet some have the effect of de facto standards or regulations. The FAA establishes regulation of U.S. civil airspace through issuance of Federal Aviation Regulations (FAR). Issuing or amending FARs requires a potentially lengthy period of public commentary and agency reflection on proposed rule making before they may be issued for enforcement. In practice, advisory circulars have essential roles for public compliance with the regulations. The FAA relies on the advisory circular system to
"Provide an acceptable, clearly understood method for complying with a regulation"
"Standardize implementation of a regulation or harmonize implementation for the international aviation community"
"Resolve a general misunderstanding of a regulation"
"Help the industry and FAA effectively implement a regulation"
In contrast with the lengthy processes of FARs, advisory circulars may be published with little or no advanced notice or distribution. A concern is that the content of advisory circulars should not have the effect of de facto amendments to regulations. In general, the FAA may not "use an AC to add, reduce, or change a regulatory requirement."
See also
Airworthiness Directive (in comparison, airworthiness directives are legally enforceable rules)
References
Avionics
Federal Aviation Administration | Advisory circular | Technology | 600 |
23,128,285 | https://en.wikipedia.org/wiki/Gliese%20752 | Gliese 752 is a binary star system in the Aquila constellation. This system is relatively nearby, at a distance of .
The Gliese 752 system consists of two M-type stars. The primary star is the magnitude 9 Gliese (GJ) 752 A. The secondary star is the dim magnitude 17 Gliese (GJ) 752 B, more commonly referred to as VB 10. This stellar pair form a binary star system separated by about 74 arc seconds (~434 AU). This system is also known for its high proper motion of about 1 arc second a year. Component A has one known exoplanet.
The name and number are from the Catalogue of Nearby Stars, published by German astronomer Wilhelm Gliese in 1969.
Gliese 752 A characteristics
The primary star, also known as Wolf 1055, is a type M2.5 red dwarf with about half the size and mass of the Sun and considerably cooler at . This star was first observed to be a high proper motion star by the German astronomer Max Wolf with his pioneering use of astrophotography. He added this star to his extensive catalog of such stars in 1919. It is a variable star with the variable star catalog name V1428 Aquilae. It is a BY Draconis type variable star subject to flare events.
Planetary system
In August 2018, a group of scientists using measurements taken from the CARMENES spectrograph, on the Calar Alto Observatory located in Spain, announced they had detected a planet orbiting the larger of the stars, HD 180617 (Gliese 752 A). The measurements indicated the presence of a planet with a minimum mass comparable to Neptune on an orbit partly located within the habitable zone.
Gliese 752 B characteristics
Gliese 752 was not known to be a binary star system until the discovery of a small dim secondary star by George Van Biesbroeck in 1944. This star is identified as VB 10 in Van Biesbroeck's star catalog. This star is notable for its very low mass. At 0.08 solar masses, it is near the lower mass limit for a star. It is also quite small at 10% of the solar radius. A type M8V red dwarf, the star is known for its very low luminosity (one of the least luminous stars yet observed) with an absolute magnitude of nearly 19, due to its very cool surface temperature of only 2,600 K.
In 2009, the discovery of the extrasolar planet, VB 10b, was announced in orbit around this star. However a subsequent spectrographic survey failed to confirm the presence of any large planets in orbit around this star.
Magnetic field
In 1994, the Hubble Space Telescope observed a stellar flare on Gliese 752 B. This suggests that the star has a strong magnetic field, which came as a surprise to astronomers. It had previously been assumed that low mass red dwarfs would have insignificant or nonexistent magnetic fields. These tiny dwarfs were thought to lack the radiative zone just outside the star's core that, in more massive stars like the Sun, helps drive the magnetic-field-creating dynamo. Nevertheless, the detection of such flares indicates that an as-yet-unknown process allows low mass stars to produce sufficient magnetic fields to power such outbursts, even if solely by convection, without a radiative core.
See also
Binary star
List of least massive stars
References
Aquila (constellation)
094761
M-type main-sequence stars
Flare stars
180617
0752
Binary stars
Planetary systems with one confirmed planet
BD+04 4048
Aquilae, V1428/V1298
BY Draconis variables | Gliese 752 | Astronomy | 814 |
56,183,086 | https://en.wikipedia.org/wiki/Ivan%20Marusic | Ivan Marusic (born 1965) is an Australian engineer and physicist. He is known for his work on turbulence at high Reynolds number, using both theoretical and experimental approaches.
Marusic was born to Croatian parents in Široki Brijeg in Bosnia and Herzegovina. He emigrated to Australia when he was three years old along with his parents and older sister. He grew up in Melbourne.
He received his PhD in 1992 and a bachelor's degree in mechanical engineering in 1987 from the University of Melbourne. From 1998 to 2002 he was a faculty member at the University of Minnesota, USA, where he was a recipient of an NSF Career Award, Packard Fellowship in Science and Engineering and Taylor Career Development Award. He received an ARC Federation Fellowship in 2006, ARC Laureate Fellowship in 2012 and since 2014 is an elected Fellow of the Australian Academy of Science. In 2010 Marusic was elected a Fellow of the American Physical Society. He was awarded a 2016 APS Stanley Corrsin Award for fluid dynamics research. He was elected a Fellow of the Australian Academy of Technology and Engineering in 2021 and of the Royal Society in 2024.
References
Living people
1965 births
Australian physicists
Australian engineers
Fellows of the Australian Academy of Science
Fellows of the American Physical Society
Fellows of the Australian Academy of Technological Sciences and Engineering
Fellows of the Royal Society
Fluid dynamicists | Ivan Marusic | Chemistry | 266 |
8,736,036 | https://en.wikipedia.org/wiki/Outline%20of%20the%20Internet | The following outline is provided as an overview of and topical guide to the Internet.
The Internet is a worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of interconnected smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web.
Internet features
Hosting –
File hosting –
Web hosting
E-mail hosting
DNS hosting
Game servers
Wiki farms
World Wide Web –
Websites –
Web applications –
Webmail –
Online shopping –
Online auctions –
Webcomics –
Wikis –
Voice over IP
IPTV
Internet communication technology
Internet infrastructure
Critical Internet infrastructure –
Internet access –
Internet access in the United States –
Internet service provider –
Internet backbone –
Internet exchange point (IXP) –
Internet standard –
Request for Comments (RFC) –
Internet communication protocols
Internet protocol suite –
Link layer
Link layer –
Address Resolution Protocol (ARP/InARP) –
Neighbor Discovery Protocol (NDP) –
Open Shortest Path First (OSPF) –
Tunneling protocol (Tunnels) –
Layer 2 Tunneling Protocol (L2TP) –
Point-to-Point Protocol (PPP) –
Medium access control –
Ethernet –
Digital subscriber line (DSL) –
Integrated Services Digital Network (ISDN) –
Fiber Distributed Data Interface (FDDI) –
Internet layer
Internet layer –
Internet Protocol (IP) –
IPv4 –
IPv6 –
Internet Control Message Protocol (ICMP) –
ICMPv6 –
Internet Group Management Protocol (IGMP) –
IPsec –
Transport layer
Transport layer –
Transmission Control Protocol (TCP) –
User Datagram Protocol (UDP) –
Datagram Congestion Control Protocol (DCCP) –
Stream Control Transmission Protocol (SCTP) –
Resource reservation protocol (RSVP) –
Explicit Congestion Notification (ECN) –
QUIC
Application layer
Application layer –
Border Gateway Protocol (BGP) –
Dynamic Host Configuration Protocol (DHCP) –
Domain Name System (DNS) –
File Transfer Protocol (FTP) –
Hypertext Transfer Protocol (HTTP) –
Internet Message Access Protocol (IMAP) –
Internet Relay Chat (IRC) –
LDAP –
Media Gateway Control Protocol (MGCP) –
Network News Transfer Protocol (NNTP) –
Network Time Protocol (NTP) –
Post Office Protocol (POP) –
Routing Information Protocol (RIP) –
Remote procedure call (RPC) –
Real-time Transport Protocol (RTP) –
Session Initiation Protocol (SIP) –
Simple Mail Transfer Protocol (SMTP) –
Simple Network Management Protocol (SNMP) –
SOCKS –
Secure Shell (SSH) –
Telnet –
Transport Layer Security (TLS/SSL) –
Extensible Messaging and Presence Protocol (XMPP) –
History of the Internet
Networks prior to the Internet
NPL network – a local area computer network operated by a team from the National Physical Laboratory in England, the first to implement packet switching, the design of which influenced other networks that followed.
ARPANET – the first wide-area packet switching network, developed by the Advanced Research Projects Agency in the United States, and one of the first networks to implement the TCP/IP protocol suite which later became a technical foundation of the Internet.
SATNET – an early satellite packet-switched network, also developed by the Advanced Research Projects Agency, which implemented TCP/IP before the ARPANET.
Merit Network – a computer network created in 1966 to connect the mainframe computers at universities that is currently the oldest running regional computer network in the United States.
CYCLADES – a French research network created in the early 1970s that pioneered the concept of internetworking by making the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself.
Computer Science Network (CSNET) – a computer network created in the United States for computer science departments at academic and research institutions that could not be directly connected to ARPANET, due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking and was a major milestone on the path to development of the global Internet.
National Science Foundation Network (NSFNET) – an American networking project, initially created to link researchers to the NSF-funded supercomputing centers that, through further public funding and private industry partnerships, developed into a major part of the early Internet backbone.
History of Internet components
History of packet switching – a method of grouping data into packets that are transmitted over a digital network, conceived independently by Paul Baran and Donald Davies in the early and mid-1960s.
History of communication protocols – the set of rules to enable data communication between computers on a network.
History of internetworking – networking between computers on different networks.
very high speed Backbone Network Service (vBNS) –
Network access point (NAP) –
Federal Internet Exchange (FIX) –
Commercial Internet eXchange (CIX) –
List of Internet pioneers
Timeline of Internet conflicts
Internet usage
Global Internet usage
Internet traffic
List of countries by number of Internet users
List of European countries by number of Internet users
List of sovereign states by number of broadband Internet subscriptions
List of sovereign states by number of Internet hosts
Languages used on the Internet
List of countries by IPv4 address allocation
Internet Census of 2012
Internet politics
Internet privacy – a subset of data privacy concerning the right to privacy from third parties including corporations and governments on the Internet.
Censorship – the suppression of speech, public communication, or other information, on the basis that such material is considered objectionable, harmful, sensitive, politically incorrect or "inconvenient" as determined by government authorities or by community consensus.
Censorship by country – the extent of censorship varies between countries and sometimes includes restrictions to freedom of the Press, freedom of speech, and human rights.
Internet censorship – the control or suppression of what can be accessed, published, or viewed on the Internet enacted by regulators or self-censorship.
Content control software – a type of software that restricts or controls the content an Internet user is capable to access.
Internet censorship and surveillance by country
Internet censorship circumvention – the use of techniques and processes to bypass filtering and censored online materials.
Internet law – law governing the Internet, including dissemination of information and software, information security, electronic commerce, intellectual property in computing, privacy, and freedom of expression.
Internet organizations
Domain name registry or Network Information Center (NIC) – a database of all domain names and the associated registrant information in the top level domains of the Domain Name System of the Internet that allow third party entities to request administrative control of a domain name.
Private sub-domain registry – an NIC which allocates domain names in a subset of the Domain Name System under a domain registered with an ICANN-accredited or ccTLD registry.
Internet Society (ISOC) – an American non-profit organization founded in 1992 to provide leadership in Internet-related standards, education, access, and policy.
InterNIC (historical) – the organization primarily responsible for Domain Name System (DNS) domain name allocations until 2011 when it was replaced by ICANN.
Internet Corporation for Assigned Names and Numbers (ICANN) – a nonprofit organization responsible for coordinating the maintenance and procedures of several databases related to the namespaces of the Internet, ensuring the network's stable and secure operation.
Internet Assigned Numbers Authority (IANA) – a department of ICANN which allocates domain names and maintains IP addresses.
Internet Activities Board (IAB) –
Internet Engineering Task Force (IETF) –
Non-profit Internet organizations
Advanced Network and Services (ANS) (historical) –
Internet2 –
Merit Network –
North American Network Operators' Group (NANOG) –
Commercial Internet organizations
Amazon.com –
ANS CO+RE (historical) –
Google – an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, search engine, cloud computing, software, and hardware.
Cultural and societal implications of the Internet
Sociology – the scientific study of society, including patterns of social relationships, social interaction, and culture.
Sociology of the Internet – the application of sociological theory and methods to the Internet, including analysis of online communities, virtual worlds, and organizational and social change catalyzed through the Internet.
Digital sociology – a sub-discipline of sociology that focuses on understanding the use of digital media as part of everyday life, and how these various technologies contribute to patterns of human behavior, social relationships and concepts of the self.
Internet culture
List of web awards
Underlying technology
MOSFET (MOS transistor)
CMOS (complementary MOS)
LDMOS (lateral diffused MOS)
Power MOSFET
RF CMOS (radio frequency CMOS)
Optical networking
Fiber-optic communication
Laser
Optical fiber
Telecommunications network
Modem
Telecommunication circuit
Wireless network
Base station
Cellular network
RF power amplifier
Router
Transceiver
By region
Internet in Africa
By country
Internet in Afghanistan
Internet in Australia
Internet in Azerbaijan
Internet in China
Internet in Egypt
Internet in Myanmar
Internet in New Zealand
Internet in the Philippines
Internet in South Africa
Internet in the United Kingdom
Internet in the United States
See also
Outline of information technology
Further reading
Yeo, ShinJoung. (2023) Behind the Search Box: Google and the Global Internet Industry (U of Illinois Press, 2023) ISBN 10:0252087127 online
Internet
Internet | Outline of the Internet | Technology | 1,940 |
31,949,462 | https://en.wikipedia.org/wiki/National%20Institute%20for%20Environmental%20Studies | The National Institute for Environmental Studies (NIES:国立環境研究所, Kokuritsu-Kankyō kenkyūsho) was established in 1974 as a focal point for environmental research in Japan. In 2001 it became an Independent Administrative Institution.
NIES is organised into eight centers, each of which is subdivided into a further number of sections responsible for different specializations within the broader field to which they belong.
The eight centers are responsible for research in eight different fields, with programs dedicated to these research areas.
History
July 1971 Environment Agency established
November 1971 NIES Founding Committee established
March 1974 National Institute for Environmental Studies established
April 1985 Emperor Showa visits NIES
July 1990 Restructuring of NIES to include global environmental research
October 1990 Center for Global Environmental Research established
January 2001 Environment Agency becomes Ministry of the Environment.
Waste Management Division established at NIES
April 2001 NIES established as an incorporated administrative agency.
First five-year plan (2001–2005) commences
See also
List of Independent Administrative Institutions (Japan)
References
External links
Official web site of NIES
Research institutes in Japan
International research institutes
Organizations established in 2001
Independent Administrative Institutions of Japan
Environmental research institutes
Environmental studies institutions in Japan
2001 establishments in Japan | National Institute for Environmental Studies | Environmental_science | 241 |
9,421,904 | https://en.wikipedia.org/wiki/Stream%20thrust%20averaging | In fluid dynamics, stream thrust averaging is a process used to convert three-dimensional flow through a duct into one-dimensional uniform flow. It makes the assumptions that the flow is mixed adiabatically and without friction. However, due to the mixing process, there is a net increase in the entropy of the system. Although there is an increase in entropy, the stream thrust averaged values are more representative of the flow than a simple average as a simple average would violate the second law of thermodynamics.
Equations for a perfect gas
Stream thrust:

F = \int \left( \rho \mathbf{V} \cdot d\mathbf{A} \right) \left( \mathbf{V} \cdot \hat{f} \right) + \int p \, dA

Mass flow:

\dot{m} = \int \rho \mathbf{V} \cdot d\mathbf{A}

Stagnation enthalpy:

H = \frac{1}{\dot{m}} \int \left( h + \frac{\left| V \right|^{2}}{2} \right) \rho \mathbf{V} \cdot d\mathbf{A}
Solutions
Solving these equations for the stream thrust averaged velocity yields two solutions. They must both be analyzed to determine which is the physical solution. One will usually be a subsonic root and the other a supersonic root. If it is not clear which value of velocity is correct, the second law of thermodynamics may be applied.
Second law of thermodynamics:

\bar{s} - s_{1} = c_{p} \ln\left(\frac{\bar{T}}{T_{1}}\right) - R \ln\left(\frac{\bar{p}}{p_{1}}\right) > 0

The absolute entropy values \bar{s} and s_{1} are unknown and may be dropped from the formulation, since the difference can be evaluated from the temperature and pressure ratios alone. The value of entropy is not necessary, only that the change is positive.
One possible, unphysical solution for the stream thrust averaged velocity yields a negative entropy change. Another method of determining the proper solution is to take a simple average of the velocity and determine which root is closer to that simple average.
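For a perfect gas, the three averaged equations reduce to a quadratic in the averaged velocity; the sketch below solves it and, as a check, recovers the input velocity of a uniform subsonic stream as the subsonic root. The gas properties and flow values are illustrative assumptions.

import math

def stream_thrust_averaged_velocity(F, mdot, H, R=287.05, cp=1004.675):
    # Combining F = mdot*U + pbar*A, mdot = rhobar*U*A, pbar = rhobar*R*Tbar and
    # H = cp*Tbar + U**2/2 (the duct area cancels) gives
    #   mdot*(1 - R/(2*cp))*U**2 - F*U + mdot*R*H/cp = 0.
    a = mdot * (1.0 - R / (2.0 * cp))
    b = -F
    c = mdot * R * H / cp
    disc = b * b - 4.0 * a * c
    u1 = (-b - math.sqrt(disc)) / (2.0 * a)
    u2 = (-b + math.sqrt(disc)) / (2.0 * a)
    return min(u1, u2), max(u1, u2)  # (subsonic root, supersonic root)

# Uniform air stream at 300 K, Mach 0.5, 1 atm through a 0.1 m^2 duct (assumed values).
T, p, A, U = 300.0, 101325.0, 0.1, 173.6
rho = p / (287.05 * T)
mdot = rho * U * A
F = mdot * U + p * A
H = 1004.675 * T + 0.5 * U ** 2
print(stream_thrust_averaged_velocity(F, mdot, H))  # subsonic root is close to 173.6 m/s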
References
Equations of fluid dynamics
Fluid dynamics | Stream thrust averaging | Physics,Chemistry,Engineering | 272 |
4,815,512 | https://en.wikipedia.org/wiki/Toxicity%20class | Toxicity class refers to a classification system for pesticides that has been created by a national or international government-related or -sponsored organization. It addresses the acute toxicity of agents such as soil fumigants, fungicides, herbicides, insecticides, miticides, molluscicides, nematicides, or rodenticides.
General considerations
Assignment to a toxicity class is based typically on results of acute toxicity studies such as the determination of values in animal experiments, notably rodents, via oral, inhaled, or external application. The experimental design measures the acute death rate of an agent. The toxicity class generally does not address issues of other potential harm of the agent, such as bioaccumulation, issues of carcinogenicity, teratogenicity, mutagenic effects, or the impact on reproduction.
Regulating agencies may require that packaging of the agent be labeled with a signal word, a specific warning label to indicate the level of toxicity.
By jurisdiction
World Health Organization
The World Health Organization (WHO) names four toxicity classes:
Class I – a: extremely hazardous
Class I – b: highly hazardous
Class II: moderately hazardous
Class III: slightly hazardous
The system is based on LD50 determination in rats: an oral solid agent with an LD50 of 5 mg/kg body weight or less is Class Ia, 5–50 mg/kg is Class Ib, 50–2000 mg/kg is Class II, and more than 2000 mg/kg is Class III. Values may differ for liquid oral agents and dermal agents.
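A minimal sketch of mapping an oral LD50 value (mg per kg of body weight, solid agent) onto these WHO classes is shown below, using the thresholds quoted above; real classification also depends on the physical state of the agent and the route of exposure, so this is illustrative only.

def who_toxicity_class(oral_ld50_mg_per_kg):
    # Thresholds for oral solid agents, as quoted in the text above.
    if oral_ld50_mg_per_kg <= 5:
        return "Class Ia (extremely hazardous)"
    if oral_ld50_mg_per_kg <= 50:
        return "Class Ib (highly hazardous)"
    if oral_ld50_mg_per_kg <= 2000:
        return "Class II (moderately hazardous)"
    return "Class III (slightly hazardous)"

print(who_toxicity_class(30))    # Class Ib (highly hazardous)
print(who_toxicity_class(500))   # Class II (moderately hazardous)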
European Union
There are eight toxicity classes in the European Union's classification system, which is regulated by Directive 67/548/EEC:
Class I: very toxic
Class II: toxic
Class III: harmful
Class IV: corrosive
Class V: irritant
Class VI: sensitizing
Class VII: carcinogenic
Class VIII: mutagenic
Very toxic and toxic substances are marked by the European toxicity symbol.
India
The Indian standardized system of toxicity labels for pesticides uses a 4-color system (red, yellow, blue, green) to plainly label containers with the toxicity class of the contents.
United States
The United States Environmental Protection Agency (EPA) uses four toxicity classes in its toxicity category rating. Classes I to III are required to carry a signal word on the label. Pesticides are regulated in the United States primarily by the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA).
Toxicity class I
most toxic;
requires signal word: "Danger-Poison", with skull and crossbones symbol, possibly followed by:
"Fatal if swallowed", "Poisonous if inhaled", "Extremely hazardous by skin contact--rapidly absorbed through skin", or "Corrosive--causes eye damage and severe skin burns"
Class I materials are estimated to be fatal to an adult human at a dose of less than 5 grams (less than a teaspoon).
Toxicity class II
moderately toxic
signal word: "Warning", possibly followed by:
"Harmful or fatal if swallowed", "Harmful or fatal if absorbed through the skin", "Harmful or fatal if inhaled", or "Causes skin and eye irritation"
Class II materials are estimated to be fatal to an adult human at a dose of 5 to 30 grams.
Toxicity class III
slightly toxic
Signal word: Caution, possibly followed by:
"Harmful if swallowed", "May be harmful if absorbed through the skin", "May be harmful if inhaled", or "May irritate eyes, nose, throat, and skin"
Class III materials are estimated to be fatal to an adult human at some dose in excess of 30 grams.
Toxicity class IV
practically nontoxic
no signal word required since 2002
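As an illustrative summary of the estimated fatal adult doses quoted above (a sketch, not an EPA procedure; actual category assignment uses LD50, LC50, and irritation data, and Category IV is omitted because no dose figure is given):

def epa_signal_word(estimated_fatal_adult_dose_g):
    # Map the estimated fatal dose for an adult human (in grams) to the
    # toxicity class and signal word described above.
    if estimated_fatal_adult_dose_g < 5:
        return "Toxicity class I", "Danger-Poison"
    if estimated_fatal_adult_dose_g <= 30:
        return "Toxicity class II", "Warning"
    return "Toxicity class III", "Caution"

For example, epa_signal_word(12) returns ("Toxicity class II", "Warning").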
General versus restricted use
Furthermore, the EPA classifies pesticides into those that anybody can apply (general use pesticides) and those that must be applied by or under the supervision of a certified applicator (restricted use pesticides). Application of restricted use pesticides requires that a record of the application be kept.
See also
Dangerous goods
Hazard symbol
Globally Harmonized System
References
Toxicology
Pesticides | Toxicity class | Biology,Environmental_science | 865 |
7,530,642 | https://en.wikipedia.org/wiki/Continuous%20Media%20Markup%20Language | Continuous Media Markup Language (CMML) is to audio or video what HTML is to text. CMML is essentially a timed text codec. It allows file creators to structure a time-continuously sampled data file by dividing it into temporal sections (also called clips), and provides these clips with some additional information. This information is HTML-like and is essentially a textual representation of the audio or video file. CMML enables textual searches on these otherwise binary files.
CMML is appropriate for use with all Ogg media formats, to provide subtitles and timed metadata.
CMML is deprecated; the Xiph.Org Foundation recommends using Kate instead.
Example of CMML Content
<cmml>
  <stream timebase="0">
    <import src="galaxies.ogv" contenttype="video/ogg"/>
  </stream>
  <head>
    <title>Hidden Galaxies</title>
    <meta name="author" content="CSIRO"/>
  </head>
  <clip id="findingGalaxies" start="15">
    <a href="http://www.aao.gov.au/galaxies.anx#radio">
      Related video on detection of galaxies
    </a>
    <img src="galaxy.jpg"/>
    <desc>What's out there?</desc>
    <meta name="KEYWORDS" content="Radio Telescope"/>
  </clip>
</cmml>
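As an illustration of how such markup can be consumed for textual search (a sketch, not part of any CMML specification; the file name galaxies.cmml is assumed to hold the example above), Python's standard xml.etree.ElementTree module can extract the description and keywords of each clip:

import xml.etree.ElementTree as ET

# Parse the (assumed) file containing the example document above.
root = ET.parse("galaxies.cmml").getroot()

for clip in root.findall("clip"):
    desc = clip.findtext("desc", default="")
    keywords = [m.get("content", "")
                for m in clip.findall("meta")
                if m.get("name", "").lower() == "keywords"]
    print(clip.get("id"), clip.get("start"), desc, keywords)

Run against the example, this prints "findingGalaxies 15 What's out there? ['Radio Telescope']".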
References
Open formats
XML-based standards
Xiph.Org projects
Subtitle file formats | Continuous Media Markup Language | Technology | 356 |
31,276,757 | https://en.wikipedia.org/wiki/Yao%20graph | In computational geometry, the Yao graph, named after Andrew Yao, is a kind of geometric spanner, a weighted undirected graph connecting a set of geometric points with the property that, for every pair of points in the graph, their shortest path has a length that is within a constant factor of their Euclidean distance.
The basic idea underlying the two-dimensional Yao graph is to surround each of the given points by equally spaced rays, partitioning the plane into sectors with equal angles, and to connect each point to its nearest neighbor in each of these sectors. Associated with a Yao graph is an integer parameter k, the number of rays and sectors described above; larger values of k produce closer approximations to the Euclidean distance. The stretch factor is at most 1/(cos θ − sin θ), where θ = 2π/k is the angle of the sectors. The same idea can be extended to point sets in more than two dimensions, but the number of sectors required grows exponentially with the dimension.
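A brute-force sketch of this construction (illustrative only; the function name and the quadratic-time approach are assumptions, and practical constructions use faster per-sector nearest-neighbor queries):

import math

def yao_graph(points, k=6):
    # points: list of (x, y) tuples.  Returns directed edges (i, j) meaning
    # point j is the nearest neighbor of point i within one of i's k sectors.
    sector_angle = 2 * math.pi / k
    edges = set()
    for i, (xi, yi) in enumerate(points):
        best = [(math.inf, None)] * k   # (distance, neighbor index) per sector
        for j, (xj, yj) in enumerate(points):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            sector = int((math.atan2(dy, dx) % (2 * math.pi)) / sector_angle) % k
            d = math.hypot(dx, dy)
            if d < best[sector][0]:
                best[sector] = (d, j)
        edges.update((i, j) for d, j in best if j is not None)
    return edges

For example, yao_graph([(0, 0), (1, 0), (0, 1), (2, 2)], k=6) keeps, for each point, at most one outgoing edge per sector.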
Andrew Yao used these graphs to construct high-dimensional Euclidean minimum spanning trees.
Software for drawing Yao graphs
Cone-based Spanners in Computational Geometry Algorithms Library (CGAL)
See also
Theta graph
Semi-Yao graph
References
Computational geometry
Geometric graph theory | Yao graph | Mathematics | 237 |