Columns: id (int64, 39 to 79M) · url (string, 31-227 chars) · text (string, 6-334k chars) · source (string, 1-150 chars) · categories (list, 1-6 items) · token_count (int64, 3-71.8k) · subcategories (list, 0-30 items)
8,561,726
https://en.wikipedia.org/wiki/The%20Design%20of%20Everyday%20Things
The Design of Everyday Things is a best-selling book by cognitive scientist and usability engineer Donald Norman. Originally published in 1988 with the title The Psychology of Everyday Things, it is often referred to by the initialisms POET and DOET. A new preface was added in 2002, and a revised and expanded edition was published in 2013. The book's premise is that design serves as the communication between object and user, and it discusses how to optimize that conduit of communication in order to make the experience of using the object pleasurable. It argues that although people are often keen to blame themselves when objects appear to malfunction, the fault lies not with the user but with the lack of intuitive guidance that should be present in the design. Norman uses case studies to describe the psychology behind what he deems good and bad design, and proposes design principles. The book spans several disciplines, including behavioral psychology, ergonomics, and design practice. Contents In the book, Norman introduced the term affordance as it applied to design, borrowing James J. Gibson's concept from ecological psychology. In the 2013 revised edition, he also introduced the concept of signifiers to clarify his definition of affordances. Examples of affordances are doors that can be pushed or pulled: affordances are the possible interactions between an object and its user. Examples of corresponding signifiers are flat plates on doors meant to be pushed, small finger-size push-buttons, and long rounded bars we intuitively use as handles. As Norman used the terms, a door affords pushing or pulling; the plate or button signals that it is meant to be pushed, while the bar or handle signals pulling. Norman discussed door handles at length. He also popularized the term user-centered design, which he had previously used in User-Centered System Design in 1986. He used the term to describe design based on the needs of the user, leaving aside what he deemed secondary issues, like aesthetics. User-centered design involves simplifying the structure of tasks, making things visible, getting the mapping right, exploiting the powers of constraint, designing for error, explaining affordances, and the seven stages of action. He went to great lengths to define and explain these terms in detail, giving examples that follow and that go against the advice given, and pointing out the consequences. Other topics of the book include: The Psychopathology of Everyday Things The Psychology of Everyday Actions Knowledge in the Head and in the World Knowing What to Do To Err Is Human Human-Centered Design The Design Challenge Seven stages of action The seven stages of action are described in chapter two of the book. They comprise four stages of execution and three stages of evaluation: Forming the goal Forming the intention Specifying an action Executing the action Perceiving the state of the world Interpreting the state of the world Evaluating the outcome Building up the Stages The history behind the action cycle starts with a conference in Italy attended by Donald Norman. This excerpt is taken from The Design of Everyday Things: I am in Italy at a conference. I watch the next speaker attempt to thread a film onto a projector that he had never used before. He puts the reel into place, then takes it off and reverses it. Another person comes to help. Jointly they thread the film through the projector and hold the free end, discussing how to put it on the takeup reel. 
Two more people come over to help and then another. The voices grow louder, in three languages: Italian, German, and English. One person investigates the controls, manipulating each and announcing the result. Confusion mounts. I can no longer observe all that is happening. The conference organizer comes over. After a few moments he turns and faces the audience, who had been waiting patiently in the auditorium. "Ahem," he says, "is anybody expert in projectors?" Finally, fourteen minutes after the speaker had started to thread the film (and eight minutes after the scheduled start of the session) a blue-coated technician appears. He scowls, then promptly takes the entire film off the projector, rethreads it, and gets it working. Norman pondered the reasons that make something like threading a projector difficult to do. To examine this, he broke down the structure of an action. To get something done, one starts with a notion of what is wanted – the goal to be achieved. Then something is done to the world, i.e., action is taken to move oneself or to manipulate someone or something. Finally, one checks whether the goal was achieved. This led to the formulation of the stages of execution and evaluation. Stages of Execution Execution formally means to perform or do something. Norman explains that a person sitting in an armchair reading a book at dusk might need more light as the room grows dimmer. To get more light (the goal), the reader needs to switch on a lamp. To do this, one must specify how to move one's body, how to stretch to reach the light switch, and how to extend one's finger to push the button. The goal has to be translated into an intention, which in turn has to be made into an action sequence. Thus the formulation of the stages of execution: Start at the top with the goal, the state that is to be achieved. The goal is translated into an intention to do some action. The intention must be translated into a set of internal commands, an action sequence that can be performed to satisfy the intention. The action sequence is still a mental event: nothing happens until it is executed, performed upon the world. Stages of Evaluation Evaluation formally means to examine and calculate. Norman explains that after turning on the light, we evaluate whether it has actually turned on. A careful judgement is then passed on how the light has affected our world, i.e., the room in which the person is sitting. The formulation of the stages of evaluation can be described as: Evaluation starts with our perception of the world. This perception must then be interpreted according to our expectations. Then it is compared (evaluated) with respect to both our intentions and our goals. Gulf of execution The difference between the intentions and the allowable actions is the gulf of execution. "Consider the movie projector example: one problem resulted from the Gulf of Execution. The person wanted to set up the projector. Ideally, this would be a simple thing to do. But no, a long, complex sequence was required. It wasn't at all clear what actions had to be done to accomplish the intentions of setting up the projector and showing the film." The gulf of execution is the gap between a user's goal for action and the means to execute that goal. 
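The execute-evaluate loop above can be made concrete with a short sketch. This is illustrative scaffolding only: the stage structure and the lamp scenario come from the book, while the brightness numbers and function names are invented for the example (Python):

    # Norman's seven-stage action cycle, applied to his lamp example.
    # The stage structure is from the book; the numeric "world model"
    # below is invented scaffolding for illustration.
    brightness = 0.2                  # the world: a dim room at dusk
    goal = 0.8                        # 1. forming the goal: "get more light"

    def press_lamp_switch():          # 2-4. intention -> action sequence -> execution
        global brightness             # (crossing the gulf of execution)
        brightness = 0.9

    press_lamp_switch()
    perceived = brightness            # 5. perceiving the state of the world
    interpreted = perceived >= goal   # 6. interpreting it against expectations
    print("goal met:", interpreted)   # 7. evaluating the outcome -> True
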
One of the primary goals of usability is to reduce the gulf of execution: removing roadblocks and steps that cause extra thinking and actions, that distract the user's attention from the intended task, that interrupt the flow of work, and that decrease the chance of completing the task successfully. This can be illustrated with a VCR problem. Imagine that a user would like to record a television show. They see the solution to this problem as simply pressing the Record button. However, in reality, to record a show on a VCR, several actions must be taken: Press the record button. Specify the time of recording, usually involving several steps to change the hour and minute settings. Select the channel to record - either by entering the channel's number or by selecting it with up/down buttons. Save the recording settings, perhaps by pressing an "OK", "menu", or "enter" button. The difference between the user's perceived execution actions and the required actions is the gulf of execution. Gulf of evaluation The gulf of evaluation reflects the amount of effort that the person must exert to interpret the physical state of the system and to determine how well the expectations and intentions have been met. It is the degree to which the system or artifact provides representations that can be directly perceived and interpreted in terms of the expectations and intentions of the user. Put differently, the gulf of evaluation is the difficulty of assessing the state of the system and how well the artifact supports the discovery and interpretation of that state. In the words of the book, "The gulf is small when the system provides information about its state in a form that is easy to get, is easy to interpret, and matches the way the person thinks of the system". "In the movie projector example there was also a problem with the Gulf of Evaluation. Even when the film was in the projector, it was difficult to tell if it had been threaded correctly." The gulf of evaluation is the gap between an external stimulus and the moment a person understands what it means: the psychological gap that must be crossed to interpret a user interface display, following the steps interface → perception → interpretation → evaluation. Both "gulfs" were first mentioned in Donald Norman's 1986 book User Centered System Design: New Perspectives on Human-Computer Interaction. Usage as Design Aids The seven-stage structure is used as a design aid, acting as a basic checklist of questions for designers to ensure that the gulfs of execution and evaluation are bridged. The seven stages can be broken down into four main principles of good design: Visibility - By looking, the user can tell the state of the device and the alternatives for action. A good conceptual model - The designer provides a good conceptual model for the user, with consistency in the presentation of operations and results and a coherent, consistent system image. Good mappings - It is possible to determine the relationships between actions and results, between the controls and their effects, and between the system state and what is visible. Feedback - The user receives full and continuous feedback about the results of actions. Reception After a group of industrial designers felt affronted by an early draft, Norman rewrote the book to make it more sympathetic to the profession. The book was originally published with the title The Psychology of Everyday Things. 
In his preface to the 2002 edition, Norman stated that while his academic peers liked the original title, he believed the new title better conveyed the content of the book and better attracted interested readers. See also Emotional Design Seven stages of action User-centered design Industrial design Interaction design Principles of user interface design References Further reading Books about cognition Industrial design Business books 1988 non-fiction books
The Design of Everyday Things
[ "Engineering" ]
2,157
[ "Industrial design", "Design engineering", "Design" ]
8,562,026
https://en.wikipedia.org/wiki/Electroluminescent%20wire
Electroluminescent wire (often abbreviated as EL wire) is a thin copper wire coated in a phosphor that produces light through electroluminescence when an alternating current is applied to it. It can be used in a wide variety of applications—vehicle and structure decoration, safety and emergency lighting, toys, clothing, etc.—much as rope light or Christmas lights are often used. Unlike those types of strand lights, EL wire is not a series of points but produces a continuous, unbroken line of visible light. Its thin diameter makes it flexible and ideal for use in a variety of applications such as clothing or costumes. Structure EL wire's construction consists of five major components. First is a solid-copper wire core coated with phosphor. A very fine wire or pair of wires is spiral-wound around the phosphor-coated copper core, and the outer indium tin oxide (ITO) conductive coating is then evaporated on. This fine wire is electrically isolated from the copper core. Surrounding this "sandwich" of copper core, phosphor, and fine copper wire is a clear PVC sleeve. Finally, surrounding this thin, clear PVC sleeve is another clear, colored, translucent, or fluorescent PVC sleeve. An alternating electric potential of approximately 90 to 120 volts at about 1000 Hz is applied between the copper core wire and the fine wire that surrounds it. The wire can be modeled as a coaxial capacitor with about 1 nF of capacitance per 30 cm, and the rapid charging and discharging of this capacitor excites the phosphor to emit light. The colors of light that can be produced efficiently by phosphors are limited, so many types of wire use an additional fluorescent organic dye in the clear PVC sleeve to produce the final result. These organic dyes produce colors like red and purple when excited by the blue-green light of the core. A resonant oscillator is typically used to generate the high-voltage drive signal. Because of the capacitive load of the EL wire, using an inductive (coiled) transformer makes the driver a very efficient tuned LC oscillator. The efficiency of EL wire is very high, and thus up to a hundred meters of EL wire can be driven by AA batteries for several hours. In recent years, the LC circuit has been replaced for some applications with a single-chip switched-capacitor inverter IC such as the Supertex HV850; this can run 30 cm of angel-hair wire at high efficiency and is suitable for solar lanterns and safety applications. The other advantage of these chips is that the control signals can be derived from a microcontroller, so brightness and colour can be varied programmatically; this can be controlled by using external sensors that sense, for example, battery state, ambient temperature, or ambient light. EL wire - in common with other types of EL devices - does have limitations: at high frequency it dissipates a lot of heat, which can lead to breakdown and loss of brightness over time. Because the wire is unshielded and typically operates at a relatively high voltage, EL wire can produce high-frequency interference (corresponding to the frequency of the oscillator) that can be picked up by sensitive audio equipment, such as guitar pickups. There is also a voltage limit: typical EL wire breaks down at around 180 volts peak-to-peak, so if using an unregulated transformer, back-to-back zener diodes and series current-limiting resistors are essential. 
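The figures quoted above are enough for a back-of-envelope electrical model. The sketch below treats a strand as a pure coaxial capacitor; the 3 m length, the 110 V level, and the sinusoidal drive are illustrative assumptions, while the capacitance per length and the frequency come from the text (Python):

    import math

    CAP_PER_METRE = 1e-9 / 0.30             # "about 1 nF per 30 cm"
    length_m = 3.0                          # assumed strand length
    f_hz = 1000.0                           # drive frequency from the text
    v_rms = 110.0                           # mid-range of the quoted 90-120 V

    c = CAP_PER_METRE * length_m            # total capacitance in farads
    i_rms = 2 * math.pi * f_hz * c * v_rms  # current into a capacitive load
    print(f"{c * 1e9:.0f} nF, {i_rms * 1e3:.1f} mA rms")  # ~10 nF, ~6.9 mA

The current is almost purely reactive, which is why a tuned LC driver that recycles it can run metres of wire from AA cells.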
In addition, EL sheet and wire can sometimes be used as a touch sensor, since compressing the capacitor will change its value. Sequencers EL wire sequencers can flash electroluminescent wire in sequential patterns. EL wire requires a low-power, high-frequency driver to cause the wire to illuminate. Most EL wire drivers simply light up one strand of EL wire in a constant-on mode, and some drivers may additionally have a blink or strobe mode. A sound-activated driver will light EL wire in synchronization with music, speech, or other ambient sound, but an EL wire sequencer will allow multiple lengths of EL wire to be flashed in a desired sequence. The lengths of EL wire can all be the same color or a variety of colors. One example is a sign that displays a telephone number, where the numbers are formed using different colors of EL wire: there are ten numbers, each of which is connected to a different channel of the EL wire sequencer. Like EL wire drivers, sequencers are rated to drive (or power) a range or specific length of EL wire. For example, with a sequencer rated for 1.5 to 14 meters (5 to 45 feet), if less than 1.5 m is used there is a risk of burning out the sequencer, and if more than 14 m is used the EL wire will not light as brightly as intended. There are commercially available EL wire sequencers capable of lighting three, four, five, or ten lengths of EL wire. There are professional and experimental sequencers with many more than ten channels, but for most applications ten channels are enough. Sequencers usually have options for changing the speed, reversing, changing the order of the sequence, and sometimes for changing whether the first wires remain lit or go off as the rest of the wires in the sequence are lit. EL wire sequencers tend to be smaller than a pack of cigarettes, and most are powered by batteries. This versatility lends itself to the sequencers' use at nighttime events where mains electricity is not available. Applications By arranging each strand of EL wire into a shape slightly different from the previous one, it is possible to create animations using EL wire sequencers. EL wire sequencers are also used for costumes and have been used to create animations on various items such as kimono, purses, neckties, and motorcycle tanks. They are increasingly popular among artists, dancers, maker culture, and similar creative communities, as exhibited at the annual Burning Man alt-culture festival. References US Patent 5,753,381, Electroluminescent Filament Notes External links How Electroluminescent (EL) Wire Works, by Joanna Burgess, HowStuffWorks Display technology Lighting Luminescence Wire
Electroluminescent wire
[ "Chemistry", "Engineering" ]
1,328
[ "Electronic engineering", "Luminescence", "Molecular physics", "Display technology" ]
8,562,929
https://en.wikipedia.org/wiki/Virtual%20newscaster
A virtual newscaster, also called a virtual host, virtual presenter, virtual teleprompter, or virtual anchor, is a computer-generated character created for the purpose of reading news from a website. While Ananova is often credited with being the first virtual newscaster on the web, it went off-line in 2004. Delta Seven, created by Bruce C. Pippin, uses Microsoft Agent technology to deliver real-time changes in news, weather, sports, and stock market quotes in less than seven minutes. Advantages of having such a character on a website include the more familiar effect it has on viewers. Also, as newscasters are typically coupled with an audio reading of any article they are featured on, visually impaired or illiterate persons can benefit from this method of information delivery. External links Delta Seven Computer animation
Virtual newscaster
[ "Technology" ]
168
[ "Computing stubs", "World Wide Web stubs" ]
8,562,999
https://en.wikipedia.org/wiki/Rippling
In computer science, more particularly in automated theorem proving, rippling refers to a group of meta-level heuristics, developed primarily in the Mathematical Reasoning Group in the School of Informatics at the University of Edinburgh, and most commonly used to guide inductive proofs in automated theorem proving systems. Rippling may be viewed as a restricted form of rewrite system, where special object-level annotations are used to ensure fertilization upon the completion of rewriting, with a measure-decreasing requirement ensuring termination for any set of rewrite rules and expression. History Raymond Aubin was the first person to use the term "rippling out", whilst working on his 1976 PhD thesis at the University of Edinburgh. He recognised a common pattern of movement during the rewriting stage of inductive proofs. Alan Bundy later turned this concept on its head by defining rippling to be this pattern of movement, rather than a side effect. Since then, the terms "rippling sideways", "rippling in" and "rippling past" have been coined, so the term was generalised to rippling. Rippling continues to be developed at Edinburgh, and elsewhere, as of 2007. Rippling has been applied to many problems traditionally viewed as hard in the inductive theorem proving community, including Bledsoe's limit theorems and a proof of the Gordon microprocessor, a miniature computer developed by Michael J. C. Gordon and his team at Cambridge. Overview Very often, when attempting to prove a proposition, we are given a source expression and a target expression, which differ only by the inclusion of a few extra syntactic elements. This is especially true in inductive proofs, where the given expression is taken to be the inductive hypothesis and the target expression the inductive conclusion. Usually the differences between the hypothesis and conclusion are minor, perhaps the inclusion of a successor function (e.g., +1) around the induction variable. At the start of rippling, the differences between the two expressions, known as wave-fronts in rippling parlance, are identified. Typically these differences prevent the completion of the proof and need to be "moved away". The target expression is annotated to distinguish the wave-fronts (differences) from the skeleton (common structure) shared by the two expressions. Special rules, called wave rules, can then be used in a terminating fashion to manipulate the target expression until the source expression can be used to complete the proof. Example We aim to show that the addition of natural numbers is commutative. This is an elementary property, and the proof is by routine induction; nevertheless, the search space for finding such a proof may become quite large. Typically, the base case of any inductive proof is solved by methods other than rippling, so we concentrate on the step case. Choosing x as the induction variable, the step case is to show s(x) + y = y + s(x) given the hypothesis x + y = y + x. We may also possess several rewrite rules, drawn from lemmas, inductive definitions or elsewhere, that can be used to form wave-rules. Suppose we have three such rewrite rules; these can be annotated to form wave-rules. Note that all these annotated rules preserve the skeleton (x + y = y + x in the first case, and x + y in the second and third). 
Annotating the inductive step case in the same way, we are set to perform rippling. The final rewrite causes all wave-fronts to disappear, and we may then apply fertilization – the application of the inductive hypothesis – to complete the proof. References Further reading Heuristics Automated theorem proving
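For comparison, the same step case can be discharged by hand in Lean 4. This is a sketch of what rippling automates, not the output of a rippling system: the two rewrite steps move the successor wave-front outwards on each side, and the ih step is the fertilization (the lemma names Nat.succ_add and Nat.add_succ are from the Lean 4 core library):

    theorem add_comm' (x y : Nat) : x + y = y + x := by
      induction x with
      | zero => simp        -- base case, handled outside rippling
      | succ n ih =>
        -- goal: succ n + y = y + succ n; the succ around n is the wave-front
        rw [Nat.succ_add]   -- ripple outwards on the left: succ (n + y) = ...
        rw [ih]             -- fertilization: use the hypothesis n + y = y + n
        rw [Nat.add_succ]   -- ripple outwards on the right; skeleton preserved
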
Rippling
[ "Mathematics" ]
770
[ "Mathematical logic", "Computational mathematics", "Automated theorem proving" ]
8,563,310
https://en.wikipedia.org/wiki/Fenchone
Fenchone is an organic compound classified as a monoterpenoid and a ketone. It is a colorless oily liquid. It has a structure and an odor similar to those of camphor. Fenchone is a constituent of absinthe and the essential oil of fennel. Fenchone is used as a flavor in foods and in perfumery. Other names for fenchone include dl-fenchone and (±)-fenchone. It is a mixture of the enantiomers d-fenchone and l-fenchone. Other names for d-fenchone include (+)-fenchone and (1S,4R)-fenchone. Other names for l-fenchone include (−)-fenchone and (1R,4S)-fenchone. The d-fenchone enantiomer occurs in pure form in wild, bitter and sweet fennel plants and seeds, whereas the l-fenchone enantiomer occurs in pure form in wormwood, tansy, and cedarleaf. References Absinthe Ketones Monoterpenes Norbornanes
Fenchone
[ "Chemistry" ]
244
[ "Ketones", "Functional groups" ]
8,563,626
https://en.wikipedia.org/wiki/Stockholm%20Environment%20Institute
Stockholm Environment Institute, or SEI, is a non-profit, independent research and policy institute specialising in sustainable development and environmental issues, with seven affiliate offices around the world. SEI works on climate change, energy systems, water resources, air quality, land use, sanitation, food security, and trade issues with the aim of shifting policy and practice towards sustainability. SEI aims to support decision-making and induce change towards sustainable development around the world by providing knowledge that bridges science and policy in the field of environment and development. History SEI was established in 1989 as an initiative of the Government of Sweden. Activities Programs Ecological Sanitation Research Programme LEAP: Low Emissions Analysis Platform Regional Air Pollution In Developing Countries (RAPDIC) Resources and Energy Analysis Programme (REAP) SIANI Swedish International Agriculture Network Initiative (siani.se) Sustainable Mekong Research Network Programme (SUMERNET) TRASE Transparent supply chains for sustainable economies weADAPT WEAP: Water Evaluation And Planning System Partnerships SEI was one of the organizations that founded the Sustainable Sanitation Alliance in 2007, together with the German development organization GIZ. Organizational structure Executive Directors 1989–1990 Gordon T. Goodman 1991–1995 Michael J. Chadwick 1996–1999 Nicholas C. Sonntag 2000 Bert Bolin (interim Executive Director) 2000 Lars Nilsson (interim Executive Director) 2000–2004 Roger Kasperson 2004–2012 Johan Rockström 2012–2018 Johan L. Kuylenstierna 2018–present Måns Nilsson (Executive Director) Centres SEI operates in seven countries: Sweden, the United States (Stockholm Environment Institute US Center), the United Kingdom, Estonia, Thailand, Kenya, and Colombia. Funding sources The Swedish International Development Cooperation Agency (Sida) is SEI's main donor. SEI also receives funding from development agencies, governments, NGOs, universities, businesses, and financial institutions. For example, the Bill and Melinda Gates Foundation provides funds to SEI in the areas of maternal health and sustainable sanitation. At the SEI Science Forum in 2015, Melinda Gates took part in a discussion of sustainability and gender together with SEI staff to help shape SEI's future research. References External links Official website Environmental research institutes Environmentalism in Sweden International research institutes Research institutes in Sweden Sustainability organizations Organizations established in 1989 1989 establishments in Sweden Organizations based in Stockholm
Stockholm Environment Institute
[ "Environmental_science" ]
467
[ "Environmental research institutes", "Environmental research" ]
8,563,704
https://en.wikipedia.org/wiki/Antistatic%20device
An antistatic device is any device that reduces, dampens, or otherwise inhibits electrostatic discharge (ESD), the buildup or discharge of static electricity. ESD can damage electrical components such as computer hard drives, and can even ignite flammable liquids and gases. Many methods exist for neutralizing static electricity, varying in use and effectiveness depending on the application. Antistatic agents are chemical compounds that can be added to an object, or to the packaging of an object, to help deter the buildup or discharge of static electricity. For the neutralization of static charge in a larger area, such as a factory floor, semiconductor cleanroom, or workshop, antistatic systems may utilize electron emission effects such as corona discharge or photoemission to introduce ions into the area, which combine with and neutralize any electrically charged object. In many situations, sufficient ESD protection can be achieved with electrical grounding. Symbology Various symbols can be found on products, indicating either that the product is electrostatically sensitive, as with sensitive electrical components, or that it offers antistatic protection, as with antistatic bags. Reach symbol The reach symbol of ANSI/ESD standard S8.1-2007 is most commonly seen in applications related to electronics. Its variations consist of a triangle with a reaching hand depicted inside it in negative space. Versions of the symbol often show the hand crossed out, as a warning that the component is ESD sensitive and is not to be touched unless antistatic precautions are taken. Another version of the symbol has the triangle surrounded by an arc. This variant refers to the antistatic protective device, such as an antistatic wrist strap, rather than to the component being protected. It usually does not feature the hand being crossed out, indicating that it makes contact with the component safe. Circle Another common symbol takes the form of a bold circle intersected by three arrows. Originating from a U.S. military standard, it has been adopted industry-wide. It is intended as a depiction of a device or component being breached by static charges, indicated by the arrows. Examples Types of antistatic devices include: Antistatic bag An antistatic bag is a bag used for storing or shipping electronic components which may be prone to damage caused by ESD. Ionizing bar An ionizing bar, sometimes referred to as a static bar, is a type of industrial equipment used for removing static electricity from a production line, to dissipate static cling and other such phenomena that would disrupt the line. It is important in the manufacturing and printing industries, although it can be used in other applications as well. Ionizing bars are most commonly suspended above a conveyor belt or other apparatus in a production line where the product can pass below them; the distance is usually calibrated for the specific application. The bar works by emitting an ionized corona onto the products below it. If a product on the line carries a positive or negative static charge, then as it passes through the ionized aura created by the bar it will attract ions of the opposite charge and become electrically neutral. Antistatic garments Antistatic garments or antistatic clothing can be used to prevent damage to electrical components or to prevent fires and explosions when working with flammable liquids and gases. 
Antistatic garments are used in many industries such as electronics, communications, telecommunications, and defense applications. Antistatic garments have conductive threads in them, creating a wearable version of a Faraday cage. Antistatic garments attempt to shield ESD-sensitive devices from harmful static charges on clothing such as wool, silk, and synthetic fabrics worn by people working with them. For these garments to work properly, they must also be connected to ground with a strap. Most garments are not conductive enough to provide personal grounding, so antistatic wrist and foot straps are also worn. There are three types of static control garments compliant with the ANSI/ESD S20.20-2014 standards: 1) the static control garment, 2) the groundable static control garment, and 3) the groundable static control garment system. Antistatic mat An antistatic floor mat or ground mat is one of a number of antistatic devices designed to help eliminate static electricity. It does this by having a controlled, intermediate resistance: a metal mat would keep parts grounded but would short out exposed parts, while an insulating mat would provide no ground reference and so would not provide grounding. Typical resistance is on the order of 10^5 to 10^8 ohms between points on the mat and to ground. The mat needs to be grounded (earthed), usually accomplished by plugging into the ground line of an electrical outlet. It is important to discharge at a slow rate; therefore, a resistor should be used in grounding the mat. The resistor, as well as allowing high-voltage charges to leak through to ground, also prevents a shock hazard when working with low-voltage parts. Some ground mats allow one to connect an antistatic wrist strap to them. Versions are designed for placement on both the floor and the desk. Antistatic wrist strap An antistatic wrist strap, ESD wrist strap, or ground bracelet is an antistatic device used to safely ground a person working on very sensitive electronic equipment, to prevent the buildup of static electricity on their body, which can result in ESD. It is used in the electronics industry when handling electronic devices which can be damaged by ESD, and also sometimes by people working around explosives, to prevent electric sparks which could set off an explosion. It consists of an elastic band of fabric with fine conductive fibers woven into it, attached to a wire with a clip on the end to connect it to a ground conductor. The fibers are usually made of carbon or carbon-filled rubber, and the strap is bound with a stainless steel clasp or plate. Wrist straps are usually used in conjunction with an antistatic mat on the workbench, or a special static-dissipating plastic laminate on the workbench surface. The wrist strap is usually worn on the nondominant hand (the left wrist for a right-handed person). It is connected to ground through a coiled retractable cable and a 1 megohm resistor, which allows high-voltage charges to leak through but prevents a shock hazard when working with low-voltage parts. Where higher voltages are present, extra resistance (0.75 megohm per 250 V) is added in the path to ground to protect the wearer from excessive currents; this typically takes the form of a 4 megohm resistor in the coiled cable (or, more commonly, a 2 megohm resistor at each end). 
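Those resistor figures can be sanity-checked in a couple of lines. In this sketch the 230 V mains fault and the 1 kV work voltage are assumed scenarios, not values from the text; the 1 megohm strap resistor and the 0.75 megohm per 250 V rule are as quoted above (Python):

    R_STRAP = 1e6                         # the standard 1 megohm strap resistor
    mains_v = 230.0                       # assumed accidental mains contact
    print(mains_v / R_STRAP * 1e3, "mA")  # 0.23 mA through the wearer - small

    work_v = 1000.0                       # assumed higher-voltage workplace
    extra = 0.75e6 * (work_v / 250.0)     # the "0.75 megohm per 250 V" rule
    print(extra / 1e6, "megohm extra")    # 3.0 megohm added in the ground path
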
Wrist straps designed for industrial use usually connect to ground connections built into the workplace, via either a standard 4 mm plug or 10 mm press stud, whereas straps designed for consumer use often have a crocodile clip for the ground connection. In addition to wrist straps, ankle and heel straps are used in industry to bleed away accumulated charge from a body. These devices are usually not tethered to earth ground, but instead incorporate high resistance in their construction, and work by dissipating electrical charge to special floor tiles. Such straps are used when workers need to be mobile in a work area and a grounding cable would get in the way. They are used particularly in an operating theatre, where oxygen or explosive anesthetic gases are used. Some wrist straps are "wireless" or "dissipative", and claim to protect against ESD without needing a ground wire, typically by air ionization or corona discharge. These are widely regarded as ineffective, if not fraudulent, and examples have been tested and shown not to work. Professional ESD standards all require wired wrist straps. See also Electrostatic-sensitive device Antistatic agent Electrostatics Bleeder resistor References Electrostatics Digital electronics Electrical safety
Antistatic device
[ "Engineering" ]
1,598
[ "Electronic engineering", "Digital electronics" ]
8,563,981
https://en.wikipedia.org/wiki/OMDoc
OMDoc (Open Mathematical Documents) is a semantic markup format for mathematical documents. While MathML only covers mathematical formulae and the related OpenMath standard only supports formulae and "content dictionaries" containing definitions of the symbols used in formulae, OMDoc covers the whole range of written mathematics. Coverage OMDoc allows for mathematical expressions on three levels: Object level: formulae, written in Content MathML (the non-presentational subset of MathML), OpenMath, or languages for mathematical logic. Statement level: definitions, theorems, proofs, examples, and the relations between them (e.g. "this proof proves that theorem"). Theory level: a theory is a set of contextually related statements. Theories may import each other, thereby forming a graph. Seen as collections of symbol definitions, OMDoc theories are compatible with OpenMath content dictionaries. On each level, formal syntax and informal natural language can be used, depending on the application. Semantics and Presentation OMDoc is a semantic markup language that allows writing down the meaning of texts about mathematics. In contrast to LaTeX, for example, it is not primarily presentation-oriented: an OMDoc document need not specify what its contents should look like. A conversion to LaTeX and XHTML (with Presentation MathML for the formulae) is possible, though; to this end, the presentation of each symbol can be defined. Applications Today, OMDoc is used in the following settings: E-learning: creation of customized textbooks. Data exchange: OMDoc import and export modules are available for many automated theorem provers and computer algebra systems, and OMDoc is intended to be used for communication between mathematical web services. Document preparation: documents about mathematics can be prepared in OMDoc and later exported to a presentation-oriented format like LaTeX or XHTML+MathML. History OMDoc has been developed by the German mathematician and computer scientist Michael Kohlhase since 1998. So far, there have been the following releases: 1.0 (November 2000) 1.1 (December 2001) 1.2 (July 2006) Future developments It is planned to create the infrastructure for a "semantic web for technology and science" based on OMDoc. To this end, OMDoc is being extended towards sciences other than mathematics. The first result is PhysML, an OMDoc variant extended towards physics. For better integration with other Semantic Web applications, an OWL ontology of OMDoc is under development, as well as an export facility to RDF. See also Mathematical knowledge management References Michael Kohlhase (2006): An Open Markup Format for Mathematical Documents (Version 1.2). Lecture Notes in Artificial Intelligence, no. 4180. Springer Verlag, Heidelberg. External links Wiki for OMDoc and related projects Markup languages Mathematical markup languages Semantic Web XML-based standards
OMDoc
[ "Mathematics", "Technology" ]
608
[ "Computer standards", "Mathematical markup languages", "XML-based standards" ]
8,564,033
https://en.wikipedia.org/wiki/Ronchi%20test
In optical testing, a Ronchi test is a method of determining the surface shape (figure) of a mirror used in telescopes and other optical devices. Description In 1923, the Italian physicist Vasco Ronchi published a description of the eponymous Ronchi test, a variation of the Foucault knife-edge test which uses simple equipment to test the quality of optics, especially concave mirrors. A "Ronchi tester" consists of: A light source A diffuser A Ronchi grating A Ronchi grating consists of alternating dark and clear stripes. One design is a small frame with several evenly spaced fine wires attached. Light is emitted through the Ronchi grating (or a single slit), reflected by the mirror being tested, then passes through the Ronchi grating again and is observed by the person doing the test. The observer's eye is placed close to the centre of curvature of the mirror under test, looking at the mirror through the grating; the Ronchi grating is a short distance (less than 2 cm) closer to the mirror. The observer sees the mirror covered in a pattern of stripes that reveals the shape of the mirror. The pattern is compared to a mathematically generated diagram (usually done on a computer today) of what it should look like for a given figure. Inputs to the program are the line frequency of the Ronchi grating, the focal length and diameter of the mirror, and the figure required. If the mirror is spherical, the pattern consists of straight lines. Applications The Ronchi test is used in the testing of mirrors for reflecting telescopes, especially in the field of amateur telescope making. It is much faster to set up than the standard Foucault knife-edge test. The Ronchi test differs from the knife-edge test in requiring a specialized target (the Ronchi grating, which amounts to a periodic series of knife edges) and in being more difficult to interpret. The procedure offers a quick evaluation of the mirror's shape and condition. It readily identifies a "turned edge" (a rolled-down outer diameter of the mirror), a common fault that can develop in objective mirror making. The figure quality of a convex lens may be visually tested using a similar principle: the grating is moved around the focal point of the lens while viewing the virtual image through the opposite side. Distortions in the lens surface figure then appear as asymmetries in the periodic grating image. Footnotes References Scienceworld - Wolfram.com - Ronchi test The ATM's Workshop Matching Ronchi Test Optics Telescopes
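The mathematically generated comparison diagram mentioned under Description can be sketched in a few lines. The model below is deliberately simplified (center-of-curvature testing, small-angle approximation, ideal square-wave grating), and every parameter value is an illustrative assumption rather than a value from the text (Python):

    import math

    R = 2000.0          # radius of curvature, mm (e.g. a 200 mm f/5 mirror)
    D = 200.0           # mirror diameter, mm
    K = -1.0            # conic constant: 0 gives a sphere (straight bands)
    LINES_PER_MM = 4.0  # grating line frequency
    OFFSET = -5.0       # grating position relative to the paraxial centre
                        # of curvature, mm (negative = inside focus)

    def clear_band(x, y):
        """True if the ray from mirror zone (x, y) passes a clear stripe."""
        r2 = x * x + y * y
        dl = -K * r2 / (2.0 * R)   # longitudinal aberration of this zone
        t = x * (dl - OFFSET) / R  # ray height where it meets the grating
        return math.cos(2.0 * math.pi * LINES_PER_MM * t) > 0.0

    for row in range(-15, 16):     # coarse ASCII ronchigram of the mirror face
        y = row * D / 30.0
        line = ""
        for col in range(-30, 31):
            x = col * D / 60.0
            if x * x + y * y > (D / 2.0) ** 2:
                line += " "        # outside the mirror aperture
            else:
                line += "#" if clear_band(x, y) else "."
        print(line)

With K = 0 the bands come out straight, as the text says they should for a spherical mirror; K = -1 bows them, mimicking a paraboloid tested at its centre of curvature.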
Ronchi test
[ "Physics", "Chemistry", "Astronomy" ]
518
[ "Applied and interdisciplinary physics", "Optics", "Telescopes", " molecular", "Astronomical instruments", "Atomic", " and optical physics" ]
8,564,378
https://en.wikipedia.org/wiki/Genesi
Genesi is an international group of technology and consulting companies in the United States, Mexico and Germany. It is most widely known for designing and manufacturing ARM architecture and Power ISA-based computing devices. The Genesi Group consists of Genesi USA Inc., Genesi Americas LLC, Genesi Europe UG, Red Efika, bPlan GmbH and the affiliated non-profit organization Power2People. Genesi is an official Linaro partner and its software development team has been instrumental in moving Linux on the ARM architecture towards a wider adoption of the hard-float application binary interface, which is incompatible with most existing applications but provides enormous performance gains for many use cases. Products The main products of Genesi are ARM-based computers that were designed to be inexpensive, quiet and highly energy efficient, and a custom Open Firmware compliant firmware. All products can run a multitude of operating systems. Current products Aura - A comprehensive abstraction layer for embedded and desktop devices, with UEFI and IEEE1275. Desktop systems with AGP or PCI/PCI Express may take advantage of an embedded x86/BIOS emulator providing boot functionality for standard graphics cards. EFIKA MX53 EFIKA MX6 Discontinued products EFIKA MX Smarttop - A highly energy efficient and compact computing device (complete system) powered by a Freescale ARM iMX515 CPU. EFIKA MX Smartbook - A 10" smartbook (complete system) powered by the Freescale ARM iMX515 CPU. High Density Blade - PowerPC based high density blade server. Home Media Center - PowerPC based digital video recorder. EFIKA 5200B - A small Open Firmware-based motherboard powered by a Freescale MPC5200B SoC processor with 128 MB RAM, a 44-pin ATA connector for a 2.5" hard drive, sound in/out, USB, Ethernet, serial port, and a PCI slot. Open Client - thin clients available with Freescale's Power Architecture or ARM SoCs. Pegasos - An Open Firmware-based MicroATX motherboard powered by a PowerPC G3/G4 microprocessor, featuring PCI slots, AGP, Ethernet, USB, DDR and FireWire. Open Desktop Workstation – A Pegasos II based computer featuring a Freescale PowerPC 7447 processor. Complete specifications for the hardware are available through Genesi's PowerDeveloper.org website. Community support Genesi designed and maintains PowerDeveloper, an online platform for Genesi products and ARM products from other manufacturers. Via the PowerDeveloper Projects programs, hundreds of systems have been provided to the PowerDeveloper community so far, thereby supporting open source development in many countries. Linux distributions that directly benefited from the programs include but are not limited to Crux, Debian, Raspbian, Fedora, Gentoo, openSuSE and Ubuntu. Genesi once funded the development of the MorphOS operating system but shifted its focus towards Linux in 2004. However, Genesi remains the main supporter of the operating system and continues to actively support its user and developer communities via the MorphZone social platform, which features discussion forums, a digital library, a software repository and a bounty system. External links Genesi USA Inc. Power2People PowerDeveloper MorphZone Genesi Group Genesi Americas Red Efika bplan Notes ARM architecture Computer companies of the United States Computer hardware companies Amiga companies
Genesi
[ "Technology" ]
723
[ "Computer hardware companies", "Computers" ]
8,564,483
https://en.wikipedia.org/wiki/Bogdanov%E2%80%93Takens%20bifurcation
In bifurcation theory, a field within mathematics, a Bogdanov–Takens bifurcation is a well-studied example of a bifurcation with co-dimension two, meaning that two parameters must be varied for the bifurcation to occur. It is named after Rifkat Bogdanov and Floris Takens, who independently and simultaneously described this bifurcation. A system y′ = f(y) undergoes a Bogdanov–Takens bifurcation if it has a fixed point and the linearization of f around that point has a double eigenvalue at zero (assuming that some technical nondegeneracy conditions are satisfied). Three codimension-one bifurcations occur nearby: a saddle-node bifurcation, an Andronov–Hopf bifurcation and a homoclinic bifurcation. All associated bifurcation curves meet at the Bogdanov–Takens bifurcation. The normal form of the Bogdanov–Takens bifurcation is y₁′ = y₂, y₂′ = β₁ + β₂y₁ + y₁² ± y₁y₂. There exist two codimension-three degenerate Takens–Bogdanov bifurcations, also known as Dumortier–Roussarie–Sotomayor bifurcations. References Bogdanov, R. "Bifurcations of a Limit Cycle for a Family of Vector Fields on the Plane." Selecta Math. Soviet 1, 373–388, 1981. Kuznetsov, Y. A. Elements of Applied Bifurcation Theory. New York: Springer-Verlag, 1995. Takens, F. "Forced Oscillations and Bifurcations." Comm. Math. Inst. Rijksuniv. Utrecht 2, 1–111, 1974. Dumortier, F., Roussarie, R., Sotomayor, J. and Zoladek, H. Bifurcations of Planar Vector Fields. Lecture Notes in Math. vol. 1480, 1–164, Springer-Verlag (1991). External links Bifurcation theory
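The normal form integrates easily, which makes the nearby saddle-node picture concrete. A sketch with the + sign chosen for the last term; the parameter values, the initial point, and the plain Euler integrator are all illustrative choices (Python):

    def bt_rhs(y1, y2, b1, b2, sigma=1.0):
        # Bogdanov-Takens normal form: y1' = y2, y2' = b1 + b2*y1 + y1^2 + sigma*y1*y2
        return y2, b1 + b2 * y1 + y1 * y1 + sigma * y1 * y2

    def integrate(y1, y2, b1, b2, dt=1e-3, steps=200_000):
        for _ in range(steps):
            d1, d2 = bt_rhs(y1, y2, b1, b2)
            y1, y2 = y1 + dt * d1, y2 + dt * d2
        return y1, y2

    # Equilibria sit at y2 = 0 with b1 + b2*y1 + y1^2 = 0; they collide on the
    # saddle-node curve b2^2 = 4*b1. Below, the trajectory settles near the
    # stable equilibrium y1 ≈ -0.062:
    print(integrate(-0.1, 0.0, b1=-0.01, b2=-0.1))
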
Bogdanov–Takens bifurcation
[ "Mathematics" ]
435
[ "Bifurcation theory", "Dynamical systems" ]
8,564,970
https://en.wikipedia.org/wiki/Landau%E2%80%93Kolmogorov%20inequality
In mathematics, the Landau–Kolmogorov inequality, named after Edmund Landau and Andrey Kolmogorov, is the following family of interpolation inequalities between different derivatives of a function f defined on a subset T of the real numbers: ‖f^(k)‖_∞ ≤ C(n, k, T) ‖f‖_∞^(1−k/n) ‖f^(n)‖_∞^(k/n), for 1 ≤ k < n. On the real line For k = 1, n = 2 and T = [c,∞) or T = R, the inequality was first proved by Edmund Landau, with the sharp constants C(2, 1, [c,∞)) = 2 and C(2, 1, R) = √2. Following contributions by Jacques Hadamard and Georgiy Shilov, Andrey Kolmogorov found the sharp constants for arbitrary n, k: C(n, k, R) = a_(n−k) a_n^(−1+k/n), where the a_n are the Favard constants. On the half-line Following work by Matorin and others, the extremising functions were found by Isaac Jacob Schoenberg; explicit forms for the sharp constants are, however, still unknown. Generalisations There are many generalisations, which are of the form ‖f^(k)‖_q ≤ K ‖f‖_p^α ‖f^(n)‖_r^(1−α). Here all three norms can be different from each other (from L1 to L∞, with p = q = r = ∞ in the classical case) and T may be the real axis, semiaxis or a closed segment. The Kallman–Rota inequality generalizes the Landau–Kolmogorov inequalities from the derivative operator to more general contractions on Banach spaces. Notes Inequalities
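As a quick numerical illustration of Landau's sharp case C(2, 1, R) = √2 (the choice of sin as test function is mine; any smooth bounded function works, Python):

    import math

    # sup norms of f, f', f'' on R for f = sin
    sup_f, sup_df, sup_d2f = 1.0, 1.0, 1.0
    bound = math.sqrt(2) * sup_f ** 0.5 * sup_d2f ** 0.5
    print(sup_df <= bound, bound)  # True, 1.414... (sin does not attain the constant)
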
Landau–Kolmogorov inequality
[ "Mathematics" ]
300
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
8,565,075
https://en.wikipedia.org/wiki/Bruno%20Th%C3%BCring
Bruno Jakob Thüring (7 September 1905, in Warmensteinach – 6 May 1989, in Karlsruhe) was a German physicist and astronomer. Life and career Thüring studied mathematics, physics, and astronomy at the University of Munich and received his doctorate in 1928 under Alexander Wilkens. Wilkens was a professor of astronomy and director of the Munich Observatory, which was part of the University. From 1928 to 1933, Thüring was an assistant at the Munich Observatory. From 1934 to 1935, he was an assistant to Heinrich Vogt at the University of Heidelberg. Thüring completed his Habilitation there in 1935, whereupon he became an Observator at the Munich Observatory. In 1937, Thüring became a lecturer (Dozent) at the University of Munich. From 1940 to 1945, he held the chair for astronomy at the University of Vienna and was director of the Vienna Observatory. After 1945, Thüring lived as a private scholar in Karlsruhe. During the reign of Adolf Hitler, Thüring was a proponent of Deutsche Physik, as were the two Nobel Prize-winning physicists Johannes Stark and Philipp Lenard; Deutsche Physik was anti-Semitic and biased against theoretical physics, especially quantum mechanics. He was also a student of the philosophy of Hugo Dingler. Thüring was an opponent of Albert Einstein's theory of relativity. Books Bruno Thüring (Georg Lüttke Verlag, 1941) Bruno Thüring (Göller, 1957) Bruno Thüring (Göller, 1958) Bruno Thüring (Duncker u. Humblot GmbH, 1967) Bruno Thüring (Duncker & Humblot GmbH, 1978) Bruno Thüring (Haag u. Herchen, 1985) Notes References Clark, Ronald W. Einstein: The Life and Times (World, 1971) 1905 births 1989 deaths 20th-century German physicists 20th-century German astronomers Academic staff of Heidelberg University Ludwig Maximilian University of Munich alumni Academic staff of the Ludwig Maximilian University of Munich Scientists from the Kingdom of Bavaria Relativity critics Science teachers Academic staff of the University of Vienna
Bruno Thüring
[ "Physics" ]
421
[ "Relativity critics", "Theory of relativity" ]
8,565,423
https://en.wikipedia.org/wiki/No-teleportation%20theorem
In quantum information theory, the no-teleportation theorem states that an arbitrary quantum state cannot be converted into a sequence of classical bits (or even an infinite number of such bits); nor can such bits be used to reconstruct the original state, thus "teleporting" it by merely moving classical bits around. Put another way, it states that the unit of quantum information, the qubit, cannot be exactly, precisely converted into classical information bits. This should not be confused with quantum teleportation, which does allow a quantum state to be destroyed in one location and an exact replica to be created at a different location. In crude terms, the no-teleportation theorem stems from the Heisenberg uncertainty principle and the EPR paradox: although a qubit can be imagined as a specific direction on the Bloch sphere, that direction cannot be measured precisely in the general case; if it could, the results of that measurement would be describable in words, i.e., classical information. The no-teleportation theorem is implied by the no-cloning theorem: if it were possible to convert a qubit into classical bits, then a qubit would be easy to copy (since classical bits are trivially copyable). Formulation The term quantum information refers to information stored in the state of a quantum system. Two quantum states ρ1 and ρ2 are identical if the measurement results of any physical observable have the same expectation value for ρ1 and ρ2. Thus measurement can be viewed as an information channel with quantum input and classical output; that is, performing a measurement on a quantum system transforms quantum information into classical information. On the other hand, preparing a quantum state takes classical information to quantum information. In general, a quantum state is described by a density matrix. Suppose one has a quantum system in some mixed state ρ. Prepare an ensemble of the same system as follows: Perform a measurement on ρ. According to the measurement outcome, prepare a system in some pre-specified state. The no-teleportation theorem states that the result will be different from ρ, irrespective of how the preparation procedure is related to the measurement outcome. A quantum state cannot be determined via a single measurement. In other words, if a quantum channel measurement is followed by preparation, it cannot be the identity channel. Once converted to classical information, quantum information cannot be recovered. In contrast, perfect transmission is possible if one wishes to convert classical information to quantum information and then back to classical information. For classical bits, this can be done by encoding them in orthogonal quantum states, which can always be distinguished. See also Among other no-go theorems in quantum information are: No-communication theorem. Entangled states cannot be used to transmit classical information. No-cloning theorem. Quantum states cannot be copied. No-broadcast theorem. A generalization of the no-cloning theorem to the case of mixed states. No-deleting theorem. A result dual to the no-cloning theorem: copies cannot be deleted. With the aid of shared entanglement, quantum states can be teleported; see Quantum teleportation. References Jozef Gruska, Hiroshi Imai, "Power, Puzzles and Properties of Entanglement" (2001) pp 25–68, appearing in Machines, Computations, and Universality: Third International Conference, edited by Maurice Margenstern, Yurii Rogozhin. 
(see p. 41) Anirban Pathak, Elements of Quantum Computation and Quantum Communication (2013) CRC Press (see p. 128) Quantum information theory Limits of computation No-go theorems
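The measure-then-re-prepare failure described under Formulation is easy to see in simulation. A sketch in which the specific states and the Z-basis measurement are illustrative choices (Python):

    import math, random

    def measure_z(theta):
        """One-shot Z measurement of the qubit cos(t/2)|0> + sin(t/2)|1>."""
        p0 = math.cos(theta / 2.0) ** 2
        return 0 if random.random() < p0 else 1

    # Two different qubit states yield overlapping one-bit outcomes, so a
    # single measurement cannot identify theta - re-preparing from that bit
    # is not the identity channel:
    print(measure_z(1.0), measure_z(1.3))

    # Classical bits encoded in orthogonal states, however, round-trip exactly:
    for b in (0, 1):
        assert measure_z(0.0 if b == 0 else math.pi) == b  # deterministic
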
No-teleportation theorem
[ "Physics" ]
756
[ "Physical phenomena", "No-go theorems", "Equations of physics", "Limits of computation", "Physics theorems" ]
5,399,607
https://en.wikipedia.org/wiki/Courtesy%20call
A courtesy call is a call or visit made out of politeness. It is usually made between two parties of high position, such as government officials, to meet and briefly discuss important or pressing matters. Diplomacy In diplomacy, a courtesy call is a formal meeting in which a diplomat, representative, or famous person of a nation pays a visit out of courtesy to a head of state or state office holder. Courtesy calls may be paid by another head of state, a prime minister, a government minister, or a diplomat. The meeting is usually of symbolic value and rarely involves a detailed discussion of issues. A newly appointed head of mission will usually make a courtesy call to the receiving foreign minister, the head of government, and often other dignitaries such as the local mayor. It is also customary for a new head of mission to make courtesy calls to other heads of missions in the capital and often to receive return courtesy calls. Neglecting to pay a courtesy call to missions of smaller countries may result in them resenting the newly arrived mission head. Upon the departure of a head of mission, an additional round of courtesy calls is often expected. Fulfilling this protocol obligation is a time-consuming task, with one diplomat noting it took him five months to complete a round in Washington, DC. Diplomatic convention states that courtesy calls last 20 minutes, which in some cases is excessive, with both sides searching frantically for what to say, though some ambassadors consult an encyclopedia prior to the call to prepare talking points. In other cases, in which the two sides have joint items to discuss, a call may last an hour or two. Diplomatic personnel are split on the value of courtesy calls: some see them as a time-wasting tradition, while others see them as a means to secure a valuable introduction. In some cases, it is possible to arrange a joint courtesy call by visiting a senior ambassador who will, by prearrangement, assemble his regional colleagues for the meeting. Courtesy calls to cabinet members and members of parliament or congress are important and may lay the foundation for a continuing relationship. In Western democracies, ambassadors will pay calls to leaders of minor and major opposition parties, as a change of government may occur at a future point. Such calls are important, and the ambassador must take care to cultivate the opposition without offending the incumbents. Calls to civic dignitaries of major cities, newspaper editors, and trade unions are also performed. Naval Naval courtesy calls were common in the 19th century. The American Great White Fleet paid a series of courtesy calls to ports around the world in a show of American naval strength in 1907–1909. United States Navy regulations require that, upon joining a new ship or station, an officer make a courtesy call to his new commanding officer or commandant within 48 hours. Business In business, a courtesy call is a visit or call from a company to customers for the purposes of gauging satisfaction or thanking them for their patronage. References State ritual and ceremonies Etiquette
Courtesy call
[ "Biology" ]
595
[ "Etiquette", "Behavior", "Human behavior" ]
5,399,648
https://en.wikipedia.org/wiki/CXOU%20J061705.3%2B222127
CXOU J061705.3+222127 is a neutron star. It was likely formed 30,000 years ago in the supernova that created the supernova remnant IC 443, the "Jellyfish Nebula." It is travelling at approximately 800,000 km/h away from the site. References See also IC 443, the likely supernova remnant of the creation event of the star. Gemini (constellation) Neutron stars
CXOU J061705.3+222127
[ "Astronomy" ]
94
[ "Gemini (constellation)", "Constellations" ]
5,399,808
https://en.wikipedia.org/wiki/Pirate%20game
The pirate game is a simple mathematical game. It is a multi-player version of the ultimatum game. The game There are five rational pirates (in strict decreasing order of seniority: A, B, C, D and E) who found 100 gold coins. They must decide how to distribute them. The pirate world's rules of distribution say that the most senior pirate first proposes a plan of distribution. The pirates, including the proposer, then vote on whether to accept this distribution. If the majority accepts the plan, the coins are disbursed and the game ends. In case of a tie vote, the proposer has the casting vote. If the majority rejects the plan, the proposer is thrown overboard from the pirate ship and dies, and the next most senior pirate makes a new proposal to begin the system again. The process repeats until a plan is accepted or only one pirate is left. Pirates base their decisions on four factors: Each pirate wants to survive. Given survival, each pirate wants to maximize the number of gold coins he receives. Each pirate would prefer to throw another overboard, if all other results would otherwise be equal. The pirates do not trust each other, and will neither make nor honor any promises between pirates apart from a proposed distribution plan that gives a whole number of gold coins to each pirate. The result To increase the chance of his plan being accepted, one might expect that pirate A will have to offer the other pirates most of the gold. However, this is far from the theoretical result. When each of the pirates votes, they will not just be thinking about the current proposal, but also about other outcomes down the line. In addition, the order of seniority is known in advance, so each of them can accurately predict how the others might vote in any scenario. This becomes apparent if we work backwards. The final possible scenario would have all the pirates except D and E thrown overboard. Since D is senior to E, D has the casting vote; so D would propose to keep 100 for himself and 0 for E. If there are three left (C, D and E), C knows that D will offer E 0 in the next round; therefore, C has to offer E one coin in this round to win E's vote. Therefore, when only three are left, the allocation is C:99, D:0, E:1. If B, C, D and E remain, B can offer 1 to D; because B has the casting vote, only D's vote is required. Thus, B proposes B:99, C:0, D:1, E:0. (One might instead consider proposing B:99, C:0, D:0, E:1, as E knows that no more coins will be forthcoming by throwing B overboard. But, as each pirate is eager to throw the others overboard, E would prefer to kill B and get the same amount of gold from C.) With this knowledge, A can count on C and E's support for the following allocation, which is the final solution: A: 98 coins B: 0 coins C: 1 coin D: 0 coins E: 1 coin (Note: A:98, B:0, C:0, D:1, E:1 or other variants are not good enough, as D would rather throw A overboard to get the same amount of gold from B.) Extension The solution follows the same general pattern for other numbers of pirates and/or coins. However, the game changes in character when it is extended beyond there being twice as many pirates as there are coins. Ian Stewart wrote about Steve Omohundro's extension to an arbitrary number of pirates in the May 1999 edition of Scientific American and described the rather intricate pattern that emerges in the solution. 
Supposing there are just 100 gold pieces, then: Pirate #201 as captain can stay alive only by offering all the gold, one coin each, to the 100 lowest odd-numbered pirates, keeping none for himself. Pirate #202 as captain can stay alive only by taking no gold and offering one gold coin each to 100 pirates who would not receive a gold coin from #201. There are therefore 101 possible recipients of these one-gold-coin bribes: the 100 even-numbered pirates up to #200, and #201. Since there are no constraints as to which 100 of these 101 he will choose, any choice is equally good, and they can be thought of as being chosen at random. This is how chance begins to enter the considerations for higher-numbered pirates. Pirate #203 as captain will not have enough gold available to bribe a majority, and so will die. Pirate #204 as captain has #203's vote secured without bribes: #203 will only survive if #204 also survives. So #204 can remain safe by reaching 102 votes, bribing 100 pirates with one gold coin each. This seems most likely to work by bribing odd-numbered pirates, optionally including #202, who will get nothing from #203. However, it may also be possible to bribe others instead, as they only have a 100/101 chance of being offered a gold coin by pirate #202. With 205 pirates, all pirates bar #205 prefer to kill #205 unless given gold, so #205 is doomed as captain. Similarly, with 206 or 207 pirates, only the votes of #205 through #206 or #207 are secured without gold, which is not enough votes, so #206 and #207 are also doomed. For 208 pirates, the self-preservation votes of #205, #206, and #207, secured without any gold, are enough to allow #208 to reach 104 votes and survive. In general, if G is the number of gold pieces and N (> 2G) is the number of pirates, then: All pirates whose number is less than or equal to 2G + M will survive, where M is the highest power of 2 that does not exceed N – 2G. Any pirates whose number exceeds 2G + M will die. Any pirate whose number is greater than 2G + M/2 will receive no gold. There is no unique solution as to who gets one gold coin and who does not if the number of pirates is 2G+2 or greater. A simple solution dishes out one gold coin to the odd or even pirates up to 2G, depending on whether M is an even or odd power of 2. Another way to see this is to realize that every pirate M will have the vote of all the pirates from M/2 + 1 to M out of self-preservation, since their survival is secured only with the survival of pirate M. Because the highest-ranking pirate can break the tie, the captain only needs the votes of half of the pirates over 2G, which only happens each time (2G + a power of 2) is reached. For instance, with 100 gold pieces and 500 pirates, pirates #500 through #457 die, and then #456 survives (as 456 = 200 + 2^8 = 200 + 256), as they have the 128 guaranteed self-preservation votes of pirates #329 through #456, plus 100 votes from the pirates they bribe, making up the 228 votes that they need. The numbers of pirates past #200 who can guarantee their survival as captain with 100 gold pieces are #201, #202, #204, #208, #216, #232, #264, #328, #456, #712, etc.: they are separated by longer and longer strings of pirates who are doomed no matter what division they propose. See also Creative problem solving Lateral thinking Notes References Non-cooperative games Puzzles
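The survival rule quoted in the Extension section above (survivors as captain at 2G plus a power of 2) is easy to check mechanically. A minimal sketch, assuming only what the text states; the function name is illustrative:

```python
def survives_as_captain(n, g=100):
    # With n pirates and g gold pieces, per the rule above: any captain
    # numbered up to 2g can bribe a majority outright; beyond that, the
    # captain survives iff n - 2g is a power of two.
    if n <= 2 * g:
        return True
    m = n - 2 * g
    return m & (m - 1) == 0  # power-of-two test

print([n for n in range(201, 800) if survives_as_captain(n)])
# -> [201, 202, 204, 208, 216, 232, 264, 328, 456, 712]
```

Running it reproduces exactly the sequence of safe captains listed in the text, with the lengthening gaps of doomed pirates between them.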
Pirate game
[ "Mathematics" ]
1,556
[ "Game theory", "Non-cooperative games" ]
5,399,866
https://en.wikipedia.org/wiki/Blue%20Mesa%20Reservoir
Blue Mesa Reservoir is an artificial reservoir located on the upper reaches of the Gunnison River in Gunnison County, Colorado. The largest lake located entirely within the state, Blue Mesa Reservoir was created by the construction of Blue Mesa Dam, an earthen-fill dam constructed on the Gunnison by the U.S. Bureau of Reclamation in 1966 for the generation of hydroelectric power. Managed as part of the Curecanti National Recreation Area, a unit of the National Park Service, Blue Mesa Reservoir is the largest lake trout and Kokanee salmon fishery in Colorado. History In 1956, the U.S. Bureau of Reclamation was given the responsibility, under the Colorado River Storage Project Act, to begin planning and construction of the Colorado River Storage Project (CRSP), a series of projects in Colorado, New Mexico, Utah, and Wyoming that would make possible comprehensive development of the waters of the Colorado River and its major tributaries. One of the initial projects of the CRSP, the Curecanti Unit, focused on the upper reaches of the Gunnison River, the fifth-largest tributary of the Colorado River. The centerpiece of the plans for the Gunnison was the construction of four dams on a stretch of the river east of the National Park Service's Black Canyon of the Gunnison National Park, projects that would not only help control the amount of water flowing into the Colorado, but would also create new opportunities for flood control, water storage, and the generation of hydroelectric power. The first of these dams was Blue Mesa Dam, which was begun in 1962 approximately 30 miles west of Gunnison, west of Sapinero. Finished four years later, the dam created Blue Mesa Reservoir, which became the primary water storage reservoir for the Curecanti Unit (later renamed the Wayne N. Aspinall Unit). Climate According to the Köppen Climate Classification system, Blue Mesa Reservoir has a warm-summer humid continental climate, abbreviated "Dfb" on climate maps. The hottest temperature on record here was set on July 22, 2005, and the coldest on January 7, 1971. Location and access Blue Mesa Dam is located on the Gunnison River approximately 30 miles west of the city of Gunnison, near the intersection of U.S. 50 with Colorado Highway 92, which travels along the top of the dam. The reservoir extends east 20 miles and is composed of three main basins, the Iola, the Cebolla and the Sapinero, from east to west. U.S. 50 traverses the northern shore of both the Iola and Cebolla Basins before crossing the reservoir south on the Middle Bridge and continuing west to the dam. The southern shore of the Iola Basin can be reached via Colorado Highway 149, which begins at an intersection with U.S. 50 at the Lake City Bridge, approximately 7 miles west of the city of Gunnison. While most of the recreational areas at Blue Mesa can be accessed from U.S. 50, the reservoir also contains a small number of facilities on the lake's deep arms, such as Cebolla Creek and Soap Creek, which can only be reached by boat or unpaved road. Recreation activities While it was the Bureau of Reclamation that conceived of the plan to impound the Gunnison and constructed the Blue Mesa Dam, it was the National Park Service that was tasked with developing and managing recreational facilities at Blue Mesa Reservoir and the two smaller lakes to the west.
As a unit of the Curecanti National Recreation Area, Blue Mesa Reservoir offers a number of recreational opportunities, including boating, fishing, hiking, horseback riding, hunting, and boat-in, developed, and primitive camping. A popular lake for boating, Blue Mesa has marinas at Elk Creek and Lake Fork, near the dam, both of which can be accessed by U.S. 50. Watercraft can also be launched from the Ponderosa and Stevens Creek campgrounds, and at Iola, on the reservoir's southern shore. During the winter months, the Iola Basin is also a popular spot for ice fishing. Blue Mesa contains eight developed campgrounds, two of which are designated for groups. These range from the 160-site Elk Creek on the main body of the lake to smaller, more remote sites like Ponderosa and Gateview located on arms of the lake. Several of the campsites can accommodate RVs, but only Elk Creek offers electrical hook-ups. Boaters may camp overnight in four free camping areas with a total of nine individual sites. Boaters may also camp on the southern shore of the Cebolla and Iola Basins, as long as campsites are not within a half-mile of any developed area, bridge, maintained public road or other boat-in/backcountry campsite. In 2022, decreases in Blue Mesa's already low water level, caused by drought and the need to release water to aid Lake Powell downriver, led to the first cancellation of the boating season, announced on May 1, 2022. Federal and state water managers indicated in early May 2022 that the plan for the year would allow Blue Mesa Reservoir to recover some water, although levels would remain insufficient for opening the marinas. Nearby attractions In addition to Blue Mesa Reservoir, Curecanti NRA also contains two other Bureau of Reclamation projects, Morrow Point Reservoir and Crystal Reservoir. Part of the same project that created Blue Mesa, both Morrow Point and Crystal are smaller, narrower lakes, located within the Black Canyon of the Gunnison. Though considerably harder to access than Blue Mesa, these two lakes nevertheless offer visitors unique views and challenging recreational opportunities. West of Blue Mesa and immediately downriver from Crystal Dam is Black Canyon of the Gunnison National Park, a National Park Service unit that offers camping, hiking, and views of the river and the surrounding 1,900-foot-deep canyon. North of the reservoir is the Gunnison Ranger District of the Grand Mesa, Uncompahgre and Gunnison National Forests, a unit of the U.S. Forest Service. The Gunnison Ranger District includes almost 30 campgrounds and a number of hiking and equestrian trails spread across 1.3 million acres. Towns near Blue Mesa include Gunnison to the east and Montrose and Delta to the west, all of which can be accessed via U.S. 50. See also List of largest reservoirs of Colorado Colorado River Storage Project Curecanti National Recreation Area Blue Mesa Dam Morrow Point Reservoir Crystal Reservoir References External links Bureau Of Reclamation: Blue Mesa Dam NPS: Curecanti National Recreation Area Reservoirs in Colorado Gunnison River Colorado River Storage Project Curecanti National Recreation Area Reservoirs and dams in National Park Service areas Lakes of Gunnison County, Colorado Protected areas of Gunnison County, Colorado 1965 establishments in Colorado
Blue Mesa Reservoir
[ "Engineering" ]
1,356
[ "Colorado River Storage Project" ]
5,399,888
https://en.wikipedia.org/wiki/Blue%20Mesa%20Dam
Blue Mesa Dam is a zoned earthfill dam on the Gunnison River in Colorado. It creates Blue Mesa Reservoir, and is within Curecanti National Recreation Area just before the river enters the Black Canyon of the Gunnison. The dam is upstream of the Morrow Point Dam. Blue Mesa Dam and reservoir are part of the Bureau of Reclamation's Wayne N. Aspinall Unit of the Colorado River Storage Project, which retains the waters of the Colorado River and its tributaries for agricultural and municipal use in the American Southwest. Although the dam does produce hydroelectric power, its primary purpose is water storage. State Highway 92 passes over the top of the dam. Blue Mesa Dam houses two turbine generators and produces an average of 264,329,000 kilowatt-hours each year. Description The dam stands in an area where sandstone and shale overlie pre-Cambrian granite, schist and gneiss. It is situated at a narrows in the river valley where the Gunnison enters the upper reaches of the Black Canyon of the Gunnison. The spillway intake structure has two radial gates. These discharge into a concrete-lined tunnel which in turn discharges through a flip bucket into a stilling basin. History The Curecanti Project (later renamed the Wayne N. Aspinall Project) was conceived in 1955, initially with four dams. It was approved by the Secretary of the Interior in 1959, comprising Blue Mesa Dam and Morrow Point Dam. Crystal Dam's design was unfinished and was approved in 1962. Plans for a fourth dam were dropped as uneconomical. The project was restricted to the stretch of the Gunnison above Black Canyon of the Gunnison National Monument (later designated a national park). Initially planned as a concrete dam, the project was changed to an earth-fill design. Work on the dam started in 1961, with foundation drilling and survey work. Construction of the reservoir required the relocation of US 50 and State Highway 149. This relocation was among the first work to be performed, starting in 1962 and continuing through 1964. The Sapinero Cemetery was also relocated. The primary construction contract for the dam was awarded to the Tecon Corporation of Dallas, Texas, with notice to proceed on April 23, 1962. The diversion tunnel was holed through on September 7, 1962, with the excavation of the spillway tunnel completed by April 1963. Drilling and grouting for the dam's foundation started in March 1963. The Gunnison was diverted through its tunnel in October, with excavation of the foundation to bedrock immediately after. Placement of the dam embankments started in 1964 and continued through the year, with the dam embankment completed at the end of 1965. The diversion tunnel was partly closed in December and the reservoir began to fill, with the final closure of the diversion tunnel on February 7, 1966. The dam project was declared complete on October 19, 1966. The powerplant project was delayed when a transformer was damaged in a delivery accident near Monarch Pass in September 1966 and had to be shipped back to its manufacturer in Sweden for repair. The powerplant was completed on February 16, 1968. Spillway modifications took place in 1984-85 to repair damage, and a uniform, largely cosmetic covering of riprap was applied to the dam face. Powerplant The Blue Mesa Powerplant is fed by a single penstock, which supplies two turbines as well as the outlet works. The laterals feeding the Francis turbines are controlled by butterfly valves.
The initial generating capacity was 60 MW, increased in 1988 to 86.4 MW. The powerplant is located above ground at the toe of the dam. It operates as a peaking plant. References External links Blue Mesa Dam at the Bureau of Reclamation Blue Mesa Powerplant at the Bureau of Reclamation Wayne N. Aspinall Storage Unit at Curecanti National Recreation Area Dams in Colorado Hydroelectric power plants in Colorado Buildings and structures in Gunnison County, Colorado Colorado River Storage Project Curecanti National Recreation Area United States Bureau of Reclamation dams Dams completed in 1966 Dams on the Gunnison River Earth-filled dams 1966 establishments in Colorado
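The figures quoted in these two paragraphs permit a rough consistency check: a plant averaging 264,329,000 kWh per year on an 86.4 MW rating runs at about a 35% capacity factor, which fits peaking operation. A small Python sketch of that arithmetic (variable names are illustrative only, and the calculation is mine, not from the source):

```python
# Back-of-the-envelope capacity factor from the figures above.
rated_mw = 86.4                  # capacity after the 1988 uprating
annual_kwh = 264_329_000         # average annual generation
hours_per_year = 8_760

max_kwh = rated_mw * 1_000 * hours_per_year   # if run flat-out all year
capacity_factor = annual_kwh / max_kwh
print(f"{capacity_factor:.1%}")  # ~34.9% -- consistent with a peaking plant
```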
Blue Mesa Dam
[ "Engineering" ]
840
[ "Colorado River Storage Project" ]
5,400,483
https://en.wikipedia.org/wiki/270%20%28number%29
270 (two hundred [and] seventy) is the natural number following 269 and preceding 271. In mathematics 270 is a harmonic divisor number. 270 degrees is equal to three-fourths of a turn. This also means that 270° equals 3π/2 radians. In politics In the United States electoral college, 270 is the minimum number of electors, out of 538, required for a presidential candidate to be elected to the presidency. In other fields 270 (meaning 270°) is an informal way of referring to 3/4 of a turn, and is used as such in some sports. For instance, Yuto Horigome won gold in street skateboarding at the 2024 Olympic Games by performing his signature trick, a "nollie 270 noseblunt slide". References Integers
270 (number)
[ "Mathematics" ]
159
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
5,400,781
https://en.wikipedia.org/wiki/Morrow%20Point%20Dam
Morrow Point Dam is a concrete double-arch dam on the Gunnison River located in Colorado, the first dam of its type built by the U.S. Bureau of Reclamation. Located in the upper Black Canyon of the Gunnison, it creates Morrow Point Reservoir, and is within the National Park Service-operated Curecanti National Recreation Area. The dam is between the Blue Mesa Dam (upstream) and the Crystal Dam (downstream). Morrow Point Dam and reservoir are part of the Bureau of Reclamation's Wayne N. Aspinall Unit of the Colorado River Storage Project, which retains the waters of the Colorado River and its tributaries for agricultural and municipal use in the American Southwest. The dam's primary purpose is hydroelectric power generation. Description The dam, powerplant and reservoir are contained in pre-Cambrian metamorphic rocks, primarily micaceous quartzite, quartz-mica, mica and biotite schists, with granitic veining. The dam site is in a narrow canyon. The spillway discharge falls into a stilling basin whose waters are retained by a weir below the dam. Intake structures near the south abutment feed two steel-lined penstock tunnels leading to the powerplant. A minimum streamflow is maintained at all times. History The Curecanti Project (later renamed the Wayne N. Aspinall Project) was conceived in 1955, initially with four dams. It was approved by the Secretary of the Interior in 1959, comprising Blue Mesa Dam and Morrow Point Dam. Crystal Dam's design was unfinished and was approved in 1962. Plans for a fourth dam were dropped as uneconomical. The project was restricted to the stretch of the Gunnison above Black Canyon of the Gunnison National Monument (later designated a national park). Work began at the damsite in 1961 with foundation drilling. In 1962 the power plant exploratory tunnel was excavated. The construction contract for the dam was awarded to a joint venture between the Al Johnson Construction Company and Morrison-Knudsen, with notice to proceed given on June 13, 1963. Access roads and a diversion tunnel were begun that year, with the diversion tunnel complete by May 1964. Keyway excavation on either side of the dam continued through 1964. In 1965 work got underway on the powerplant, with several tunnels started. Concrete for the dam was first placed on September 3, 1965. The powerplant was excavated by April 1966. Final concrete placement on the dam took place on September 14, 1967. The diversion tunnel was closed on January 24, 1968, with releases through the outlet structures the next day. Final completion was achieved for the dam on October 7, 1968, while work continued on the powerplant. The plant was accepted and a visitor center was completed in 1971, with final completion on May 12, 1972. The dam's grout curtain was extended in 1970, using asphaltic emulsion and cement grout, after leakage into the power plant reached 429 gallons per minute; the work reduced leakage to 37 gallons per minute. Powerplant Morrow Point Dam's powerplant is tunneled into the canyon wall below the surface at the dam's left abutment. It houses two 86.667 MW generators, each uprated from 60 MW in 1992-1993. First operating in 1970, it is operated as a peaking plant. An exploratory tunnel became a ventilation tunnel, while initial access during construction was made through the cable tunnel, with two headings raising the head of the tunnel arch.
An access tunnel intersects the generating hall at a right angle, with two draft tubes excavated below. During the irrigation season the powerplant is operated as a base-load plant; it provides peaking power in other seasons. References External links Morrow Point Dam at the Bureau of Reclamation Morrow Point Powerplant at the Bureau of Reclamation Wayne N. Aspinall Storage Unit at Curecanti National Recreation Area Dams in Colorado Dams on the Gunnison River Arch dams Buildings and structures in Montrose County, Colorado Colorado River Storage Project Curecanti National Recreation Area Hydroelectric power plants in Colorado United States Bureau of Reclamation dams Dams completed in 1968 Energy infrastructure completed in 1968 1968 establishments in Colorado
Morrow Point Dam
[ "Engineering" ]
861
[ "Colorado River Storage Project" ]
5,400,794
https://en.wikipedia.org/wiki/Vesicular%20monoamine%20transporter%201
Vesicular monoamine transporter 1 (VMAT1), also known as chromaffin granule amine transporter (CGAT) or solute carrier family 18 member 1 (SLC18A1), is a protein that in humans is encoded by the SLC18A1 gene. VMAT1 is an integral membrane protein, which is embedded in synaptic vesicles and serves to transfer monoamines, such as norepinephrine, epinephrine, dopamine, and serotonin, between the cytosol and synaptic vesicles. SLC18A1 is an isoform of the vesicular monoamine transporter. Discovery The idea that there must be specific transport proteins associated with the uptake of monoamines and acetylcholine into vesicles developed due to the discovery of specific inhibitors which interfered with monoamine neurotransmission and also depleted monoamines in neuroendocrine tissues. VMAT1 and VMAT2 were first identified in rats upon cloning cDNAs for proteins which gave non-amine-accumulating recipient cells the ability to sequester monoamines. Subsequently, human VMATs were cloned using human cDNA libraries with the rat homologs as probes, and heterologous-cell amine uptake assays were performed to verify transport properties. Structure Across mammalian species, VMATs have been found to be structurally well conserved; VMAT1s have an overall sequence identity exceeding 80%. However, there exists only a 60% sequence identity between the human VMAT1 and VMAT2. VMAT1 is an acidic glycoprotein with an apparent weight of 40 kDa. Although the crystallographic structure has not yet been fully resolved, VMAT1 is known to have either twelve transmembrane domains (TMDs), based on Kyte-Doolittle hydrophobicity scale analysis, or ten TMDs, based on MAXHOM alignment. MAXHOM alignment was determined using the "profile-fed neural network systems from Heidelberg" (PHD) program. The main difference between these two models arises from the placement of TMDs II and IV in the vesicle lumen or the cytoplasm. Localization Cell types VMATs are found in a variety of cell types throughout the body; however, VMAT1 is found exclusively in neuroendocrine cells, in contrast to VMAT2, which is also found in the PNS and CNS. Specifically, VMAT1 is found in chromaffin cells, enterochromaffin cells, and small intensely fluorescent cells (SIFs). Chromaffin cells are responsible for releasing the catecholamines (norepinephrine and epinephrine) into systemic circulation. Enterochromaffin cells are responsible for storing serotonin in the gastrointestinal tract. SIFs are interneurons associated with the sympathetic nervous system and are regulated by dopamine. Vesicles VMAT1 is found in both large dense-core vesicles (LDCVs) as well as in small synaptic vesicles (SSVs). This was discovered by studying rat adrenal medulla (PC12) cells. LDCVs are 70-200 nm in size and exist throughout the neuron (soma, dendrites, etc.). SSVs are much smaller (usually about 40 nm) and typically exist as clusters in the presynaptic terminal. Function Active transport of monoamines Driving force The active transport of monoamines from the cytosol into storage vesicles operates against a large (>10^5) concentration gradient. Secondary active transport is the type of active transport used, meaning that VMAT1 is an antiporter. This transport is facilitated via a proton gradient generated by a vesicular proton ATPase. The inward transport of the monoamine is coupled with the efflux of two protons per monoamine.
The first proton is thought to cause a change in VMAT1's conformation, which exposes a high-affinity amine-binding site, to which the monoamine attaches. The second proton then causes a second change in the conformation, which pulls the monoamine into the vesicle and greatly reduces the affinity of the binding site for amines. A series of tests suggests that His419, located between TMDs X and XI, plays the key role in the first of these conformational changes, and that Asp431, located on TMD XI, does likewise during the second change. Inhibition Several inhibitors of VMATs are known to exist, including reserpine (RES), tetrabenazine (TBZ), dihydrotetrabenazine (DTBZOH), and ketanserin (KET). It is thought that RES exhibits competitive inhibition, binding to the same site as the monoamine substrate, as studies have shown that it can be displaced via introduction of norepinephrine. TBZ, DTBZOH, and KET are thought to exhibit non-competitive inhibition, instead binding to allosteric sites and decreasing the activity of the VMAT rather than simply blocking its substrate binding site. It has been found that these inhibitors are less effective at inhibiting VMAT1 than VMAT2, and the inhibitory effects of the tetrabenazines on VMAT1 are negligible. Clinical significance Pancreatic cancer The expression of VMAT1 in healthy endocrine cells was compared to VMAT1 expression in infants with hyperinsulinemic hypoglycemia and adults with pancreatic endocrine tumors. Through immunohistochemistry (IHC) and in situ hybridization (ISH), researchers found that VMAT1 and VMAT2 were located in mutually exclusive cell types and that VMAT2 activity disappeared in insulinomas, suggesting that the presence of only VMAT1 activity in the endocrine system may indicate this type of cancer. Digestive system VMAT1 also modulates gastrin processing in G cells. These intestinal endocrine cells process amine precursors, and VMAT1 pulls them into vesicles for storage. The activity of VMAT1 in these cells has a seemingly inhibitory effect on the processing of gastrin. Essentially, this means that certain compounds in the gut can be taken into these G cells and either amplify or inhibit the function of VMAT1, which will impact gastrin processing (conversion from G34 to G17). Additionally, VMAT1 is known to play a role in the uptake and secretion of serotonin in the gut. Enterochromaffin cells in the intestines will secrete serotonin in response to the activation of certain mechanosensors. The regulation of serotonin in the gut is critically important, as it modulates appetite and controls intestinal contraction. Protection against hypothermia The presence of VMAT1 in cells has been shown to protect them from the damaging effects of cooling and rewarming associated with hypothermia. Experiments were carried out on aortic and kidney cells and tissues. Evidence was found that an accumulation of serotonin using VMAT1 and TPH1 allowed for the subsequent release of serotonin when exposed to cold temperatures. This allows cystathionine beta-synthase (CBS)-mediated generation of H2S, which in turn protects against hypothermic damage by reducing the generation of reactive oxygen species (ROS) that can induce apoptosis. Mental disorders VMAT1 (SLC18A1) maps to a shared bipolar disorder (BPD)/schizophrenia locus on chromosome 8p21.
It is thought that disruption in the transport of monoamine neurotransmitters due to variation in the VMAT1 gene may be relevant to the etiology of these mental disorders. One study looked at a population of European descent, examining the genotypes of a bipolar group and a control group. The study confirmed expression of VMAT1 in the brain at the protein and mRNA levels, and found a significant difference between the two groups, suggesting that, at least for people of European descent, variation in the VMAT1 gene may confer susceptibility. A second study examined a population of Japanese individuals, one group healthy and the other schizophrenic. Its findings were mostly inconclusive, with some indication that variation in the VMAT1 gene may confer susceptibility to schizophrenia in Japanese women. While these studies provide some promising insight into the cause of some of the most prevalent mental disorders, it is clear that additional research will be necessary for a full understanding. References External links Amphetamine Biogenic amines Molecular neuroscience Neurotransmitter transporters Receptors Signal transduction Solute carrier family
Vesicular monoamine transporter 1
[ "Chemistry", "Biology" ]
1,880
[ "Biomolecules by chemical classification", "Biogenic amines", "Signal transduction", "Receptors", "Molecular neuroscience", "Molecular biology", "Biochemistry", "Neurochemistry" ]
5,401,178
https://en.wikipedia.org/wiki/Low-density%20lipoprotein%20receptor%20gene%20family
The low-density lipoprotein receptor gene family codes for a class of structurally related cell surface receptors that fulfill diverse biological functions in different organs, tissues, and cell types. The role that is most commonly associated with this evolutionarily ancient family is cholesterol homeostasis (maintenance of appropriate concentration of cholesterol). In humans, excess cholesterol in the blood is captured by low-density lipoprotein (LDL) and removed by the liver via endocytosis of the LDL receptor. Recent evidence indicates that the members of the LDL receptor gene family are active in the cell signalling pathways between specialized cells in many, if not all, multicellular organisms. There are seven members of the LDLR family in mammals, namely: LDLR VLDL receptor (VLDLR) ApoER2, or LRP8 Low density lipoprotein receptor-related protein 4 also known as multiple epidermal growth factor (EGF) repeat-containing protein (MEGF7) LDLR-related protein 1 LDLR-related protein 1b Megalin. Human proteins containing this domain Listed below are human proteins containing low-density lipoprotein receptor domains: Class A C6; C7; 8A; 8B; C9; CD320; CFI; CORIN; DGCR2; HSPG2; LDLR; LDLRAD2; LDLRAD3; LRP1; LRP10; LRP11; LRP12; LRP1B; LRP2; LRP3; LRP4; LRP5; LRP6; LRP8; MAMDC4; MFRP; PRSS7; RXFP1; RXFP2; SORL1; SPINT1; SSPO; ST14; TMPRSS4; TMPRSS6; TMPRSS7; TMPRSS9 (serase-1B); VLDLR; Class B EGF; LDLR; LRP1; LRP10; LRP1B; LRP2; LRP4; LRP5; LRP5L; LRP6; LRP8; NID1; NID2; SORL1; VLDLR; See also Soluble low-density lipoprotein receptor-related protein (sLRP) - impaired function is related to Alzheimer's disease. Structure The members of the LDLR family are characterized by distinct functional domains present in characteristic numbers. These modules are: LDL receptor type A (LA) repeats of 40 residues each, displaying a triple-disulfide-bond-stabilized negatively charged surface; certain head-to-tail combinations of these repeats are believed to specify ligand interactions; LDL receptor type B repeats, also known as EGF precursor homology regions, containing EGF-like repeats and YWTD beta propeller domains; a transmembrane domain, and the cytoplasmic region with (a) signal(s) for receptor internalization via coated pits, containing the consensus tetrapeptide Asn-Pro-Xaa-Tyr (NPxY). This cytoplasmic tail controls both endocytosis and signaling by interacting with the phosphotyrosine binding (PTB) domain-containing proteins. In addition to these domains which can be found in all receptors of the gene family, LDL receptor and certain isoforms of ApoER2 and VLDLR contain a short region which can undergo O-linked glycosylation, known as O-linked sugar domain. ApoER2 moreover, can harbour a cleavage site for the protease furin between type A and type B repeats which enables production of a soluble receptor fragment by furin-mediated processing. References External links Schematic representation of the seven mammalian LDL receptor (LDLR) family members LDL receptor family members Receptors Protein families Signal transduction Neurophysiology
Low-density lipoprotein receptor gene family
[ "Chemistry", "Biology" ]
824
[ "Protein classification", "Signal transduction", "Receptors", "Biochemistry", "Protein families", "Neurochemistry" ]
5,401,424
https://en.wikipedia.org/wiki/Cadmium%20nitrate
Cadmium nitrate describes any of the related members of a family of inorganic compounds with the general formula Cd(NO3)2·xH2O, the most commonly encountered form being the tetrahydrate. The anhydrous form is volatile, but the others are colourless crystalline solids that are deliquescent, tending to absorb enough moisture from the air to form an aqueous solution. Like other cadmium compounds, cadmium nitrate is known to be carcinogenic. According to X-ray crystallography, the tetrahydrate features octahedral Cd2+ centers bound to six oxygen ligands. Uses Cadmium nitrate is used for coloring glass and porcelain and as a flash powder in photography. Preparation Cadmium nitrate is prepared by dissolving cadmium metal or its oxide, hydroxide, or carbonate in nitric acid, followed by crystallization; for the oxide, the idealized equation is: CdO + 2 HNO3 → Cd(NO3)2 + H2O Reactions Thermal dissociation at elevated temperatures produces cadmium oxide and oxides of nitrogen. When hydrogen sulfide is passed through an acidified solution of cadmium nitrate, yellow cadmium sulfide is formed. A red modification of the sulfide is formed under boiling conditions. When treated with sodium hydroxide, solutions of cadmium nitrate yield a solid precipitate of cadmium hydroxide. Many insoluble cadmium salts are obtained by such precipitation reactions. References External links Cadmium compounds Nitrates Deliquescent materials IARC Group 1 carcinogens
Cadmium nitrate
[ "Chemistry" ]
287
[ "Oxidizing agents", "Salts", "Nitrates", "Deliquescent materials" ]
5,401,463
https://en.wikipedia.org/wiki/Neuroproteomics
Neuroproteomics is the study of the protein complexes and species that make up the nervous system. These proteins interact in ways that wire neurons together, creating the intricate connectivity the nervous system is known for. Neuroproteomics is a complex field that has a long way to go in terms of profiling the entire neuronal proteome. It is a relatively recent field that has many applications in therapy and science. So far, only small subsets of the neuronal proteome have been mapped, and even then mostly for the proteins involved in the synapse. History Origins The word proteomics was first used in 1994 by Marc Wilkins as the study of "the protein equivalent of a genome". A proteome is defined as all of the proteins expressed in a biological system under specific physiologic conditions at a certain point in time. It can change with any biochemical alteration, and so it can only be defined under certain conditions. Neuroproteomics is a subset of this field dealing with the complexities and multi-system origin of neurological disease. Neurological function is based on the interactions of many proteins of different origin, and so requires a systematic study of subsystems within its proteomic structure. Modern times Neuroproteomics has the difficult task of defining on a molecular level the pathways of consciousness, senses, and self. Neurological disorders are unique in that they do not always exhibit outward symptoms. Defining the disorders becomes difficult, and so neuroproteomics is a step toward identifying biomarkers that can be used to detect diseases. Not only does the field have to map out the different proteins possible from the genome, but there are also many modifications that happen after translation that affect function. Because neurons are such dynamic structures, changing with every action potential that travels through them, neuroproteomics offers the most potential for mapping out the molecular template of their function. Genomics offers a static roadmap of the cell, while proteomics can offer a glimpse into structures smaller than the cell because it is specific to each moment in time. Mechanisms of Use Protein Separation In order for neuroproteomics to function correctly, proteins must be separated in terms of the proteome from which they came. For example, one set might be under normal conditions, while another might be under diseased conditions. Proteins are commonly separated using two-dimensional polyacrylamide gel electrophoresis (2D PAGE). For this technique, proteins are run across an immobilized pH-gradient gel until they stop at the point where their net charge is neutral. After the proteins are separated by charge in one direction, sodium dodecyl sulfate is run in the other direction to separate them by size. A two-dimensional map is created using this technique that can be used to match additional proteins later. In simple proteomics, one can usually match a protein to its function by identifying it on a 2D PAGE map, because many intracellular somatic pathways are known. In neuroproteomics, however, many proteins combine to give an end result that may be neurological disease or breakdown. It is then necessary to study each protein individually and find correlations between the different proteins to determine the cause of a neurological disease. New techniques are being developed that can identify proteins once they are separated out using 2D PAGE.
Protein Identification Protein separation techniques, such as 2D PAGE, are limited in that they cannot handle very high or low molecular weight protein species. Alternative methods have been developed to deal with such cases. These include liquid chromatography mass spectrometry along with sodium dodecyl sulfate polyacrylamide gel electrophoresis, or liquid chromatography mass spectrometry run in multiple dimensions. Compared to simple 2D PAGE, liquid chromatography mass spectrometry can handle a larger range of protein sizes, but it is limited in the amount of protein sample it can handle at once. Liquid chromatography mass spectrometry is also limited in that it lacks a reference map to work from. Complex algorithms are usually used to analyze the fringe results that occur after a procedure is run; however, the unknown portions of the protein species are usually set aside in favor of familiar proteomes. This fact reveals a fault with current technology; new techniques are needed to increase both the specificity and scope of proteome mapping. Applications Drug Addiction It is commonly known that drug addiction involves permanent synaptic plasticity of various neuronal circuits. Neuroproteomics is being applied to study the effect of drug addiction across the synapse. Research is being conducted by isolating distinct regions of the brain in which synaptic transmission takes place and defining the proteome for that particular region. Different stages of drug abuse must be studied, however, in order to map out the progression of protein changes along the course of the drug addiction. These stages include enticement, ingesting, withdrawal, addiction, and removal. It begins with the changes in gene transcription that occur due to the abuse of drugs. It continues by identifying the proteins most likely to be affected by the drugs and focusing on that area. For drug addiction, the synapse is the most likely target as it involves communication between neurons. Lack of sensory communication in neurons is often an outward sign of drug abuse, and so neuroproteomics is being applied to find out which proteins are affected in ways that prevent the transport of neurotransmitters. In particular, the vesicle-releasing process is being studied to identify the proteins involved in the synapse during drug abuse. Proteins such as synaptotagmin and synaptobrevin interact to fuse vesicles with the membrane. Phosphorylation also has its own set of proteins involved that work together to allow the synapse to function properly. Drugs such as morphine change properties such as cell adhesion, neurotransmitter volume, and synaptic traffic. After significant morphine application, tyrosine kinases receive less phosphorylation and thus send fewer signals into the cell. These receptor proteins are unable to initiate the intracellular signaling processes that enable the neuron to live, and necrosis or apoptosis may follow. With more and more neurons affected along this chain of cell death, permanent loss of sensory or motor function may be the result. By identifying the proteins that are changed with drug abuse, neuroproteomics may give clinicians even earlier biomarkers to test for to prevent permanent neurological damage. Recently, the term psychoproteomics was coined by University of Florida researchers from Dr. Mark S. Gold's lab. Kobeissy et al.
defined psychoproteomics as an integrative proteomics approach dedicated to studying proteomic changes in psychiatric disorders, particularly substance- and drug-abuse neurotoxicity. Brain Injury Traumatic brain injury is defined as a "direct physical impact or trauma to the head followed by a dynamic series of injury and repair events". Recently, neuroproteomics has been applied to studying the disability that over 5.4 million Americans live with. In addition to physically injuring the brain tissue, traumatic brain injury induces the release of glutamate that interacts with ionotropic glutamate receptors (iGluRs). These glutamate receptors acidify the surrounding intracranial fluid, causing further injury on the molecular level to nearby neurons. The death of the surrounding neurons is induced through normal apoptosis mechanisms, and it is this cycle that is being studied with neuroproteomics. Three different cysteine protease derivatives are involved in the apoptotic pathway induced by the acidic environment triggered by glutamate. These cysteine proteases include calpain, caspase, and cathepsin. These three proteins are examples of detectable signs of traumatic brain injury that are much more specific than temperature, oxygen level, or intracranial pressure. Proteomics thus also offers a tracking mechanism by which researchers can monitor the progression of traumatic brain injury, or a chronic disease such as Alzheimer's or Parkinson's. Especially in Parkinson's, in which neurotransmitters play a large role, recent proteomic research has involved the study of synaptotagmin. Synaptotagmin is involved in the calcium-induced fusion of neurotransmitter-containing vesicles with the presynaptic membrane. By studying the intracellular mechanisms involved in neural apoptosis after traumatic brain injury, researchers can create a map that genetic changes can follow later on. Nerve Growth One group of researchers applied the field of neuroproteomics to examine how different proteins affect the initial growth of neurites. The experiment compared the protein activity of control neurons with the activity of neurons treated with nerve growth factor (NGF) and JNJ460, an "immunophilin ligand." JNJ460 is a derivative of another drug that is used to prevent immune attack when organs are transplanted. It is not an immunosuppressant, however, but rather it acts as a shield against microglia. NGF promotes neuron viability and differentiation by binding to TrkA, a receptor tyrosine kinase. This receptor is important in initiating intracellular metabolic pathways, including Ras, Raf, and MAP kinase. Protein changes were measured in each cell sample with and without treatment by NGF and JNJ460. A peptide mixture was made by washing off unbound portions of the amino acid sequence in a reversed-phase column, and the resulting mixture was then suspended in a bath of cation-exchange fluid. The proteins were identified by digesting them with trypsin and then searching the spectra produced by passing the product through a mass spectrometer. This applies a form of liquid chromatography mass spectrometry to identify proteins in the mixture. JNJ460 treatment resulted in an increase in "signal transduction" proteins, while NGF resulted in an increase in proteins associated with the ribosome and synthesis of other proteins. JNJ460 also resulted in more structural proteins associated with cellular growth, such as actin, myosin, and troponin.
With NGF treatment, cells increased protein synthesis and the creation of ribosomes. This method allows the analysis of all of the protein patterns overall, rather than a single change in an amino acid. Western blots confirmed the results, according to the researchers, though the changes in proteins were not as obvious in their protocol. The main significance of these findings is that JNJ460 and NGF act through distinct processes that both control the protein output of the cell. JNJ460 resulted in increased neuronal size and stability while NGF resulted in increased membrane proteins. When combined, they significantly increase a neuron's chance of growth. While JNJ460 may "prime" some parts of the cell for NGF treatment, the two do not work through the same pathway. JNJ460 is thought to interact with Schwann cells in regenerating actin and myosin, which are key players in axonal growth. NGF helps the neuron grow as a whole. These two factors do not play a part in communication with other neurons, however. They merely increase the size of the membrane down which a signal can be sent. Other neurotrophic factor proteomes are needed to guide neurons to each other to create synapses. Limitations The sheer number of raw neuronal proteins available to map requires that initial studies be focused on small areas of the neuron. When taking samples, there are a few places that interest neurologists most. The most important place to start for neurologists is the plasma membrane. This is where most of the communication between neurons takes place. The proteins being mapped here include ion channels, neurotransmitter receptors, and molecule transporters. Along the plasma membrane, the proteins involved in creating cholesterol-rich lipid rafts are being studied because they have been shown to be crucial for glutamate uptake during the initial stages of neuron formation. As mentioned before, vesicle proteins are also being studied closely because they are involved in disease. Collecting samples to study, however, requires special consideration to ensure that the reproducibility of the samples is not compromised. When taking a global sample of one area of the brain, for example, proteins that are ubiquitous and relatively unimportant show up very clearly in the SDS PAGE. Other unexplored, more specific proteins barely show up and are therefore ignored. It is usually necessary to divide up the plasma membrane proteome, for example, into subproteomes characterized by specific function. This allows these more specific classes of peptides to show up more clearly. In a way, dividing into subproteomes is simply applying a magnifying lens to a specific section of a global proteome's SDS PAGE map. This method seems to be most effective when applied to each cellular organelle separately. Mitochondrial proteins, for example, which are more effective at transporting electrons across their membrane, can be specifically targeted in order to match their electron-transporting ability to their amino acid sequences. References Bibliography Alzate O. "Neuroproteomics." Frontiers in Neuroscience Series (October 2010) C.R.C. Press. Abul-Husn, Noura S., Lakshmi A. Devi. "Neuroproteomics of the Synapse and Drug Addiction." The Journal of Pharmacology and Experimental Therapeutics 138 (2006): 461-468. Becker, Michael, Jens Schindler, Hans G. Nothwang. "Neuroproteomics - the Tasks Lying Ahead." Electrophoresis 27 (2006): 2819-2829. Butcher, James. "Neuroproteomics Comes of Age." The Lancet Neurology 6 (2007): 851-852.
Kim, Sandra I., Hans Voshol, Jan van Oostrum, Terri G. Hastings, Michael Casico, Marc J. Glucksmann. “Neuroproteomics: Expression Profiling of the Brain’s Proteomes in Health and Disease.” Neurochemical Research 29 (2004): 1317-1331 Kobeissy, Firas H., Andrew K. Ottens, Zhiqun Zhang, Ming Cheng Liu, Nancy D. Denslow, Jitendra R. Dave, Frank C. Tortella, Ronald L. Hayes, Kevin K. Wang. "Novel Differential Neuroproteomics Analysis of Traumatic Brain Injury in Rats." Molecular & Cellular Proteomics 5 (2006): 1887-1898. Liu, Tong, Veera D'mello, Longwen Deng, Jun Hu, Michael Ricardo, Sanqiang Pan, Xiaodong Lu, Scott Wadsworth, John Siekierka, Raymond Birge, Hong Li. "A Multiplexed Proteomics Approach to Differentiate Neurite Outgrowth Patterns." Journal of Neuroscience Methods 158 (2006): 22-29. Ottens, Andrew K., Firas H. Kobeissy, Erin C. Golden, Zhiqun Zhang, William E. Haskins, Su-Shing Chen, Ronald L. Hayes, Kevin K. Wang, Nancy D. Denslow. "Neuroproteomics in Neurotrauma." Mass Spectrometry Reviews 25 (2006): 380-406. Ottens, Andrew K. "The methodology of neuroproteomics." Methods Mol Biol. (2009) 566:1-21. Southey, Bruce R., Andinet Amare, Tyler A. Zimmerman, Sandra L. Rodriguez, Jonathan V. Sweedler. "NeuroPred: a Tool to Predict Cleavage Sites in Neuropeptide Precursors and Provide the Masses of the Resulting Peptides." Nucleic Acids Research 34 (2006): 267-272. Tribl, F, K Marcus, G Bringmann, H.E. Meyer, M Gerlach, P Riederer. "Proteomics of the Human Brain: Sub-Proteomes Might Hold the Key to Handle Brain Complexity." Journal of Neural Transmission 113 (2006): 1041-1054. Williams, Kenneth, Terence Wu, Christopher Colangelo, Angus C. Nairn. "Recent Advances in Neuroproteomics and Potential Application to Studies of Drug Addiction." Neuropharmacology 47 (2004): 148-166. Kobeissy, Firas H., Sadasivan S, Liu J, Mark S Gold, Kevin K. Wang. "Psychiatric research: psychoproteomics, degradomics and systems biology." Expert Rev Proteomics 5 (2008): 293-314. Neurochemistry Proteomics
Neuroproteomics
[ "Chemistry", "Biology" ]
3,533
[ "Biochemistry", "Neurochemistry" ]
5,401,557
https://en.wikipedia.org/wiki/Iron%28III%29%20nitrate
Iron(III) nitrate, or ferric nitrate, is the name used for a series of inorganic compounds with the formula Fe(NO3)3·nH2O. Most common is the nonahydrate Fe(NO3)3·9H2O. The hydrates are all pale colored, water-soluble paramagnetic salts. Hydrates Iron(III) nitrate is deliquescent, and it is commonly found as the nonahydrate Fe(NO3)3·9H2O, which forms colourless to pale violet crystals. This compound is the trinitrate salt of the aquo complex [Fe(H2O)6]3+. Other hydrates Fe(NO3)3·xH2O include: tetrahydrate (x=4), more precisely triaquadinitratoiron(III) nitrate monohydrate, which has complex cations wherein Fe3+ is coordinated with two nitrate anions as bidentate ligands and three of the four water molecules, in a pentagonal bipyramidal configuration with two water molecules at the poles. pentahydrate (x=5), more precisely penta-aquanitratoiron(III) dinitrate, in which the Fe3+ ion is coordinated to five water molecules and a unidentate nitrate anion ligand in octahedral configuration. hexahydrate (x=6), more precisely hexaaquairon(III) trinitrate, where the Fe3+ ion is coordinated to six water molecules in octahedral configuration. Reactions Iron(III) nitrate is a useful precursor to other iron compounds because the nitrate is easily removed or decomposed. It is, for example, a standard precursor to potassium ferrate. When dissolved, iron(III) nitrate forms yellow solutions. When this solution is heated to near boiling, nitric acid evaporates and a solid precipitate of iron(III) oxide appears. Another method for producing iron oxides from this nitrate salt involves neutralizing its aqueous solutions. Preparation The compound can be prepared by treating iron metal powder with nitric acid, as summarized by the following idealized equation: Fe + 4 HNO3 → Fe(NO3)3 + NO + 2 H2O Applications Ferric nitrate has no large-scale applications. It is a catalyst for the synthesis of sodium amide from a solution of sodium in ammonia: 2 Na + 2 NH3 → 2 NaNH2 + H2 Certain clays impregnated with ferric nitrate have been shown to be useful oxidants in organic synthesis. For example, ferric nitrate on montmorillonite (a reagent called Clayfen) has been employed for the oxidation of alcohols to aldehydes and thiols to disulfides. Ferric nitrate solutions are used by jewelers and metalsmiths to etch silver and silver alloys. References Iron(III) compounds Nitrates Deliquescent materials Oxidizing agents
Iron(III) nitrate
[ "Chemistry" ]
607
[ "Redox", "Nitrates", "Salts", "Oxidizing agents", "Deliquescent materials" ]
5,401,558
https://en.wikipedia.org/wiki/Cleaner%20production
Cleaner production is a preventive, company-specific environmental protection initiative. It is intended to minimize waste and emissions and maximize product output. By analysing the flow of materials and energy in a company, one tries to identify options to minimize waste and emissions from industrial processes through source reduction strategies. Improvements in organisation and technology help reduce material and energy use or suggest better choices, and help avoid waste, wastewater, gaseous emissions, waste heat, and noise. Overview The concept was developed during the preparation of the Rio Summit as a programme of UNEP (United Nations Environmental Programme) and UNIDO (United Nations Industrial Development Organization) under the leadership of Jacqueline Aloisi de Larderel, the former Assistant Executive Director of UNEP. The programme was meant to reduce the environmental impact of industry. It built on ideas used by the company 3M in its 3P programme (pollution prevention pays). It has found more international support than all other comparable programmes. The programme's idea was described as "...to assist developing nations in leapfrogging from pollution to less pollution, using available technologies". Starting from the simple idea of producing with less waste, cleaner production developed into a concept for increasing the resource efficiency of production in general. UNIDO has been operating National Cleaner Production Centers and Programmes (NCPCs/NCPPs) with centres in Latin America, Africa, Asia and Europe. Cleaner production is endorsed by UNEP's International Declaration on Cleaner Production, "a voluntary and public statement of commitment to the practice and promotion of Cleaner Production". Implementing guidelines for cleaner production were published by UNEP in 2001. In the US, the term pollution prevention is more commonly used for cleaner production. Options Examples of cleaner production options are: Documentation of consumption (as a basic analysis of material and energy flows, e.g. with a Sankey diagram) Use of indicators and controlling (to identify losses from poor planning, poor education and training, and mistakes) Substitution of raw materials and auxiliary materials (especially renewable materials and energy) Increase of the useful life of auxiliary materials and process liquids (by avoiding drag-in, drag-out, and contamination) Improved control and automation Reuse of waste (internal or external) New, low-waste processes and technologies Initiatives One of the first European initiatives in cleaner production was started in Austria in 1992 by the BMVIT (Bundesministerium für Verkehr, Innovation und Technologie). This resulted in two initiatives: "Prepare" and EcoProfit. The "PIUS" initiative was founded in Germany in 1999. Since 1994, the United Nations Industrial Development Organization has operated the National Cleaner Production Centre Programme with centres in Central America, South America, Africa, Asia, and Europe.
See also Cradle-to-cradle design Energy conservation Environmental management Environmental Quality Management Green design Industrial ecology ISO 9001 ISO 14001 Source reduction Sustainability Total quality management Waste minimisation Clean Production Agreement References Bibliography Fresner, J., Bürki, T., Sittig, H., Ressourceneffizienz in der Produktion - Kosten senken durch Cleaner Production, Symposion Publishing, 2009 Organisation for Economic Co-operation and Development (OECD) (ed.): Technologies for Cleaner Production and Products - Towards Technological Transformation for Sustainable Development. Paris: OECD, 1995 Pauli, G., From Deep Ecology to The Blue Economy, 2011, ZERI Schaltegger, S.; Bennett, M.; Burritt, R. & Jasch, C.: Environmental Management Accounting as a Support for Cleaner Production, in: Schaltegger, S.; Bennett, M.; Burritt, R. & Jasch, C. (Eds): Environmental Management Accounting for Cleaner Production. Dordrecht: Springer, 2008, 3-26 External links Cleaner Production by sectors Clean Production Council Chile Official site of the National Service that promotes Cleaner Production in that country. Journal of Cleaner Production National Pollution Prevention Roundtable Finds P2 Programs Effective (article) Pollution prevention in China Pollution prevention directory: TURI - Toxics Use Reduction Institute United States National Pollution Prevention Information Center United States Pollution Prevention Regional Information Center Waste minimisation Environmental engineering Industrial ecology Waste management concepts
Cleaner production
[ "Chemistry", "Engineering" ]
876
[ "Chemical engineering", "Industrial engineering", "Civil engineering", "Environmental engineering", "Industrial ecology" ]
5,401,806
https://en.wikipedia.org/wiki/Crystal%20Dam
Crystal Dam is a double-curvature, thin-arch concrete dam located 6 miles downstream from Morrow Point Dam on the Gunnison River in Colorado, United States. Crystal Dam is the newest of the three dams in Curecanti National Recreation Area; construction on the dam was finished in 1976. The dam impounds Crystal Reservoir. Crystal Dam and Reservoir are part of the Bureau of Reclamation's Wayne N. Aspinall Unit of the Colorado River Storage Project, which retains the waters of the Gunnison River and its tributaries for agricultural and municipal use in the American Southwest. The dam's primary purpose is hydroelectric power generation. Description Crystal Dam, like the higher Morrow Point Dam farther upstream, is a thin-shell arch dam, primarily planned to generate hydroelectric power. Unlike at its upstream companions, excess water spills over the top of the dam through a notched-out, ungated spillway that can create a waterfall in times of overflow. Under normal conditions the river flows through a penstock to the 28 MW turbine. The dam is deep within the Black Canyon of the Gunnison in pre-Cambrian metamorphic rock. History Crystal Dam was the last of the three dams in the Aspinall Unit of the Colorado River Storage Project to be completed. Crystal Dam's design and construction lagged behind Morrow Point and Blue Mesa dams. Construction started in 1964 on a materials borrow pit, with construction at the damsite beginning in 1965 for an access road and exploratory drilling. Work then stopped for five years. Initially planned as an earth-fill dam, the design was changed to a double-curvature, thin-shell concrete arch dam. After an initial bidding process in which all bids were rejected as too high, a contract for the diversion tunnel was awarded in 1972, which was holed through the same year. The construction contract for the dam itself was awarded to the J.F. Shea Company in June 1973. Cofferdam work continued into 1974, encountering problems with leakage through the upstream cofferdam; wells were drilled below the cofferdam to intercept water. In the meantime, the dam foundation was excavated, with first concrete placement in June. Excavation and concrete work for the powerplant started the same year. Concrete work stopped in November and resumed in April 1975. Work was behind schedule; the dam was supposed to be completed by December 1975. After another winter shutdown, concrete work resumed in April 1976, with final completion of the dam structure on August 30, 1976. Filling operations in the reservoir began on March 14, 1977, with the diversion tunnel permanently blocked on April 12. The powerplant was not completed until 1978, the victim of a fire in the contractor's warehouse that destroyed many electrical components intended for the plant. Because of Crystal Dam's then-new design, and as a result of the failure of the contemporary Teton Dam in 1976, Crystal Dam was inspected in 1978 by divers to verify the integrity of the structure. References External links Crystal Dam at the Bureau of Reclamation Crystal Powerplant at the Bureau of Reclamation Wayne N. Aspinall Storage Unit at Curecanti National Recreation Area Dams in Colorado Buildings and structures in Montrose County, Colorado Arch dams United States Bureau of Reclamation dams Hydroelectric power plants in Colorado Colorado River Storage Project Curecanti National Recreation Area Dams completed in 1977 Energy infrastructure completed in 1977 Dams on the Gunnison River 1977 establishments in Colorado
Crystal Dam
[ "Engineering" ]
681
[ "Colorado River Storage Project" ]
5,402,159
https://en.wikipedia.org/wiki/Watoga%20State%20Park
Watoga State Park is a state park located near Seebert in Pocahontas County, West Virginia. It is the largest of West Virginia's state parks. Nearby parks include the Greenbrier River Trail, which is adjacent to the park, Beartown State Park, and Droop Mountain Battlefield State Park. Also immediately adjacent to the park is the 9,482-acre Calvin Price State Forest. The park has one of the darkest night skies of any West Virginia state park. History Watoga State Park's name comes from the Cherokee word for "starry waters." The land that forms the nucleus of Watoga was originally acquired in January 1925, when the park was initially planned to be a state forest. In May 1934, a decision was made to instead develop the site as a state park. Much of the development on the site was done by the Civilian Conservation Corps (CCC), and the park first opened on July 1, 1937. Development of the park stopped during World War II, but work resumed after the war: the first camping area opened in 1953, and eight deluxe cabins opened in 1956. Recreational use of the park increased during the 1960s and 1970s, requiring the addition of another camping area. Today, the park is supported by the Watoga State Park Foundation, which promotes the recreation, conservation, ecology, history, and natural resources of the park. New Deal Resources in Watoga State Park Historic District The New Deal Resources in Watoga State Park Historic District is a national historic district encompassing 59 contributing buildings, 35 contributing structures, 2 contributing sites, and 11 contributing objects. They include water fountains; trails; a swimming pool; a reservoir; rental cabins; and picnic shelters; as well as a former CCC camp. The park is the site of the Fred E. Brooks Memorial Arboretum, a 400-acre arboretum that encompasses the drainage of Two Mile Run. Named in honor of Fred E. Brooks, a noted West Virginia naturalist who died in 1933, the Arboretum's construction began about 1935 and a dedication was held in 1938. It was listed on the National Register of Historic Places in 2010. Features 34 cabins 2 campgrounds with 88 total campsites (50 with electricity) Swimming pool A fishing lake with boat rentals 37.5 miles of hiking trails Brooks Memorial Arboretum Ann Bailey Lookout Tower Greenbrier River Trail CCC Museum Picnic areas Hiking Trails Watoga State Park has many hiking trails to choose from that vary in length and difficulty. A small list of these trails includes Allegheny Trail Ann Bailey Trail Arrowhead Trail Bearpen Trail Brooks Memorial Arboretum Trails Buck and Doe Trail Burnside Ridge Trail Honeymoon Trail Jesse's Cove Trail Kennison Run Trail Lake Trail Monongaseneka Trail North Boundary Trail Pine Run Trail T. M. Cheek Trail Ten Acre Trail South Burnside Trail These trails are regularly maintained by the Watoga State Park Foundation.
See also List of West Virginia state parks State park References External links West Virginia CCC information An entry by the International Dark-Sky Association National Register of Historic Places in Pocahontas County, West Virginia Historic districts in Pocahontas County, West Virginia History of West Virginia State parks of West Virginia Protected areas of Pocahontas County, West Virginia IUCN Category V Protected areas established in 1934 Civilian Conservation Corps in West Virginia Campgrounds in West Virginia Parks on the National Register of Historic Places in West Virginia Dark-sky preserves in the United States West Virginia placenames of Native American origin
Watoga State Park
[ "Astronomy" ]
714
[ "Dark-sky preserves in the United States", "Dark-sky preserves" ]
5,402,229
https://en.wikipedia.org/wiki/Vildagliptin
Vildagliptin, sold under the brand name Galvus and others, is an oral anti-hyperglycemic agent (anti-diabetic drug) of the dipeptidyl peptidase-4 (DPP-4) inhibitor class of drugs. Vildagliptin inhibits the inactivation of GLP-1 and GIP by DPP-4, allowing GLP-1 and GIP to potentiate the secretion of insulin in the beta cells and suppress glucagon release by the alpha cells of the islets of Langerhans in the pancreas. It was approved by the EMA in 2007. Vildagliptin has been shown to reduce hyperglycemia in type 2 diabetes mellitus. Combination with metformin The European Medicines Agency has also approved a combination of vildagliptin and metformin, vildagliptin/metformin (Eucreas by Novartis), as an oral treatment for type 2 diabetes. Adverse effects Adverse effects observed in clinical trials include nausea, hypoglycemia, tremor, headache and dizziness. Rare cases of hepatotoxicity have been reported. There have been case reports of pancreatitis associated with DPP-4 inhibitors. A group at UCLA reported increased pre-cancerous pancreatic changes in rats and in human organ donors who had been treated with DPP-4 inhibitors. In response to these reports, the United States FDA and the European Medicines Agency each undertook independent reviews of all clinical and preclinical data related to the possible association of DPP-4 inhibitors with pancreatic cancer. In a joint letter to the New England Journal of Medicine, the agencies stated that "Both agencies agree that assertions concerning a causal association between incretin-based drugs and pancreatitis or pancreatic cancer, as expressed recently in the scientific literature and in the media, are inconsistent with the current data. The FDA and the EMA have not reached a final conclusion at this time regarding such a causal relationship. Although the totality of the data that have been reviewed provides reassurance, pancreatitis will continue to be considered a risk associated with these drugs until more data are available; both agencies continue to investigate this safety signal." See also Development of dipeptidyl peptidase-4 inhibitors Dipeptidyl peptidase-4 (CD26) References External links Dipeptidyl peptidase-4 inhibitors Pyrrolidines Nitriles Drugs developed by Novartis Carboxamides Adamantanes Tertiary alcohols
Vildagliptin
[ "Chemistry" ]
545
[ "Nitriles", "Functional groups" ]
5,402,608
https://en.wikipedia.org/wiki/Cooling%20vest
A cooling vest is a piece of specially made clothing designed to lower or stabilize body temperature and make exposure to warm climates or environments more bearable. Cooling vests are used by many athletes, construction workers, and welders, as well as individuals with multiple sclerosis, hypohidrotic ectodermal dysplasia, or various types of sports injuries. Types Cooling vests range in weight from around 1 to 3.5 kg, depending on the model. While many subtypes do exist, cooling vests fall into one of five primary types: Evaporative cooling vests are typically submerged in water for around 3–5 minutes and lightly wrung out or blot dried. They are usually worn outside the clothing; as the water in the vest interacts with specially treated cooling crystals or other cooling agents, it evaporates, which reduces body temperature. They are lightweight, easy to use, and require no electricity, making them well suited to people on the move. They are also the most affordable form of cooling vest. Ice chilled cooling vests make use of cooling energy packs that are activated in a freezer and then placed in pockets inside the cooling vest. Because they are very cold to the touch, this type of cooling vest is always worn outside the clothes. A phase change material (PCM) cooling vest makes use of cooling packs that maintain much higher temperatures when refrigerated, frozen, or placed in water. These phase-change packs often contain liquids (typically nontoxic oils and fats) that solidify (like wax), typically between 55 and 65 degrees Fahrenheit (13 and 18 °C); the solidified packs then absorb body heat as they melt, helping to reduce the wearer's temperature for between about 1.5 and 4 hours (a rough duration estimate appears after this list). A cool flow cooling vest makes use of a water flow system that pumps water through the vest using hoses. A thermoelectric cooling vest works on the Peltier effect and cools down the inner surface of the vest. It is powered by a portable battery.
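The quoted PCM duration can be sanity-checked with a simple latent-heat calculation. The sketch below is a back-of-envelope estimate, not manufacturer data; the pack mass, latent heat, and the fraction of body heat reaching the vest are all illustrative assumptions.

```python
# Rough cooling-duration estimate for a PCM vest (illustrative figures only).
pcm_mass_kg = 2.0                # assumed mass of phase-change material
latent_heat_kj_per_kg = 200.0    # assumed latent heat of fusion for the PCM
body_heat_w = 150.0              # moderate-activity metabolic heat output
fraction_into_vest = 0.5         # assume half the body's heat reaches the vest

stored_energy_j = pcm_mass_kg * latent_heat_kj_per_kg * 1000
duration_hours = stored_energy_j / (body_heat_w * fraction_into_vest) / 3600
print(f"Estimated cooling duration: {duration_hours:.1f} h")  # about 1.5 h
```

With these assumed figures the estimate lands near the lower end of the 1.5–4 hour range quoted above; a heavier pack or a lower activity level pushes the duration toward the upper end.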
Uses The effect of cooling vests on athletic performance has been evaluated on several occasions; at the 2004 Summer Olympics several Americans and Australians were fitted with cooling vests supplied by Nike, used prior to their events. Cooling vests are also used by persons with multiple sclerosis. In multiple sclerosis, nerve fibers become demyelinated, which leads to pain and discomfort when temperature is elevated. Nerve fibers may also be remyelinating, or in the process of repairing themselves, and still be sensitive to elevated temperatures. The cooling vest keeps the patient's temperature down, reducing the pain symptoms. In 2005, a 12-week study at the University of Buffalo was funded by the National Institute on Disability and Rehabilitation Research, a division of the U.S. Department of Education, to determine if people with multiple sclerosis could exercise longer with the help of a cooling vest. Cooling vests are also used by large workforces in industrial markets from construction to oil and gas. In 2018, the Supreme Committee for Delivery & Legacy developed a state-of-the-art cooling suit using evaporative cooling technology to help its 30,000-strong workforce complete the building of the Qatar 2022 World Cup stadiums. This is believed to be the largest and most significant deployment of cooling workwear, delivered in consultation with TechNiche UK. References Sports equipment Sports technology Medical equipment Vests
Cooling vest
[ "Biology" ]
701
[ "Medical equipment", "Medical technology" ]
5,402,671
https://en.wikipedia.org/wiki/Pierre%20Scerri
Pierre Scerri is a French telecommunications engineer and model builder who gained fame in 1998 after his highly accurate 1:3 scale model of a Ferrari 312 PB was featured on the BBC television programme Jeremy Clarkson's Extreme Machines. He began the project in 1978, out of a desire to have a working Ferrari in his dining room. Pierre Bardinon, owner of the Mas du Clos race track, allowed Scerri to take detailed photographs of the actual car on display at the adjacent Ferrari museum. Based on those photographs, he drafted the schematics and made the molds for all parts of the model, a process which took 15 years. In 1989, he finally completed assembly of the engine, a scale replica of the flat-12 engine found in the 312 PB. He reportedly took extra time tuning the engine so that it would sound like the full-scale original. The project was finally completed in December 1992. Scerri is now working on three new models, a Ferrari 330 P4, another Ferrari 312 PB and an engine for a Ferrari 250 GTO, all 1:3 scale. References External links Pierre Scerri's website Fine Art Models (through web.archive.org) YouTube Video of Pierre Scerri's appearance on Jeremy Clarkson's Extreme Machines French engineers Living people Year of birth missing (living people) Scale modeling Model makers
Pierre Scerri
[ "Physics" ]
289
[ "Model makers", "Scale modeling" ]
5,402,932
https://en.wikipedia.org/wiki/Megaspore
Megaspores, also called macrospores, are a type of spore that is present in heterosporous plants. These plants have two spore types, megaspores and microspores. Generally speaking, the megaspore, or large spore, germinates into a female gametophyte, which produces egg cells. These are fertilized by sperm produced by the male gametophyte developing from the microspore. Heterosporous plants include seed plants (gymnosperms and flowering plants), water ferns (Salviniales), spikemosses (Selaginellaceae) and quillworts (Isoetaceae). Megasporogenesis In gymnosperms and flowering plants, the megaspore is produced inside the nucellus of the ovule. During megasporogenesis, a diploid precursor cell, the megasporocyte or megaspore mother cell, undergoes meiosis to produce initially four haploid cells (the megaspores). Angiosperms exhibit three patterns of megasporogenesis: monosporic, bisporic, and tetrasporic, also known as the Polygonum type, the Alisma type, and the Drusa type, respectively. The monosporic pattern occurs most frequently (>70% of angiosperms) and is found in many economically and biologically important groups such as Brassicaceae (e.g., Arabidopsis, Capsella, Brassica), Gramineae (e.g., maize, rice, wheat), Malvaceae (e.g., cotton), Leguminosae (e.g., beans, soybean), and Solanaceae (e.g., pepper, tobacco, tomato, potato, petunia). This pattern is characterized by cell plate formation after both meiosis I and meiosis II, which results in four one-nucleate megaspores, of which three degenerate. The bisporic pattern is characterized by cell plate formation only after meiosis I, and results in two two-nucleate megaspores, of which one degenerates. The tetrasporic pattern is characterized by cell plates failing to form after either meiosis I or meiosis II, and results in one four-nucleate megaspore. Therefore, each pattern gives rise to a single functional megaspore which contains one, two, or four meiotic nuclei, respectively. The megaspore then undergoes megagametogenesis to give rise to the female gametophyte. Megagametogenesis After megasporogenesis, the megaspore develops into the female gametophyte (the embryo sac) in a process called megagametogenesis. The process of megagametogenesis varies depending on which pattern of megasporogenesis occurred. Some species, such as Tridax trilobata, Ehretia laevis, and Alectra thomsoni, can undergo different patterns of megasporogenesis and therefore different patterns of megagametogenesis. If the monosporic pattern occurred, the single nucleus undergoes mitosis three times, producing an eight-nucleate cell. These eight nuclei are arranged into two groups of four. Each group sends a nucleus to the center of the cell; these become the polar nuclei. Depending on the species, these nuclei fuse before or upon fertilization of the central cell. The three nuclei at the end of the cell near the micropyle become the egg apparatus, with an egg cell in the center and two synergids. At the other end of the cell, a cell wall forms around the nuclei and forms the antipodals. Therefore, the resulting embryo sac is a seven-celled structure consisting of one central cell, one egg cell, two synergid cells, and three antipodal cells. The bisporic and tetrasporic patterns undergo varying processes and result in varying embryo sacs as well. In Lilium, which has a tetrasporic pattern, the central cell of the embryo sac is 4n. Therefore, upon fertilization the endosperm will be 5n rather than the typical 3n. 
See also Megasporangium Microspore Spore Double fertilization References Plant development Fertility Reproduction Plant sexuality
Megaspore
[ "Biology" ]
900
[ "Behavior", "Plant sexuality", "Reproduction", "Biological interactions", "Sexuality" ]
5,404,002
https://en.wikipedia.org/wiki/Tufting
Tufting is a type of textile manufacturing in which a thread is inserted on a primary base. It is an ancient technique for making warm garments, especially mittens. After the knitting is done, short U-shaped loops of extra yarn are introduced through the fabric from the outside so that their ends point inwards (e.g., towards the hand inside the mitten). Usually, the tuft yarns form a regular array of "dots" on the outside, sometimes in a contrasting color (e.g., white on red). On the inside, the tuft yarns may be tied for security, although they need not be. The ends of the tuft yarns are then frayed, so that they will subsequently felt, creating a dense, insulating layer within the knitted garment. Machine tufting was first developed by carpet manufacturers in Dalton, Georgia. A tufted piece is completed in three steps: tufting, gluing, then backing and finishing. When tufting, the work is completed from the backside of the finished piece. A loop-pile machine sends yarn through the primary backing and leaves the loops uncut. A cut-pile machine produces plush or shaggy carpet by cutting the yarn as it comes through to the front of the piece. Tufted rugs can be made with coloured yarn to create a design, or plain yarn can be tufted and then dyed in a separate process. A tufting gun is a tool commonly used to automate the tufting process, most often in rug making. The yarn is fed through a hollow needle that penetrates the stretched cloth backing to an adjustable depth. Tufting guns can usually create two types of pile: cut or loop. In a cut pile rug the yarn is snipped as each loop is formed, leaving "U" shapes in side profile, while in a loop pile rug the yarn is left uncut, forming a continuous "M" or "W". Tufting guns are useful tools for both mass production and home use due to their flexibility in scale and color variation. Materials Tufting requires the use of specialised primary backing fabric, which is often composed of woven polypropylene. Primary backing fabric is produced with a range of densities and weaving styles, allowing for use with different gauges of needles. Primary backing fabric must be stretched tightly to the frame so that it is stable enough to withstand the pressure of the tufting gun and taut enough for the yarn to be held in place. Tufting frames are generally constructed of wood, with carpet tacks or grippers around the edge to hold the primary backing fabric in place. Eye hooks are an important addition to a tufting frame; they are used as yarn feeders and work to keep the tension consistent. The frame must be sturdy and can be either freestanding or clamped to a table top. It is important to keep pressure and speed consistent when tufting so that the amount of yarn per square inch of the fabric is consistent. Any mistakes in the design can be corrected throughout the tufting process by simply pulling out yarn strands from the primary backing fabric and re-tufting the area. Designs can be drawn directly onto the primary backing fabric, either freehand or with the aid of a projector. After tufting is completed, the tufted piece requires a coat of latex glue on the back in order to keep the tufts anchored in their place. Latex glue is beneficial for tufted pieces as it provides flexibility and dimensional stability. The piece should remain stretched on the frame until the glue has finished drying to avoid loss of shape and the possibility of mildew. 
A secondary backing layer is then applied, providing further dimensional stability and protection for the finished piece as well as improving its appearance. A wide variety of materials can be used for the secondary backing fabric depending on the intended use of the piece. Felt, canvas, drill and other harder-wearing materials can be used for floor rugs; backing fabric for wall hangings need only be aesthetic, as it is only required to cover the glue layer and does not need to be hard-wearing. Wool is the traditional fibre used in pile tufting and is considered to be a high-quality material, especially for pieces designed to be used in high-traffic areas. Wool can be spun into yarn by two systems, either woollen or worsted. Worsted yarn is more favourable for tufting when the finished product will be used in high-traffic areas, as it produces a hard, flat surface that is tightly woven together. This is due to the tightly wound, fine yarn created in the worsted process. In comparison, woollen yarn used in tufting traps more air in the finished product, giving a bulkier finish. Different yarn fibres can be used depending on the final use of the tufted object and the desired effect. Cotton and acrylic yarns are also commonly used, and decorative yarns may be used for wall hangings or other decorative tufting projects. Yarn should be spun onto cones before tufting to ensure it unwinds consistently and without tangles. Either a single strand or multiple strands of yarn can be used, depending on the thickness of the yarn and the gauge of the needle. Tools There are two types of tufting guns, manual or electric. A tufting gun is a handheld machine where yarn is fed through a needle and subsequently punched in rapid succession through a backing fabric, either with or without scissors. Electric tufting guns can be cut-pile, loop-pile, or a combination of both and are able to produce multiple pile heights. A similar effect can be achieved with punch needle embroidery or rug hooking. The choice between a cut pile and a loop pile lies in the distinctive characteristics they offer. Cut pile tufting creates rugs with a loose, hairy texture, while loop pile tufting produces rugs with tight, connected loops, resulting in a trackless surface. Cut pile rugs are softer but require carpet glue for stability. In contrast, loop pile rugs do not have to be glued. After tufting, the pile can be sheared or cut using electric shearers or scissors to tidy and sculpt the yarn for the finished product. This can be done either before or after the latex glue is applied to the backing. This process also helps to remove any loose fibres which may have come to the surface during the tufting process. Other equipment Projector: For replicating specific designs accurately, a projector can project images onto the backing fabric, adding precision to the whole project. Yarn Cones or Feeders: These tools help in storing and managing yarn efficiently while tufting, mainly by ensuring a tangle-free feeding process. Yarn Swift: A yarn swift turns skeins of yarn into cones for easier feeding into the tufting process. Yarn Threading Needle: A yarn threading needle assists in threading and maneuvering yarn through tight spaces, ensuring precise tufting. Rug Trimmer: Derived from the hair trimmer, this tool is essential for precise trimming of the finished rug. 
Rug trimmers differ from hair trimmers in having higher cutting speeds, which makes the rug-finishing process much faster and smoother. Scissors: In addition to shears, regular scissors can be handy for various cutting needs during the tufting process. Electric Scissors: Electric scissors can expedite the cutting process, making it more efficient; they are also used to sculpt the surface. Glue Gun: A glue gun is mainly used for attaching excess backing material to the back of the rug. Frames: Necessary to stretch the backing cloth vertically, preventing it from rolling up or getting caught in the yarn. Carpet Grippers: Attached around the edge of the tufting frame, carpet grippers hold the primary backing fabric in place, ensuring stability during the tufting process. Oil: Sewing oil is vital for maintenance, ensuring smooth equipment operation over time. The diverse set of equipment mentioned above plays a crucial role in the rug-making process, with each tool serving a distinct purpose. Collectively, these tools contribute to the tufting and rug-making process. Cleaning and maintenance Tufting guns must be regularly cleaned and maintained to prevent damage. Regularly removing the excess yarn fluff that gathers around the needle and gears helps the mechanism move without excess friction. In order to avoid wear and ensure the mechanisms can function smoothly, lubricating oil should be regularly applied to the machine. Tufted rugs can be cleaned regularly with a vacuum to remove dirt; spills or stains should be spot cleaned immediately. Popularity Tufting has seen a rise in popularity since 2018, when Tim Eads started an online community for tufting and made electric tufting guns easily accessible. Tufting produces both practical and decorative pieces with many uses and effects. The short format of TikTok and Instagram reels lends itself well to the process of tufting, providing a platform for the textile artform to reach a wider audience. The increase in popularity online has also seen a rise in copyrighted images being recreated without permission. Environmental impact and effects Recycling tufted pieces can be difficult as they are typically made up of three layers, which can require additional energy to break down into their individual components. Processed waste from tufting can be turned into many things, including cushion stuffing, concrete reinforcement, and modifiers in asphalt mixtures. Tufted pieces, such as rugs or wall hangings, provide acoustic benefits, minimizing noise and absorbing airborne sounds. They also provide thermal comfort when walking on tufted rugs with bare feet, and larger pieces provide insulation which may reduce the cost of heating. Rugs or wall hangings made from wool fibres have been shown to improve air quality in indoor spaces. Wool acts as a filter through which contaminants such as sulphur dioxide and nitrogen oxides are absorbed. Wool is also a highly absorbent fibre and can help manage humidity changes indoors. Tufted carpets and rugs provide a safe surface to walk on, offering slip resistance and a more forgiving surface should objects be dropped or falls occur. Wool carpets are also flame-resistant and hide soil and dirt well. Tufted pieces made from nylon yarn may face colour degradation over time if exposed to excess sunlight. References Knitting Design Crafts Textile arts
Tufting
[ "Engineering" ]
2,141
[ "Design" ]
5,404,101
https://en.wikipedia.org/wiki/List%20of%20countries%20by%20steel%20production
In 2023, total world crude steel production was nearly 1.9 billion tonnes (1.9 Gt). The biggest steel-producing country is currently China, which accounted for 54% of world steel production in 2023. In 2020, despite the COVID-19 pandemic, China became the first country to produce over one billion tonnes of steel. In 2008, 2009, 2015 and 2016 output fell in the majority of steel-producing countries as a result of the global recession. In 2010 and 2017, it started to rise again. Crude steel production contracted in all regions in 2019 except in Asia and the Middle East. India is the second-largest producer of crude steel. Steel production This is a list of countries by steel production in 1967, 1980, 1990, 2000 and from 2007 to 2021, based on data provided by the World Steel Association. All countries with an annual crude steel production of at least 2 million metric tons are included. Net exports: exports − imports. Net imports: imports − exports. World steel production trend Development of worldwide steel production in millions of metric tons. See also Steel industry Global steel industry trends List of steel producers List of countries by iron ore production References External links World Steel Association American Iron and Steel Institute Steel industry by country Lists of countries by production
List of countries by steel production
[ "Chemistry" ]
254
[ "Steel industry by country", "Metallurgical industry by country" ]
5,404,610
https://en.wikipedia.org/wiki/Open%E2%80%93closed%20principle
In object-oriented programming, the open–closed principle (OCP) states "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification"; that is, such an entity can allow its behaviour to be extended without modifying its source code. The name open–closed principle has been used in two ways. Both ways use generalizations (for instance, inheritance or delegate functions) to resolve the apparent dilemma, but the goals, techniques, and results are different. The open–closed principle is one of the five SOLID principles of object-oriented design. Meyer's open–closed principle Bertrand Meyer is generally credited for having originated the term open–closed principle, which appeared in his 1988 book Object-Oriented Software Construction. A module will be said to be open if it is still available for extension. For example, it should be possible to add fields to the data structures it contains, or new elements to the set of functions it performs. A module will be said to be closed if [it] is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding). At the time Meyer was writing, adding fields or functions to a library inevitably required changes to any programs depending on that library. Meyer's proposed solution to this problem relied on the notion of object-oriented inheritance (specifically implementation inheritance): A class is closed, since it may be compiled, stored in a library, baselined, and used by client classes. But it is also open, since any new class may use it as parent, adding new features. When a descendant class is defined, there is no need to change the original or to disturb its clients. Polymorphic open–closed principle During the 1990s, the open–closed principle became popularly redefined to refer to the use of abstracted interfaces, where the implementations can be changed and multiple implementations could be created and polymorphically substituted for each other. In contrast to Meyer's usage, this definition advocates inheritance from abstract base classes. Interface specifications can be reused through inheritance but implementation need not be. The existing interface is closed to modifications and new implementations must, at a minimum, implement that interface. Robert C. Martin's 1996 article "The Open-Closed Principle" was one of the seminal writings to take this approach. In 2001, Craig Larman related the open–closed principle to the pattern by Alistair Cockburn called Protected Variations, and to the David Parnas discussion of information hiding.
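To make the polymorphic reading concrete, here is a minimal sketch in Python; the Shape, Rectangle, and Circle names are illustrative, not drawn from any cited source. Client code depends on a closed abstraction, and behaviour is extended by adding implementations rather than editing existing code.

```python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    """Closed for modification: clients program against this interface."""
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):
    def __init__(self, width: float, height: float):
        self.width, self.height = width, height
    def area(self) -> float:
        return self.width * self.height

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return math.pi * self.radius ** 2

def total_area(shapes: list[Shape]) -> float:
    # Open for extension: a new Shape subclass (say, Triangle) works here
    # with no change to this function's source code.
    return sum(shape.area() for shape in shapes)

print(total_area([Rectangle(2, 3), Circle(1)]))  # 6 + pi
```

Under Meyer's original reading, by contrast, extension would happen by inheriting from the concrete, already-baselined class and adding features in the descendant, leaving the parent and its clients undisturbed.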
See also SOLID – the "O" in "SOLID" represents the open–closed principle Robustness principle References External links The Principles of OOD The Open/Closed Principle: Concerns about Change in Software Design The Open-Closed Principle -- and What Hides Behind It Object-oriented programming Type theory Software design Programming principles Software development philosophies
Open–closed principle
[ "Mathematics", "Engineering" ]
591
[ "Mathematical structures", "Mathematical logic", "Mathematical objects", "Type theory", "Design", "Software design" ]
5,405,018
https://en.wikipedia.org/wiki/15%20and%20290%20theorems
In mathematics, the 15 theorem or Conway–Schneeberger Fifteen Theorem, proved by John H. Conway and W. A. Schneeberger in 1993, states that if a positive definite quadratic form with integer matrix represents all positive integers up to 15, then it represents all positive integers. The proof was complicated, and was never published. Manjul Bhargava found a much simpler proof which was published in 2000. Bhargava used the occasion of his receiving the 2005 SASTRA Ramanujan Prize to announce that he and Jonathan P. Hanke had cracked Conway's conjecture that a similar theorem holds for integral quadratic forms, with the constant 15 replaced by 290. The proof has since appeared in preprint form. Details Suppose A is a symmetric n × n matrix with real entries. For any vector x with integer components, define Q(x) = x^T A x. This function is called a quadratic form. We say Q is positive definite if Q(x) > 0 whenever x ≠ 0. If Q(x) is always an integer, we call the function an integral quadratic form. We get an integral quadratic form whenever the matrix entries are integers; then Q is said to have integer matrix. However, Q will still be an integral quadratic form if the off-diagonal entries are integers divided by 2, while the diagonal entries are integers. For example, x^2 + xy + y^2 is integral but does not have integral matrix. A positive integral quadratic form taking all positive integers as values is called universal. The 15 theorem says that a quadratic form with integer matrix is universal if it takes the numbers from 1 to 15 as values. A more precise version says that, if a positive definite quadratic form with integral matrix takes the values 1, 2, 3, 5, 6, 7, 10, 14, 15, then it takes all positive integers as values. Moreover, for each of these 9 numbers, there is such a quadratic form taking all other 8 positive integers except for this number as values. For example, the quadratic form x^2 + y^2 + z^2 + w^2 is universal, because every positive integer can be written as a sum of 4 squares, by Lagrange's four-square theorem. By the 15 theorem, to verify this, it is sufficient to check that every positive integer up to 15 is a sum of 4 squares. (This does not give an alternative proof of Lagrange's theorem, because Lagrange's theorem is used in the proof of the 15 theorem.) On the other hand, x^2 + 2y^2 + 5z^2 + 5w^2 is a positive definite quadratic form with integral matrix that takes as values all positive integers other than 15. The 290 theorem says a positive definite integral quadratic form is universal if it takes the numbers from 1 to 290 as values. A more precise version states that, if an integer-valued integral quadratic form represents all the numbers 1, 2, 3, 5, 6, 7, 10, 13, 14, 15, 17, 19, 21, 22, 23, 26, 29, 30, 31, 34, 35, 37, 42, 58, 93, 110, 145, 203, 290, then it represents all positive integers, and for each of these 29 numbers, there is such a quadratic form representing all other 28 positive integers with the exception of this one number. Bhargava has found analogous criteria for a quadratic form with integral matrix to represent all primes (the set {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 67, 73}) and for such a quadratic form to represent all positive odd integers (the set {1, 3, 5, 7, 11, 15, 33}). Expository accounts of these results have been written by Hahn and Moon (who provides proofs).
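The finite checks that the theorem licenses are easy to run by brute force. The sketch below (an illustration, not part of the cited proofs) enumerates small values of diagonal forms to confirm that the four-square form represents 1 through 15, while x^2 + 2y^2 + 5z^2 + 5w^2 represents 1 through 14 but not 15.

```python
from itertools import product

def represented_values(coeffs, limit):
    """Positive integers <= limit represented by sum(c_i * x_i**2)."""
    bound = int(limit ** 0.5) + 1
    values = set()
    for xs in product(range(bound + 1), repeat=len(coeffs)):
        v = sum(c * x * x for c, x in zip(coeffs, xs))
        if 0 < v <= limit:
            values.add(v)
    return values

# Four squares hit every integer from 1 to 15, so by the 15 theorem
# x^2 + y^2 + z^2 + w^2 is universal.
assert represented_values((1, 1, 1, 1), 15) == set(range(1, 16))

# x^2 + 2y^2 + 5z^2 + 5w^2 hits 1..14 but misses 15.
assert represented_values((1, 2, 5, 5), 15) == set(range(1, 15))
```

The same enumeration, run up to 290, is the computational core of checking candidate forms against the 290 theorem's critical set; note this diagonal-only sketch does not handle forms with cross terms such as xy.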
References Additive number theory Theorems in number theory Quadratic forms
15 and 290 theorems
[ "Mathematics" ]
782
[ "Mathematical theorems", "Quadratic forms", "Theorems in number theory", "Mathematical problems", "Number theory" ]
5,405,317
https://en.wikipedia.org/wiki/Westringia%20fruticosa
Westringia fruticosa, the coastal rosemary or coastal westringia, is a shrub that grows near the coast in eastern Australia. Description The flowers are white, hairy and have the upper petal divided into two lobes. They also have orange-to-purple spots on their lower half. The shrub is very hardy and grows on cliffs immediately beside the ocean. Cultivation The plant's tolerance of a variety of soils, its neatly whorled leaves and its all-year flowering make it very popular in cultivation. It (or its cultivars) is a recipient of the Royal Horticultural Society's Award of Garden Merit. Gallery References fruticosa Flora of New South Wales Lamiales of Australia Taxa named by Carl Ludwig Willdenow Plants described in 1797 Plants that can bloom all year round
Westringia fruticosa
[ "Biology" ]
169
[ "Plants that can bloom all year round", "Plants" ]
5,405,595
https://en.wikipedia.org/wiki/Heath-Brown%E2%80%93Moroz%20constant
The Heath-Brown–Moroz constant C, named for Roger Heath-Brown and Boris Moroz, is defined by the product over primes C = ∏_p (1 − 1/p)^7 (1 + (7p + 1)/p^2), where p runs over the primes. Application This constant is part of an asymptotic estimate for the distribution of rational points of bounded height on the cubic surface X_0^3 = X_1X_2X_3. Let H be a positive real number and N(H) the number of solutions to the equation X_0^3 = X_1X_2X_3 with all the X_i non-negative integers less than or equal to H and their greatest common divisor equal to 1. Then N(H) ~ C · H (log H)^6 / (4 · 6!) as H → ∞.
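The product converges quickly, so truncating it over primes up to a modest bound already gives a feel for the constant's size. A minimal sketch, assuming the product form stated above; sympy's primerange supplies the primes:

```python
from sympy import primerange

def heath_brown_moroz(prime_limit=10**6):
    """Truncated Euler product for C = prod_p (1 - 1/p)^7 (1 + (7p + 1)/p^2)."""
    c = 1.0
    for p in primerange(2, prime_limit):
        c *= (1.0 - 1.0 / p) ** 7 * (1.0 + (7.0 * p + 1.0) / p ** 2)
    return c

print(heath_brown_moroz())  # roughly 0.0013, sharpening as the limit grows
```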
References External links Wolfram Mathworld's article Mathematical constants Infinite products
Heath-Brown–Moroz constant
[ "Mathematics" ]
141
[ "Mathematical analysis", "Number theory stubs", "Mathematical objects", "Number stubs", "nan", "Infinite products", "Mathematical constants", "Numbers", "Number theory" ]
5,405,675
https://en.wikipedia.org/wiki/Bunder
A bunder or bonnier is an obsolete unit of area previously used in the Low Countries (Belgium and the Netherlands). References Bunder at sizes.com Units of area Dutch words and phrases Metricated units
Bunder
[ "Mathematics" ]
44
[ "Metricated units", "Quantity", "Units of area", "Units of measurement" ]
5,406,090
https://en.wikipedia.org/wiki/Sippenhaft
Sippenhaft or Sippenhaftung (, kin liability) is a German term for the idea that a family or clan shares the responsibility for a crime or act committed by one of its members, justifying collective punishment. As a legal principle, it was derived from Germanic law in the Middle Ages, usually in the form of fines and compensations. It was adopted by Nazi Germany to justify the punishment of kin (relatives, spouse) for the offence of a family member. Punishment often involved imprisonment and execution, and was applied to relatives of the conspirators of the failed 1944 bomb plot to assassinate Hitler. Origins Prior to the adoption of Roman law and Christianity, Sippenhaft was a common legal principle among Germanic peoples, including Anglo-Saxons and Scandinavians. Germanic laws distinguished between two forms of justice for severe crimes such as murder: blood revenge, or extrajudicial killing; and blood money, pecuniary restitution or fines in lieu of revenge, based on the weregild or "man price" determined by the victim's wealth and social status. The principle of Sippenhaft meant that the family or clan of an offender, as well as the offender, could be subject to revenge or could be liable to pay restitution. Similar principles were common to Celts, Teutons, and Slavs. Nazi Germany In Nazi Germany, the term was revived to justify the punishment of kin (relatives, spouse) for the offence of a family member. In that form of Sippenhaft, the relatives of persons accused of crimes against the state were held to share the responsibility for those crimes and were subject to arrest and sometimes execution. 1943–45: for desertion and treason Examples of Sippenhaft being used as a threat exist within the Wehrmacht from around 1943. Soldiers accused of having "blood impurities", or soldiers conscripted from outside of Germany, also began to have their families threatened and punished with Sippenhaft. An example is the case of Panzergrenadier Wenzeslaus Leiss, who was accused of desertion on the Eastern Front in December 1942. After the Düsseldorf Gestapo discovered supposed Polish links in the Leiss family, in February 1943 his wife, two-year-old daughter, two brothers, sister and brother-in-law were arrested and executed at Sachsenhausen concentration camp. By 1944, several general and individual directives were ordered within divisions and corps, threatening troops with consequences against their families. Families of 20 July plotters Many people who had committed no crimes were arrested and punished under Sippenhaft decrees introduced after the failed 20 July plot to assassinate Adolf Hitler in July 1944. After the failure of the plot, the SS chief Heinrich Himmler told a meeting of Gauleiters in Posen that he would "introduce absolute responsibility of kin ... a very old custom practiced among our forefathers". According to Himmler, this practice had existed among the ancient Teutons. "When they placed a family under the ban and declared it outlawed or when there was a blood feud in the family, they were utterly consistent. ... This man has committed treason; his blood is bad; there is traitor's blood in him; that must be wiped out. And in the blood feud the entire clan was wiped out down to the last member. And so, too, will Count Stauffenberg's family be wiped out down to the last member." Accordingly, the members of the family of von Stauffenberg (the officer who had planted the bomb that failed to kill Hitler) were all under suspicion. 
His wife, Nina Schenk Gräfin von Stauffenberg, was sent to Ravensbrück concentration camp (she survived and lived until 2006). His brother Alexander, who knew nothing of the plot and was serving with the Wehrmacht in Greece, was also sent to a concentration camp. Similar punishments were meted out to the relatives of Carl Goerdeler, Henning von Tresckow, Adam von Trott zu Solz and many other conspirators. Erwin Rommel opted to commit suicide rather than be tried for his suspected role in the plot, in part because he knew that his wife and children would suffer well before his own all-but-certain conviction and execution. 1944–45: Soviet POW "League of German Officers" After the 20 July plot, numerous families connected to the Soviet-sponsored League of German Officers, made up of German prisoners of war, such as those of Walther von Seydlitz-Kurzbach and Friedrich Paulus, were also arrested. Unlike a number of the 20 July conspirators' families, those arrested for connection to the League were not released after a few months but remained in prison until the end of the war. Younger children of arrested plotters were not jailed but sent to orphanages under new names. Stauffenberg's children were renamed "Meister". 1944–45: for "cowardice" After 20 July 1944 these threats were extended to include all German troops and, in particular, German commanders. A decree of February 1945 threatened death to the relatives of military commanders who showed what Hitler regarded as cowardice or defeatism in the face of the enemy. After the surrender of Königsberg to the Soviets in April 1945, the family of the German commander General Otto Lasch was arrested. These arrests were publicized in the Völkischer Beobachter. Present legal status The principle of Sippenhaftung is considered incompatible with the German Basic Law, and therefore has no legal definition. See also Ancestral sin Bloodline theory German collective guilt Family members of traitors to the Motherland – kin punishment practiced in Soviet Russia Gjakmarrja Glossary of Nazi Germany Guilt by association Kin punishment Lidice massacre Nine familial exterminations (zú zhū (族誅), literally "family execution", and miè zú (灭族/滅族)) – kin punishment in ancient China References Further reading Dagmar Albrecht: Mit meinem Schicksal kann ich nicht hadern. Sippenhaft in der Familie Albrecht von Hagen. Dietz, Berlin 2001. Harald Maihold: Die Sippenhaft: Begründete Zweifel an einem Grundsatz des „deutschen Rechts". In: Mediaevistik. Band 18, 2005, S. 99–126 (PDF; 152 KB) Anglo-Saxon law Collective punishment Determinism Early Germanic law Family in early Germanic culture Genetic fallacies German words and phrases Kinship and descent Legal history of Germany Political and cultural purges Victims of familial execution
Sippenhaft
[ "Biology" ]
1,367
[ "Behavior", "Human behavior", "Kinship and descent" ]
5,406,474
https://en.wikipedia.org/wiki/Consensus%20%28computer%20science%29
A fundamental problem in distributed computing and multi-agent systems is to achieve overall system reliability in the presence of a number of faulty processes. This often requires coordinating processes to reach consensus, or agree on some data value that is needed during computation. Example applications of consensus include agreeing on what transactions to commit to a database in which order, state machine replication, and atomic broadcasts. Real-world applications often requiring consensus include cloud computing, clock synchronization, PageRank, opinion formation, smart power grids, state estimation, control of UAVs (and multiple robots/agents in general), load balancing, blockchain, and others. Problem description The consensus problem requires agreement among a number of processes (or agents) on a single data value. Some of the processes (agents) may fail or be unreliable in other ways, so consensus protocols must be fault-tolerant or resilient. The processes must put forth their candidate values, communicate with one another, and agree on a single consensus value. The consensus problem is a fundamental problem in controlling multi-agent systems. One approach to generating consensus is for all processes (agents) to agree on a majority value. In this context, a majority requires at least one more than half of the available votes (where each process is given a vote). However, one or more faulty processes may skew the resultant outcome such that consensus may not be reached or may be reached incorrectly. Protocols that solve consensus problems are designed to deal with a limited number of faulty processes. These protocols must satisfy several requirements to be useful. For instance, a trivial protocol could have all processes output binary value 1. This is not useful; thus, the requirement is modified such that the output must depend on the input. That is, the output value of a consensus protocol must be the input value of some process. Another requirement is that a process may decide upon an output value only once, and this decision is irrevocable. A process is correct in an execution if it does not experience a failure. A consensus protocol tolerating halting failures must satisfy the following properties. Termination: Eventually, every correct process decides some value. Integrity: If all the correct processes proposed the same value v, then any correct process must decide v. Agreement: Every correct process must agree on the same value. Variations on the definition of integrity may be appropriate, according to the application. For example, a weaker type of integrity would be for the decision value to equal a value that some correct process proposed – not necessarily all of them. There is also a condition known as validity in the literature which refers to the property that a message sent by a process must be delivered. A protocol that can correctly guarantee consensus amongst n processes of which at most t fail is said to be t-resilient. In evaluating the performance of consensus protocols two factors of interest are running time and message complexity. Running time is given in Big O notation in the number of rounds of message exchange as a function of some input parameters (typically the number of processes and/or the size of the input domain). Message complexity refers to the amount of message traffic that is generated by the protocol. Other factors may include memory usage and the size of messages. 
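To make these properties concrete, the sketch below simulates a classic synchronous algorithm for crash (halting) failures, an illustration rather than any particular named protocol: each process floods every value it has seen for f + 1 rounds and then decides the minimum. With at most f crashes, some round completes without a crash, after which all survivors share the same set of values and so decide alike.

```python
def flood_consensus(inputs, crash_round, f):
    """Simulate synchronous consensus under at most f crash failures.

    inputs:      pid -> proposed value
    crash_round: pid -> round in which that process crashes; a crashing
                 process delivers that round's messages to only some peers
    """
    alive = set(inputs)
    known = {p: {inputs[p]} for p in inputs}     # values each pid has seen
    for rnd in range(1, f + 2):                  # f + 1 rounds suffice
        inbox = {p: set() for p in alive}
        for sender in sorted(alive):
            receivers = sorted(alive)
            if crash_round.get(sender) == rnd:   # crash mid-broadcast:
                receivers = receivers[: len(receivers) // 2]
                alive.discard(sender)            # message reaches only some
            for r in receivers:
                inbox[r] |= known[sender]
        for p in alive:
            known[p] |= inbox[p]
    return {p: min(known[p]) for p in alive}     # decide the minimum seen

decisions = flood_consensus({0: 3, 1: 1, 2: 4, 3: 2}, crash_round={1: 1}, f=1)
assert len(set(decisions.values())) == 1         # agreement among survivors
```

Termination is immediate (a fixed number of rounds), agreement follows from the crash-free round, and the decision is always some process's proposed value, a validity property of the kind discussed above.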
Models of computation Varying models of computation may define a "consensus problem". Some models may deal with fully connected graphs, while others may deal with rings and trees. In some models message authentication is allowed, whereas in others processes are completely anonymous. Shared memory models in which processes communicate by accessing objects in shared memory are also an important area of research. Communication channels with direct or transferable authentication In most models of communication protocol participants communicate through authenticated channels. This means that messages are not anonymous, and receivers know the source of every message they receive. Some models assume a stronger, transferable form of authentication, where each message is signed by the sender, so that a receiver knows not just the immediate source of every message, but the participant that initially created the message. This stronger type of authentication is achieved by digital signatures, and when this stronger form of authentication is available, protocols can tolerate a larger number of faults. The two different authentication models are often called oral communication and written communication models. In an oral communication model, the immediate source of information is known, whereas in stronger, written communication models, at every step along the way the receiver learns not just the immediate source of the message, but the communication history of the message. Inputs and outputs of consensus In the most traditional single-value consensus protocols such as Paxos, cooperating nodes agree on a single value such as an integer, which may be of variable size so as to encode useful metadata such as a transaction committed to a database. A special case of the single-value consensus problem, called binary consensus, restricts the input, and hence the output domain, to a single binary digit {0,1}. While not highly useful by themselves, binary consensus protocols are often useful as building blocks in more general consensus protocols, especially for asynchronous consensus. In multi-valued consensus protocols such as Multi-Paxos and Raft, the goal is to agree on not just a single value but a series of values over time, forming a progressively-growing history. While multi-valued consensus may be achieved naively by running multiple iterations of a single-valued consensus protocol in succession, many optimizations and other considerations such as reconfiguration support can make multi-valued consensus protocols more efficient in practice. Crash and Byzantine failures There are two types of failures a process may undergo: a crash failure or a Byzantine failure. A crash failure occurs when a process abruptly stops and does not resume. Byzantine failures are failures in which absolutely no conditions are imposed. For example, they may occur as a result of the malicious actions of an adversary. A process that experiences a Byzantine failure may send contradictory or conflicting data to other processes, or it may sleep and then resume activity after a lengthy delay. Of the two types of failures, Byzantine failures are far more disruptive. Thus, a consensus protocol tolerating Byzantine failures must be resilient to every possible error that can occur. A stronger version of consensus tolerating Byzantine failures is given by strengthening the Integrity constraint: Integrity: If a correct process decides v, then v must have been proposed by some correct process. 
Asynchronous and synchronous systems The consensus problem may be considered in the case of asynchronous or synchronous systems. While real world communications are often inherently asynchronous, it is more practical and often easier to model synchronous systems, given that asynchronous systems naturally involve more issues than synchronous ones. In synchronous systems, it is assumed that all communications proceed in rounds. In one round, a process may send all the messages it requires, while receiving all messages from other processes. In this manner, no message from one round may influence any messages sent within the same round. The FLP impossibility result for asynchronous deterministic consensus In a fully asynchronous message-passing distributed system, in which at least one process may have a crash failure, it has been proven in the famous 1985 FLP impossibility result by Fischer, Lynch and Paterson that a deterministic algorithm for achieving consensus is impossible. This impossibility result derives from worst-case scheduling scenarios, which are unlikely to occur in practice except in adversarial situations such as an intelligent denial-of-service attacker in the network. In most normal situations, process scheduling has a degree of natural randomness. In an asynchronous model, some forms of failures can be handled by a synchronous consensus protocol. For instance, the loss of a communication link may be modeled as a process which has suffered a Byzantine failure. Randomized consensus algorithms can circumvent the FLP impossibility result by achieving both safety and liveness with overwhelming probability, even under worst-case scheduling scenarios such as an intelligent denial-of-service attacker in the network. Permissioned versus permissionless consensus Consensus algorithms traditionally assume that the set of participating nodes is fixed and given at the outset: that is, that some prior (manual or automatic) configuration process has permissioned a particular known group of participants who can authenticate each other as members of the group. In the absence of such a well-defined, closed group with authenticated members, a Sybil attack against an open consensus group can defeat even a Byzantine consensus algorithm, simply by creating enough virtual participants to overwhelm the fault tolerance threshold. A permissionless consensus protocol, in contrast, allows anyone in the network to join dynamically and participate without prior permission, but instead imposes a different form of artificial cost or barrier to entry to mitigate the Sybil attack threat. Bitcoin introduced the first permissionless consensus protocol using proof of work and a difficulty adjustment function, in which participants compete to solve cryptographic hash puzzles, and probabilistically earn the right to commit blocks and earn associated rewards in proportion to their invested computational effort. Motivated in part by the high energy cost of this approach, subsequent permissionless consensus protocols have proposed or adopted other alternative participation rules for Sybil attack protection, such as proof of stake, proof of space, and proof of authority. Equivalency of agreement problems Three agreement problems of interest are as follows. Terminating Reliable Broadcast A collection of n processes, numbered from 0 to n − 1, communicate by sending messages to one another. 
Process 0 must transmit a value v to all processes such that: if process 0 is correct, then every correct process receives v; and for any two correct processes, each process receives the same value. It is also known as The General's Problem. Consensus Formal requirements for a consensus protocol may include: Agreement: All correct processes must agree on the same value. Weak validity: For each correct process, its output must be the input of some correct process. Strong validity: If all correct processes receive the same input value, then they must all output that value. Termination: All processes must eventually decide on an output value. Weak Interactive Consistency For n processes in a partially synchronous system (the system alternates between good and bad periods of synchrony), each process chooses a private value. The processes communicate with each other by rounds to determine a public value and generate a consensus vector with the following requirements: if a correct process sends v, then all correct processes receive either v or nothing (integrity property); all messages sent in a round by a correct process are received in the same round by all correct processes (consistency property). It can be shown that variations of these problems are equivalent in that the solution for a problem in one type of model may be the solution for another problem in another type of model. For example, a solution to the Weak Byzantine General problem in a synchronous authenticated message passing model leads to a solution for Weak Interactive Consistency. An interactive consistency algorithm can solve the consensus problem by having each process choose the majority value in its consensus vector as its consensus value. Solvability results for some agreement problems There is a t-resilient anonymous synchronous protocol which solves the Byzantine Generals problem if t < n/3, and the Weak Byzantine Generals case if t < n/2, where t is the number of failures and n is the number of processes. For systems with n processors, f of which are Byzantine, it has been shown that there exists no algorithm that solves the consensus problem for n ≤ 3f in the oral-messages model. The proof is constructed by first showing the impossibility for the three-node case and using this result to argue about partitions of processors. In the written-messages model there are protocols that can tolerate a larger number of faulty processes. In a fully asynchronous system there is no consensus solution that can tolerate one or more crash failures even when only requiring the non-triviality property. This result is sometimes called the FLP impossibility proof named after the authors Michael J. Fischer, Nancy Lynch, and Mike Paterson who were awarded a Dijkstra Prize for this significant work. The FLP result has been mechanically verified to hold even under fairness assumptions. However, FLP does not state that consensus can never be reached: merely that under the model's assumptions, no algorithm can always reach consensus in bounded time. In practice it is highly unlikely to occur. Some consensus protocols The Paxos consensus algorithm by Leslie Lamport, and variants of it such as Raft, are used pervasively in widely deployed distributed and cloud computing systems. These algorithms are typically synchronous, dependent on an elected leader to make progress, and tolerate only crashes and not Byzantine failures. An example of a polynomial time binary consensus protocol that tolerates Byzantine failures is the Phase King algorithm by Garay and Berman. 
The algorithm solves consensus in a synchronous message passing model with n processes and up to f failures, provided n > 4f. In the phase king algorithm, there are f + 1 phases, with 2 rounds per phase. Each process keeps track of its preferred output (initially equal to the process's own input value). In the first round of each phase each process broadcasts its own preferred value to all other processes. It then receives the values from all processes and determines which value is the majority value and its count. In the second round of the phase, the process whose id matches the current phase number is designated the king of the phase. The king broadcasts the majority value it observed in the first round and serves as a tie breaker. Each process then updates its preferred value as follows. If the count of the majority value the process observed in the first round is greater than n/2 + f, the process changes its preference to that majority value; otherwise it uses the phase king's value. At the end of f + 1 phases the processes output their preferred values.
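A compact simulation sketch of the algorithm as just described (message passing is modeled with in-memory dictionaries, and the Byzantine behavior shown, sending an arbitrary bit to each receiver, is only one illustrative adversary):

```python
import random
from collections import Counter

def phase_king(n, f, inputs, byzantine=frozenset()):
    """Simulate Phase King binary consensus: n processes, at most f
    Byzantine faults, n > 4f, with f + 1 phases of two rounds each."""
    assert n > 4 * f
    pref = dict(inputs)                      # pid -> current preference

    def send(sender, value):
        # A Byzantine sender may report a different value to each receiver.
        return random.choice([0, 1]) if sender in byzantine else value

    for phase in range(f + 1):
        # Round 1: every process broadcasts its preference; each receiver
        # records the majority value it saw and that value's count.
        majority, count = {}, {}
        for p in range(n):
            tally = Counter(send(q, pref[q]) for q in range(n))
            majority[p], count[p] = tally.most_common(1)[0]
        # Round 2: the phase king (pid == phase number) breaks ties.
        for p in range(n):
            king_value = send(phase, majority[phase])
            if count[p] > n // 2 + f:        # strong majority: keep it
                pref[p] = majority[p]
            else:                            # otherwise defer to the king
                pref[p] = king_value
    return {p: pref[p] for p in range(n) if p not in byzantine}

decisions = phase_king(9, 2, {i: i % 2 for i in range(9)}, byzantine={7, 8})
assert len(set(decisions.values())) == 1     # honest processes agree
```

Because at most f of the f + 1 kings can be Byzantine, at least one phase has an honest king; after that phase all honest processes hold the same preference, and the n > 4f threshold guarantees they retain it through any later phases.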
As an example, bitcoin mining (as of 2018) was estimated to consume non-renewable energy sources at a rate similar to that of the entire nations of the Czech Republic or Jordan, while the total energy consumption of Ethereum, the largest proof-of-stake network, is just under that of 205 average US households. Some cryptocurrencies, such as Ripple, use a system of validating nodes to validate the ledger. This system used by Ripple, called the Ripple Protocol Consensus Algorithm (RPCA), works in rounds: Step 1: every server compiles a list of valid candidate transactions; Step 2: each server amalgamates all candidates coming from its Unique Nodes List (UNL) and votes on their veracity; Step 3: transactions passing the minimum threshold are passed to the next round; Step 4: the final round requires 80% agreement. Other participation rules used in permissionless consensus protocols to impose barriers to entry and resist sybil attacks include proof of authority, proof of space, proof of burn, or proof of elapsed time. Contrasting with the above permissionless participation rules, all of which reward participants in proportion to the amount of investment in some action or resource, proof of personhood protocols aim to give each real human participant exactly one unit of voting power in permissionless consensus, regardless of economic investment. Proposed approaches to achieving one-per-person distribution of consensus power for proof of personhood include physical pseudonym parties, social networks, pseudonymized government-issued identities, and biometrics. Consensus number To solve the consensus problem in a shared-memory system, concurrent objects must be introduced. A concurrent object, or shared object, is a data structure which helps concurrent processes communicate to reach an agreement. Traditional implementations using critical sections face the risk of crashing if some process dies inside the critical section or sleeps for an intolerably long time. Researchers defined wait-freedom as the guarantee that the algorithm completes in a finite number of steps. The consensus number of a concurrent object is defined to be the maximum number of processes in the system which can reach consensus by the given object in a wait-free implementation. Objects with a consensus number of n can implement any object with a consensus number of n or lower, but cannot implement any objects with a higher consensus number. The consensus numbers form what is called Herlihy's hierarchy of synchronization objects. According to the hierarchy, read/write registers cannot solve consensus even in a 2-process system. Data structures like stacks and queues can only solve consensus between two processes. However, some concurrent objects are universal (their consensus number is infinite), which means they can solve consensus among any number of processes and they can simulate any other objects through an operation sequence. See also Uniform consensus Quantum Byzantine agreement Byzantine fault References Further reading Bashir, Imran. "Blockchain Consensus." Blockchain Consensus - An Introduction to Classical, Blockchain, and Quantum Consensus Protocols. Apress, Berkeley, CA, 2022. Distributed computing problems Fault-tolerant computer systems
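The phase king update rule described in this article can be condensed into a short simulation. The following C sketch is illustrative only and is not the algorithm as published by Garay and Berman: the constants N and F, the binary value domain, and the adversary model (Byzantine processes that report a fixed value to everyone) are assumptions chosen for brevity; a real Byzantine process could send different values to different receivers.

#include <stdio.h>

#define N 9   /* number of processes; phase king needs N > 4F */
#define F 2   /* maximum number of Byzantine faults           */

int byzantine[N];   /* 1 if process i is Byzantine             */
int pref[N];        /* current preference of process i         */

/* Value process j reports: honest processes report their current
   preference; Byzantine processes here always report 1 (a simple,
   assumed adversary; real ones may equivocate per receiver). */
static int reported(int j) { return byzantine[j] ? 1 : pref[j]; }

int main(void) {
    /* Example run: processes 0..F-1 are Byzantine, honest inputs split. */
    for (int i = 0; i < N; i++) { byzantine[i] = (i < F); pref[i] = i % 2; }

    for (int phase = 0; phase < F + 1; phase++) {
        int majority[N], count[N];

        /* Round 1: everyone broadcasts; each process tallies. */
        for (int i = 0; i < N; i++) {
            int ones = 0;
            for (int j = 0; j < N; j++) ones += reported(j);
            majority[i] = (2 * ones > N);
            count[i] = majority[i] ? ones : N - ones;
        }

        /* Round 2: the process whose id equals the phase number is king
           and broadcasts its observed majority as a tie breaker. */
        int king_value = byzantine[phase] ? 1 : majority[phase];
        for (int i = 0; i < N; i++) {
            if (byzantine[i]) continue;
            if (count[i] > N / 2 + F)
                pref[i] = majority[i];   /* strong majority: keep it  */
            else
                pref[i] = king_value;    /* otherwise follow the king */
        }
    }

    for (int i = F; i < N; i++)          /* honest processes agree    */
        printf("process %d decides %d\n", i, pref[i]);
    return 0;
}

Because every honest process in this sketch receives an identical multiset of values, the processes trivially stay in agreement; in a genuine implementation each receiver tallies its own incoming messages, and the n > 4f bound is what makes the count > n/2 + f test safe against senders that equivocate.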
Consensus (computer science)
[ "Mathematics", "Technology", "Engineering" ]
3,915
[ "Distributed computing problems", "Reliability engineering", "Computational problems", "Computer systems", "Fault-tolerant computer systems", "Mathematical problems" ]
5,406,477
https://en.wikipedia.org/wiki/Interkinesis
Interkinesis or interphase II is a period of rest that cells of some species enter during meiosis between meiosis I and meiosis II. No DNA replication occurs during interkinesis; however, replication does occur during the interphase I stage of meiosis (see meiosis I). During interkinesis, the spindle of the first meiotic division disassembles and the microtubules reassemble into two new spindles for the second meiotic division. Interkinesis follows telophase I; however, many plants skip telophase I and interkinesis, going immediately into prophase II. Each chromosome still consists of two chromatids. During this stage, the number of other organelles may also increase. References Cellular processes
Interkinesis
[ "Biology" ]
161
[ "Cellular processes" ]
5,407,025
https://en.wikipedia.org/wiki/Sum%20of%20angles%20of%20a%20triangle
In a Euclidean space, the sum of angles of a triangle equals a straight angle (180 degrees, π radians, two right angles, or a half-turn). A triangle has three angles, one at each vertex, bounded by a pair of adjacent sides. The sum can be computed directly using the definition of angle based on the dot product and trigonometric identities, or more quickly by reducing to the two-dimensional case and using Euler's identity. It was unknown for a long time whether other geometries exist, for which this sum is different. The influence of this problem on mathematics was particularly strong during the 19th century. Ultimately, the answer was proven to be positive: in other spaces (geometries) this sum can be greater or lesser, but it then must depend on the triangle. Its difference from 180° is a case of angular defect and serves as an important distinction for geometric systems. Cases Euclidean geometry In Euclidean geometry, the triangle postulate states that the sum of the angles of a triangle is two right angles. This postulate is equivalent to the parallel postulate. In the presence of the other axioms of Euclidean geometry, the following statements are equivalent: Triangle postulate: The sum of the angles of a triangle is two right angles. Playfair's axiom: Given a straight line and a point not on the line, exactly one straight line may be drawn through the point parallel to the given line. Proclus' axiom: If a line intersects one of two parallel lines, it must intersect the other also. Equidistance postulate: Parallel lines are everywhere equidistant (i.e. the distance from each point on one line to the other line is always the same). Triangle area property: The area of a triangle can be as large as we please. Three points property: Three points either lie on a line or lie on a circle. Pythagoras' theorem: In a right-angled triangle, the square of the hypotenuse equals the sum of the squares of the other two sides. For example, a triangle ABC has three vertices and therefore three interior angles: angle A, angle B, and angle C. By the triangle postulate these always combine to a straight angle, so ∠A + ∠B + ∠C = 180°. Spherical geometry Spherical geometry does not satisfy several of Euclid's axioms, including the parallel postulate. In addition, the sum of angles is not 180° anymore. For a spherical triangle, the sum of the angles is greater than 180° and can be up to 540°. The amount by which the sum of the angles exceeds 180° is called the spherical excess, denoted as E or Δ. The spherical excess and the area A of the triangle determine each other via the relation (called Girard's theorem): E = A/R², where R is the radius of the sphere, equal to 1/√K where K is the constant curvature. The spherical excess can also be calculated from the three side lengths, the lengths of two sides and their angle, or the length of one side and the two adjacent angles (see spherical trigonometry). In the limit where the three side lengths tend to 0, the spherical excess also tends to 0: the spherical geometry locally resembles the Euclidean one. More generally, the Euclidean law is recovered as a limit when the area tends to 0 (which does not imply that the side lengths do so). A spherical triangle is determined up to isometry by E, one side length and one adjacent angle.
More precisely, according to Lexell's theorem, given a spherical segment AB as a fixed side and a number E₀, the set of points C such that the triangle ABC has spherical excess E₀ is a circle through the points antipodal to A and B. Hence, the level sets of the excess E form a foliation of the sphere with two singularities (the antipodes of the endpoints), and the gradient vector of E is orthogonal to this foliation. Hyperbolic geometry Hyperbolic geometry breaks Playfair's axiom, Proclus' axiom (the parallelism, defined as non-intersection, is intransitive in a hyperbolic plane), the equidistance postulate (the points on one side of, and equidistant from, a given line do not form a line), and Pythagoras' theorem. A circle cannot have arbitrarily small curvature, so the three points property also fails. The sum of angles is not 180° anymore, either. Contrary to the spherical case, the sum of the angles of a hyperbolic triangle is less than 180°, and can be arbitrarily close to 0°. Thus one has an angular defect δ = 180° − (∠A + ∠B + ∠C). As in the spherical case, the angular defect and the area determine each other: one has A = δR² (with δ in radians), where R = 1/√(−K) and K is the constant curvature. This relation was first proven by Johann Heinrich Lambert. One sees that all triangles have area bounded by πR². As in the spherical case, δ can be calculated using the three side lengths, the lengths of two sides and their angle, or the length of one side and the two adjacent angles (see hyperbolic trigonometry). Once again, the Euclidean law is recovered as a limit when the side lengths (or, more generally, the area) tend to 0. Letting the lengths all tend to infinity, however, causes δ to tend to 180°, i.e. the three angles tend to 0°. One can regard this limit as the case of ideal triangles, joining three points at infinity by three bi-infinite geodesics. Their area is the limit value πR². Lexell's theorem also has a hyperbolic counterpart: instead of circles, the level sets become pairs of curves called hypercycles, and the foliation is non-singular. Exterior angles Angles between adjacent sides of a triangle are referred to as interior angles in Euclidean and other geometries. Exterior angles can be also defined, and the Euclidean triangle postulate can be formulated as the exterior angle theorem. One can also consider the sum of all three exterior angles, which equals 360° in the Euclidean case (as for any convex polygon), is less than 360° in the spherical case, and is greater than 360° in the hyperbolic case. In differential geometry In the differential geometry of surfaces, the question of a triangle's angular defect is understood as a special case of the Gauss-Bonnet theorem where the curvature of a closed curve is not a function, but a measure with support in exactly three points – the vertices of a triangle. See also Euclid's Elements Foundations of geometry Hilbert's axioms Saccheri quadrilateral (considered earlier than Saccheri by Omar Khayyám) Lambert quadrilateral References Geometry Triangle geometry
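For reference, the two area laws reconstructed above can be written side by side. This is a plain LaTeX transcription of Girard's theorem and of Lambert's hyperbolic analogue, with angles in radians, E the spherical excess, δ the hyperbolic defect, A the area, and R the radius associated with the constant curvature K:

% Spherical triangle, constant curvature K = 1/R^2 > 0 (Girard's theorem):
\[
  E = \alpha + \beta + \gamma - \pi , \qquad A = E\,R^{2} .
\]
% Hyperbolic triangle, constant curvature K = -1/R^2 < 0 (Lambert):
\[
  \delta = \pi - (\alpha + \beta + \gamma) , \qquad A = \delta\,R^{2} \le \pi R^{2} .
\]

Both formulas recover the Euclidean law in the limit, since E and δ tend to 0 as the area does.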
Sum of angles of a triangle
[ "Mathematics" ]
1,399
[ "Geometry" ]
5,407,093
https://en.wikipedia.org/wiki/State%20machine%20replication
In computer science, state machine replication (SMR) or the state machine approach is a general method for implementing a fault-tolerant service by replicating servers and coordinating client interactions with server replicas. The approach also provides a framework for understanding and designing replication management protocols. Problem definition Distributed service In terms of clients and services, each service comprises one or more servers and exports operations that clients invoke by making requests. Although using a single, centralized server is the simplest way to implement a service, the resulting service can only be as fault tolerant as the processor executing that server. If this level of fault tolerance is unacceptable, then multiple servers that fail independently can be used. Usually, replicas of a single server are executed on separate processors of a distributed system, and protocols are used to coordinate client interactions with these replicas. State machine For the subsequent discussion a State Machine will be defined as the following tuple of values (see also Mealy machine and Moore machine): A set of States A set of Inputs A set of Outputs A transition function (Input × State → State) An output function (Input × State → Output) A distinguished State called Start. A State Machine begins at the State labeled Start. Each Input received is passed through the transition and output functions to produce a new State and an Output. The State is held stable until a new Input is received, while the Output is communicated to the appropriate receiver. This discussion requires a State Machine to be deterministic: multiple copies of the same State Machine that begin in the Start state and receive the same Inputs in the same order will arrive at the same State, having generated the same Outputs. Typically, systems based on State Machine Replication voluntarily restrict their implementations to use finite-state machines to simplify error recovery. Fault Tolerance Determinism is an ideal characteristic for providing fault-tolerance. Intuitively, if multiple copies of a system exist, a fault in one would be noticeable as a difference in the State or Output from the others. The minimum number of copies needed for fault-tolerance is three: one which has a fault, and two others to which we compare State and Output. Two copies are not enough as there is no way to tell which copy is the faulty one. A three-copy system can support at most one failure (after which it must repair or replace the faulty copy). If more than one of the copies were to fail, all three States and Outputs might differ, and there would be no way to choose which is the correct one. In general, a system which supports F failures must have 2F+1 copies (also called replicas). The extra copies are used as evidence to decide which of the copies are correct and which are faulty. Special cases can improve these bounds. All of this deduction presupposes that replicas are experiencing only random independent faults such as memory errors or hard-drive crashes. Failures caused by replicas which attempt to lie, deceive, or collude can also be handled by the State Machine Approach, with isolated changes. Failed replicas are not required to stop; they may continue operating, including generating spurious or incorrect Outputs. Special Case: Fail-Stop Theoretically, if a failed replica is guaranteed to stop without generating outputs, only F+1 replicas are required, and clients may accept the first output generated by the system.
No existing systems achieve this limit, but it is often used when analyzing systems built on top of a fault-tolerant layer (since the fault-tolerant layer provides fail-stop semantics to all layers above it). Special Case: Byzantine Failure Faults where a replica sends different values in different directions (for instance, the correct Output to some of its fellow replicas and incorrect Outputs to others) are called Byzantine Failures. Byzantine failures may be random, spurious faults, or malicious, intelligent attacks. 2F+1 replicas with non-cryptographic hashes suffice to survive all non-malicious Byzantine failures (with high probability). Malicious attacks require cryptographic primitives to achieve 2F+1 (using message signatures), or non-cryptographic techniques can be applied but the number of replicas must be increased to 3F+1. The State Machine Approach The preceding intuitive discussion implies a simple technique for implementing a fault-tolerant service in terms of a State Machine: Place copies of the State Machine on multiple, independent servers. Receive client requests, interpreted as Inputs to the State Machine. Choose an ordering for the Inputs. Execute Inputs in the chosen order on each server. Respond to clients with the Output from the State Machine. Monitor replicas for differences in State or Output. The remainder of this article develops the details of this technique. Steps 1 and 2 are outside the scope of this article. Step 3 is the critical operation; see Ordering Inputs. Step 4 is covered by the State machine definition. For step 5, see Sending Outputs; for step 6, see Auditing and Failure Detection. The appendix contains discussion on typical extensions used in real-world systems such as Logging, Checkpoints, Reconfiguration, and State Transfer. Ordering Inputs The critical step in building a distributed system of State Machines is choosing an order for the Inputs to be processed. Since all non-faulty replicas will arrive at the same State and Output if given the same Inputs, it is imperative that the Inputs are submitted in an equivalent order at each replica. Many solutions have been proposed in the literature. A Visible Channel is a communication path between two entities actively participating in the system (such as clients and servers). Example: client to server, server to server A Hidden Channel is a communication path which is not revealed to the system. Example: client to client channels are usually hidden; such as users communicating over a telephone, or a process writing files to disk which are read by another process. When all communication paths are visible channels and no hidden channels exist, a partial global order (Causal Order) may be inferred from the pattern of communications. Causal Order may be derived independently by each server. Inputs to the State Machine may be executed in Causal Order, guaranteeing consistent State and Output for all non-faulty replicas. In open systems, hidden channels are common and a weaker form of ordering must be used. An order of Inputs may be defined using a voting protocol whose results depend only on the visible channels. The problem of voting for a single value by a group of independent entities is called Consensus. By extension, a series of values may be chosen by a series of consensus instances. This problem becomes difficult when the participants or their communication medium may experience failures. Inputs may be ordered by their position in the series of consensus instances (Consensus Order).
Consensus Order may be derived independently by each server. Inputs to the State Machine may be executed in Consensus Order, guaranteeing consistent State and Output for all non-faulty replicas. Optimizing Causal & Consensus Ordering In some cases additional information is available (such as real-time clocks). In these cases, it is possible to achieve more efficient causal or consensus ordering for the Inputs, with a reduced number of messages, fewer message rounds, or smaller message sizes. See the references for details. Further optimizations are available when the semantics of State Machine operations are accounted for (such as Read vs. Write operations); see the references on Generalized Paxos. Sending Outputs Client requests are interpreted as Inputs to the State Machine, and processed into Outputs in the appropriate order. Each replica will generate an Output independently. Non-faulty replicas will always produce the same Output. Before the client response can be sent, faulty Outputs must be filtered out. Typically, a majority of the Replicas will return the same Output, and this Output is sent as the response to the client. System Failure If there is no majority of replicas with the same Output, or if fewer than a majority of replicas return an Output, a system failure has occurred. The client response must be the unique Output: FAIL. Auditing and Failure Detection The permanent, unplanned compromise of a replica is called a Failure. Proof of failure is difficult to obtain, as the replica may simply be slow to respond, or even lie about its status. Non-faulty replicas will always contain the same State and produce the same Outputs. This invariant enables failure detection by comparing States and Outputs of all replicas. Typically, a replica with State or Output which differs from the majority of replicas is declared faulty. A common implementation is to pass checksums of the current replica State and recent Outputs among servers. An Audit process at each server restarts the local replica if a deviation is detected. Cryptographic security is not required for checksums. It is possible that the local server is compromised, or that the Audit process is faulty, and the replica continues to operate incorrectly. This case is handled safely by the Output filter described previously (see Sending Outputs). Appendix: extensions Input log In a system with no failures, the Inputs may be discarded after being processed by the State Machine. Realistic deployments must compensate for transient non-failure behaviors of the system such as message loss, network partitions, and slow processors. One technique is to store the series of Inputs in a log. During times of transient behavior, replicas may request copies of a log entry from another replica in order to fill in missing Inputs. In general the log is not required to be persistent (it may be held in memory). A persistent log may compensate for extended transient periods, or support additional system features such as Checkpoints and Reconfiguration. Checkpoints If left unchecked, a log will grow until it exhausts all available storage resources. For continued operation, it is necessary to forget log entries. In general a log entry may be forgotten when its contents are no longer relevant (for instance, if all replicas have processed an Input, the knowledge of the Input is no longer needed). A common technique to control log size is to store a duplicate State (called a Checkpoint), then discard any log entries which contributed to the checkpoint.
This saves space when the duplicated State is smaller than the size of the log. Checkpoints may be added to any State Machine by supporting an additional Input called CHECKPOINT. Each replica maintains a checkpoint in addition to the current State value. When the log grows large, a replica submits the CHECKPOINT command just like a client request. The system will ensure non-faulty replicas process this command in the same order, after which all log entries before the checkpoint may be discarded. In a system with checkpoints, requests for log entries occurring before the checkpoint are ignored. Replicas which cannot locate copies of a needed log entry are faulty and must re-join the system (see Reconfiguration). Reconfiguration Reconfiguration allows replicas to be added and removed from a system while client requests continue to be processed. Planned maintenance and replica failure are common examples of reconfiguration. Reconfiguration involves Quitting and Joining. Quitting When a server detects its State or Output is faulty (see Auditing and Failure Detection), it may selectively exit the system. Likewise, an administrator may manually execute a command to remove a replica for maintenance. A new Input is added to the State Machine called QUIT. A replica submits this command to the system just like a client request. All non-faulty replicas remove the quitting replica from the system upon processing this Input. During this time, the replica may ignore all protocol messages. If a majority of non-faulty replicas remain, the quit is successful. If not, there is a System Failure. Joining After quitting, a failed server may selectively restart or re-join the system. Likewise, an administrator may add a new replica to the group for additional capacity. A new Input is added to the State Machine called JOIN. A replica submits this command to the system just like a client request. All non-faulty replicas add the joining node to the system upon processing this Input. A new replica must be up-to-date on the system's State before joining (see State Transfer). State Transfer When a new replica is made available or an old replica is restarted, it must be brought up to the current State before processing Inputs (see Joining). Logically, this requires applying every Input from the dawn of the system in the appropriate order. Typical deployments short-circuit the logical flow by performing a State Transfer of the most recent Checkpoint (see Checkpoints). This involves directly copying the State of one replica to another using an out-of-band protocol. A checkpoint may be large, requiring an extended transfer period. During this time, new Inputs may be added to the log. If this occurs, the new replica must also receive the new Inputs and apply them after the checkpoint is received. Typical deployments add the new replica as an observer to the ordering protocol before beginning the state transfer, allowing the new replica to collect Inputs during this period. Optimizing State Transfer Common deployments reduce state transfer times by sending only State components which differ. This requires knowledge of the State Machine internals. Since state transfer is usually an out-of-band protocol, this assumption is not difficult to achieve. Compression is another feature commonly added to state transfer protocols, reducing the size of the total transfer. Leader Election (for Paxos) Paxos is a protocol for solving consensus, and may be used as the protocol for implementing Consensus Order. 
Paxos requires a single leader to ensure liveness. That is, one of the replicas must remain leader long enough to achieve consensus on the next operation of the state machine. System behavior is unaffected if the leader changes after every instance, or if the leader changes multiple times per instance. The only requirement is that one replica remains leader long enough to move the system forward. Conflict resolution In general, a leader is necessary only when there is disagreement about which operation to perform, and if those operations conflict in some way (for instance, if they do not commute). When conflicting operations are proposed, the leader acts as the single authority to set the record straight, defining an order for the operations, allowing the system to make progress. Paxos remains safe even if multiple replicas believe they are leaders at the same time. This tolerance makes leader election for Paxos very simple, and any algorithm which guarantees an 'eventual leader' will work. Historical background A number of researchers published articles on the replicated state machine approach in the early 1980s. Anita Borg described an implementation of a fault tolerant operating system based on replicated state machines in a 1983 paper "A message system supporting fault tolerance". Leslie Lamport also proposed the state machine approach, in his 1984 paper on "Using Time Instead of Timeout In Distributed Systems". Fred Schneider later elaborated the approach in his paper "Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial". Ken Birman developed the virtual synchrony model in a series of papers published between 1985 and 1987. The primary reference to this work is "Exploiting Virtual Synchrony in Distributed Systems", which describes the Isis Toolkit, a system that was used to build the New York and Swiss Stock Exchanges, French Air Traffic Control System, US Navy AEGIS Warship, and other applications. Recent work by Miguel Castro and Barbara Liskov used the state machine approach in what they call a "Practical Byzantine fault tolerance" architecture that replicates especially sensitive services using a version of Lamport's original state machine approach, but with optimizations that substantially improve performance. Most recently, there has also been the creation of the BFT-SMaRt library, a high-performance Byzantine fault-tolerant state machine replication library developed in Java. This library implements a protocol very similar to PBFT's, plus complementary protocols which offer state transfer and on-the-fly reconfiguration of hosts (i.e., JOIN and LEAVE operations). BFT-SMaRt is the most recent effort to implement state machine replication, still being actively maintained. Raft, a consensus-based algorithm, was developed in 2013. Motivated by PBFT, Tendermint BFT was introduced for partially asynchronous networks and it is mainly used for Proof of Stake blockchains. References External links Replicated state machines video on MIT TechTV Apache Bookkeeper a replicated log service which can be used to build replicated state machines Distributed computing problems Data synchronization Fault-tolerant computer systems
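The determinism requirement and the output filter described above can be illustrated with a toy model. The sketch below is not drawn from any production system: the single-integer State, the addition transition, and the three-replica majority vote are assumptions chosen to keep the example compact.

#include <stdio.h>

#define REPLICAS 3

/* A trivial deterministic state machine: State and Input are ints,
   the transition adds the input, the output reports the new state. */
typedef struct { int state; } Machine;

static int step(Machine *m, int input) {
    m->state += input;          /* transition function */
    return m->state;            /* output function     */
}

int main(void) {
    Machine replica[REPLICAS] = {{0}, {0}, {0}};   /* all start at Start */
    int inputs[] = {5, -2, 7, 1};                   /* agreed input order */
    int n = sizeof inputs / sizeof inputs[0];

    for (int k = 0; k < n; k++) {
        int out[REPLICAS];
        for (int r = 0; r < REPLICAS; r++)
            out[r] = step(&replica[r], inputs[k]);

        /* Filter faulty outputs: with 2F+1 = 3 replicas, a majority of
           matching outputs masks one faulty replica. */
        int answer = (out[1] == out[2]) ? out[1] : out[0];
        printf("input %d -> output %d\n", inputs[k], answer);
    }
    return 0;
}

Because step is deterministic, any divergence among the out[] values immediately identifies a faulty replica, which is exactly the invariant the Auditing and Failure Detection section relies on.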
State machine replication
[ "Mathematics", "Technology", "Engineering" ]
3,358
[ "Distributed computing problems", "Reliability engineering", "Computational problems", "Computer systems", "Fault-tolerant computer systems", "Mathematical problems" ]
5,407,355
https://en.wikipedia.org/wiki/Beam%20dump
A beam dump, also known as a beam block, a beam stop, or a beam trap, is a device designed to absorb the energy of photons or other particles within an energetic beam. Types of beam dumps Beam blocks Beam blocks are simple optical elements that absorb a beam of light using a material with strong absorption and low reflectance. Materials commonly used for beam blocks include certain types of acrylic paint, carbon nanotubes, anodized aluminum, and nickel-phosphate coatings. Beam traps Beam traps are used when it is important that there is no reflectance. Beam traps can incorporate materials used for beam blocks in their design to further reduce the possibility of reflectance. Charged-particle beam dumps The purpose of a charged-particle beam dump is to safely absorb a beam of charged particles such as electrons, protons, nuclei, or ions. This is necessary when, for example, a circular particle accelerator has to be shut down. Dealing with the heat deposited can be an issue, since the energies of the beams to be absorbed can run into the megajoules. An example of a charged-particle beam dump is the one used by CERN for the Super Proton Synchrotron. Currently, the SPS uses a beam dump that consists of graphite, molybdenum, and tungsten surrounded by concrete, marble, and cast-iron shielding. References Accelerator physics Optical devices Laser applications
Beam dump
[ "Physics", "Materials_science", "Engineering" ]
290
[ "Glass engineering and science", "Applied and interdisciplinary physics", "Optical devices", "Experimental physics", "Accelerator physics" ]
5,407,398
https://en.wikipedia.org/wiki/C%20date%20and%20time%20functions
The C date and time functions are a group of functions in the standard library of the C programming language implementing date and time manipulation operations. They provide support for time acquisition, conversion between date formats, and formatted output to strings. History The format string used in strftime traces back to at least PWB/UNIX 1.0, released in 1977. Its date system command includes various formatting options. In 1989, the ANSI C standard was released, including strftime and other date and time functions. Overview of functions The C date and time operations are defined in the time.h header file (ctime header in C++). The timespec_get function and a wider variety of time bases were originally proposed by Markus Kuhn, but only the TIME_UTC base was accepted; the proposed functionality was, however, added to C++ in 2020 in std::chrono. Example The following C source code prints the current time to the standard output stream.

#include <time.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    time_t current_time;
    char* c_time_string;

    /* Obtain current time. */
    current_time = time(NULL);

    if (current_time == ((time_t)-1))
    {
        (void) fprintf(stderr, "Failure to obtain the current time.\n");
        exit(EXIT_FAILURE);
    }

    /* Convert to local time format. */
    c_time_string = ctime(&current_time);

    if (c_time_string == NULL)
    {
        (void) fprintf(stderr, "Failure to convert the current time.\n");
        exit(EXIT_FAILURE);
    }

    /* Print to stdout. ctime() has already added a terminating newline character. */
    (void) printf("Current time is %s", c_time_string);

    exit(EXIT_SUCCESS);
}

The output is: Current time is Thu Sep 15 21:18:23 2016 See also Unix time Year 2038 problem References External links C standard library Time
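Where the fixed ctime format is not wanted, strftime (mentioned above) formats the broken-down time explicitly. A minimal sketch follows; the format string chosen here is arbitrary:

#include <stdio.h>
#include <time.h>

int main(void)
{
    char buf[64];
    time_t t = time(NULL);
    struct tm *lt = localtime(&t);   /* broken-down local time */

    /* strftime returns 0 if the buffer is too small. */
    if (lt != NULL &&
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", lt) > 0)
        printf("Current time is %s\n", buf);
    return 0;
}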
C date and time functions
[ "Physics", "Mathematics" ]
462
[ "Physical quantities", "Time", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities" ]
5,407,502
https://en.wikipedia.org/wiki/Rosa%20%C3%97%20damascena
Rosa × damascena (Latin for damascene rose), more commonly known as the Damask rose, or sometimes as the Iranian rose, Bulgarian rose, Taif rose, Emirati rose, Ispahan rose, Castile rose, and Đulbešećerka (Bosnia and Herzegovina and the Balkans), is a rose hybrid, derived from Rosa gallica and Rosa moschata. DNA analysis has shown that a third species, Rosa fedtschenkoana, has made some genetic contributions to the Damask rose. The flowers are renowned for their fine fragrance, and are commercially harvested for rose oil (either "rose otto" or "rose absolute") used in perfumery and to make rose water and "rose concrete". The flower petals are also edible. They may be used to flavor food, as a garnish, as an herbal tea, and preserved in sugar as gulkand. It is the national flower of Iran. In 2019, the Damascus rose was inscribed on the UNESCO Intangible Cultural Heritage Lists as an element of Syrian cultural heritage. Description The Damask rose is a deciduous shrub, its stems densely armed with stout, curved prickles and stiff bristles. The leaves are pinnate, with five (rarely seven) leaflets. The roses are a light to moderate pink to light red. The relatively small flowers grow in groups. The bush has an informal shape. It is considered an important type of Old Rose, and also important for its prominent place in the pedigree of many other types. Varieties The hybrid is divided into two varieties: Summer Damasks (R. × damascena nothovar. damascena) have a short flowering season, only in the summer. Autumn Damasks (R. × damascena nothovar. semperflorens (Duhamel) Rowley) have a longer flowering season, extending into the autumn; they are otherwise not distinguishable from the summer damasks. The hybrid Rosa × centifolia is derived in part from Rosa × damascena, as are Bourbon, Portland and hybrid perpetual roses. The cultivar known as Rosa gallica forma trigintipetala or Rosa damascena 'Trigintipetala' is considered to be a synonym of Rosa × damascena. 'Celsiana' is a flowering semi-double variety. History Rosa × damascena is a cultivated flower that is not found growing wild. Recent genetic tests indicate that it is a hybrid of (R. moschata × R. gallica) crossed with the pollen of Rosa fedtschenkoana, which indicates a probable origin in the foothills of central Asia or Iran. The French Crusader Robert de Brie, who took part in the Siege of Damascus in 1148 during the Second Crusade, is sometimes credited with bringing the Damask rose from Syria to Europe. The name of the rose refers to the city of Damascus in Syria, known for its steel (Damask steel), fabrics (Damask) and roses. Other accounts state that the ancient Romans brought it to their colonies in England, and a third account is that the physician of King Henry VIII, named as Thomas Linacre, gifted him one circa 1540, although this latter claim is of dubious veracity, as Linacre died in 1524, 16 years before the rose's supposed introduction to the royal garden. There is a history of fragrance production in Kabul Province of Afghanistan from the Damask rose. An attempt has been made to restore this industry as an alternative for farmers who produce opium. The flower, known in Hawaiian as Lokelani, is the official flower of the Island of Maui. Nirad Chaudhuri, the Bengali writer, recalls that Hindus in East Bengal did not cultivate it because it was "looked upon as an Islamic flower".
Cultivation Rosa × damascena is optimally cultivated in hedge rows to help protect the blooms from wind damage and to facilitate harvesting them. In Bulgaria, damask roses are grown in long hedges, while in Turkey, individual plants are spaced apart along trenches. Gathering the flowers is intense manual labor. The harvesting period depends on weather conditions and location: from as long as a month in cooler conditions to as short as 16–20 days in hotter seasons. Rose oil Iran, Bulgaria and Turkey are the major producers of rose oil from the different cultivars of Rosa × damascena. France and India also contribute significantly to the world market. The cultivation of the "Bulgarian rose" as Rosa × damascena has been developed since Roman times. It is cultivated for commercial use in an area in the vicinity of Kazanlak and Karlovo in Bulgaria called the "Valley of Roses". The distillate from these roses is called "Bulgarian rose oil" and "Bulgarian rose otto". While families still operate their own small distilleries and produce what is denominated "village oil", the commercialization of rose oil as a high-quality product is carefully regulated by a state cooperative in the Isparta region of Turkey. The roses are still grown by the small family farms, but the flowers are brought to stills established and regulated by the cooperative for distillation and quality control. Culinary uses Damask roses are used in cooking as a flavouring ingredient or spice. They are an ingredient in the spice mixture denominated ras el hanout. Rose water and powdered roses are used in Middle Eastern and Indian cuisine. Rose water is often sprinkled on meat dishes, while rose powder is added to sauces. Chicken with rose is a popular dish in Middle Eastern cuisine. Whole flowers, or petals, are also used in the herbal tea zuhurat. The most popular use, however, is in the flavoring of desserts such as ice cream, jam, Turkish delights, rice pudding, yogurt, etc. For centuries, the Damask rose has symbolized beauty and love. The fragrance of the rose has been captured and preserved in the form of rose water by a method that can be traced to ancient times in the Middle East and later to the Indian subcontinent. Modern Western cookery does not use roses or rose water much. However, it was a popular ingredient in ancient times and continued to be popular well into the Renaissance. It was most commonly used in desserts, and still is a flavour in traditional desserts such as marzipan or turrón. It has seen some revival in television cooking in the twenty-first century. See also Rose Oil Miracle of the roses References External links Gernot Katzer's Spice Dictionary - Damask Rose Rosa harvesting in Meimand; Photos. Roses of Constantinople - Damask Rose Kazanlik Herbs Spices Medicinal plants Flora of Pakistan Hybrid plants damascena Taxa named by Philip Miller
Rosa × damascena
[ "Biology" ]
1,396
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
5,407,575
https://en.wikipedia.org/wiki/Cryptographic%20log%20on
Cryptographic log-on (CLO) is a process that uses Common Access Cards (CAC) and embedded Public Key Infrastructure (PKI) certificates to authenticate a user's identity to a workstation and network. It replaces usernames and passwords for identifying and authenticating users. To log on cryptographically to a CLO-enabled workstation, users simply insert their CAC into their workstation's CAC reader and provide their Personal Identification Number (PIN). The Navy/Marine Corps Intranet, among many other secure networks, uses CLO. References Computer access control
Cryptographic log on
[ "Technology", "Engineering" ]
126
[ "Computer security stubs", "Computing stubs", "Cybersecurity engineering", "Computer access control" ]
5,407,581
https://en.wikipedia.org/wiki/Mucous%20membrane%20of%20the%20soft%20palate
The mucous membrane of the soft palate is thin, and covered with stratified squamous epithelium on both surfaces, except near the pharyngeal ostium of the auditory tube, where it is columnar and ciliated. According to Klein, the mucous membrane on the nasal surface of the soft palate in the fetus is covered throughout by columnar ciliated epithelium, which subsequently becomes squamous; some anatomists state that it is covered with columnar ciliated epithelium, except at its free margin, throughout life. Beneath the mucous membrane on the oral surface of the soft palate is a considerable amount of adenoid tissue. The palatine glands form a continuous layer on its posterior surface and around the uvula. They are primarily mucus-secreting glands, as opposed to serous or mixed secreting glands. References Membrane biology
Mucous membrane of the soft palate
[ "Chemistry" ]
190
[ "Membrane biology", "Molecular biology" ]
2,936,393
https://en.wikipedia.org/wiki/Contact%20explosive
A contact explosive is a chemical substance that explodes violently when it is exposed to a relatively small amount of energy (e.g. friction, pressure, sound, light). Though different contact explosives have varying degrees of energy sensitivity, they are all much more sensitive than other kinds of explosives. Contact explosives are part of a group of explosives called primary explosives, which are also very sensitive to stimuli but not to the degree of contact explosives. The extreme sensitivity of contact explosives is due to either chemical composition, bond type, or structure. Types Common contact explosives include nitrogen triiodide, silver fulminate, acetone peroxide, and nitroglycerin. Reasons for instability Composition Presence of nitrogen Explosives that are nitrogen-based are incredibly volatile due to the stability of nitrogen in its diatomic state, N2. Most organic explosives are explosive because they contain nitrogen; such compounds are defined as nitro compounds. Nitro compounds are explosive because, although the diatomic form of nitrogen is very stable (the triple bond that holds N2 together is very strong and therefore has a great deal of bond energy), the nitro compounds themselves are unstable, as the bonds between nitrogen atoms and other atoms in nitro compounds are weak by comparison. Therefore, little energy is required to overcome these weak bonds, but a great deal of energy is released in the exothermic process in which the strong triple bonds in N2 are formed. The rapidity of the reaction, due to the weakness of the bonds in nitro compounds, and the high quantity of overall energy released, due to the much higher strength of the triple bonds, produce the explosive qualities of these compounds. Oxidizer and fuel Some contact explosives contain both an oxidizer and a fuel in their composition. Chemicals like gasoline, a fuel, burn rather than explode because they must come into contact with atmospheric oxygen for the combustion reaction. However, if the compound already contains both the oxidant and the fuel, it produces a much faster and more violent reaction. Bonds and structure The structures and bonds that make up a contact explosive contribute to its instability. Covalent compounds with a very unequal sharing of electrons can fall apart very easily and explosively. Nitrogen triiodide is a perfect example of this property. The three huge iodine atoms are attached to one small nitrogen atom, which means that the atoms are held together by very weak bonds. The weak bond between the atoms is like a thread just waiting to break. Therefore, any small amount of applied energy cuts this thread and releases the iodine and nitrogen atoms to react, allowing the reaction to occur quickly and release a large amount of energy. The shape of the contact explosive molecule plays a role in its instability as well. Using nitrogen triiodide as an example again, its pyramidal shape forces the three iodine atoms to be incredibly close to each other. The shape further strains the already weak bonds that hold this molecule together. Uses Contact explosives are used in a variety of fields. Military Militaries use a variety of contact explosives in combat. Some can be manufactured into different types of bombs, tactical grenades, and even explosive bullets. Dry picric acid, which is more powerful than TNT, was used in blasting charges and artillery shells. Many contact explosives are used in detonators.
For devices based on secondary explosives, contact explosives are used in the detonators to set off an explosive chain reaction that eventually initiates the secondary explosive. Compounds like lead azide are used to manufacture bullets that explode into shrapnel on impact. Flash powders are used in a variety of military and police tactical pyrotechnics. Stun grenades, flash-bangs, and flares all use flash powder to create the bright, flashing light and loud noise that disorient the enemy. On the other hand, many of these cheap, volatile contact explosives are also used in improvised explosive devices (IEDs) created by terrorists and suicide bombers. For example, acetone peroxide passes through explosive detectors and is incredibly powerful, unstable, and deadly. Evidence for the instability of these IEDs lies in the multiple reports of premature or unintended IED explosions. However, when these explosives are used as intended, they have devastating consequences. The July 7, 2005, London bombings, the 2015 Paris attacks, and the 2016 Brussels bombings all used explosives that contained acetone peroxide. Medicine Angina pectoris, a symptom of ischaemic heart disease, is treated with nitroglycerin. Nitroglycerin is known as a vasodilator. Vasodilators work by relaxing the heart's blood vessels so the heart does not need to work as hard. Picric acid specifically has been used for burn treatment and as an antiseptic. Theatrical/fireworks The same flash powder used for military tactical pyrotechnics can also be used for several theatrical special effects. It is used to produce loud, bright flashes of light for effect. Though some flash powders are too volatile and dangerous to be safely used, there are milder compounds that are still incorporated into performances today. Silver fulminate is used to make noise-makers, small contact poppers, and several other novelty fireworks. It is most widely used in bang snaps. In these small explosives, a minuscule amount of silver fulminate is encased in gravel and cigarette paper. Even with this small amount of silver fulminate, it produces a loud, sharp bang. See also Shock sensitivity References External links List of shock-sensitive materials Explosives
Contact explosive
[ "Chemistry" ]
1,129
[ "Explosives", "Explosions" ]
2,936,453
https://en.wikipedia.org/wiki/Polyisocyanurate
Polyisocyanurate (), also referred to as PIR, polyiso, or ISO, is a thermoset plastic typically produced as a foam and used as rigid thermal insulation. The starting materials are similar to those used in polyurethane (PUR) except that the proportion of methylene diphenyl diisocyanate (MDI) is higher and a polyester-derived polyol is used in the reaction instead of a polyether polyol. The resulting chemical structure is significantly different, with the isocyanate groups on the MDI trimerising to form isocyanurate groups which the polyols link together, giving a complex polymeric structure. Manufacturing The reaction of MDI and polyol takes place at higher temperatures compared with the reaction temperature for the manufacture of PUR. At these elevated temperatures and in the presence of specific catalysts, MDI will first react with itself, producing a stiff, ring molecule, which is a reactive intermediate (a tri-isocyanate isocyanurate compound). Remaining MDI and the tri-isocyanate react with polyol to form a complex poly(urethane-isocyanurate) polymer (hence the use of the abbreviation PUI as an alternative to PIR), which is foamed in the presence of a suitable blowing agent. This isocyanurate polymer has a relatively strong molecular structure, because of the combination of strong chemical bonds, the ring structure of isocyanurate and high cross-link density, each contributing to the greater stiffness than found in comparable polyurethanes. The greater bond strength also means these bonds are more difficult to break, and as a result a PIR foam is chemically and thermally more stable: breakdown of isocyanurate bonds is reported to start above 200 °C, compared with urethane at 100 to 110 °C. PIR typically has an MDI/polyol ratio, also called its index (based on isocyanate/polyol stoichiometry to produce urethane alone), higher than 180. By comparison, PUR indices are normally around 100. As the index increases, material stiffness and brittleness also increase, although the correlation is not linear. Depending on the product application, greater stiffness or greater chemical and/or thermal stability may be desirable. As such, PIR manufacturers can offer multiple products with identical densities but different indices in an attempt to achieve optimal end-use performance. Uses PIR is typically produced as a foam and used as rigid thermal insulation. Its thermal conductivity has a typical value of 0.023 W/(m·K) (0.16 BTU·in/(hr·ft2·°F)), depending on the perimeter:area ratio. PIR foam panels laminated with pure embossed aluminium foil are used for fabrication of pre-insulated duct that is used for heating, ventilation and air conditioning systems. Prefabricated PIR sandwich panels are manufactured with corrosion-protected, corrugated steel facings bonded to a core of PIR foam and used extensively as roofing insulation and vertical walls (e.g. for warehousing, factories, office buildings etc.). Other typical uses for PIR foams include industrial and commercial pipe insulation, and carving/machining media (competing with expanded polystyrene and rigid polyurethane foams). Effectiveness of the insulation of a building envelope can be compromised by gaps resulting from shrinkage of individual panels. Manufacturing criteria require that shrinkage be limited to less than 1% (previously 2%).
Even when shrinkage is limited to substantially less than this limit, the resulting gaps around the perimeter of each panel can reduce insulation effectiveness, especially if the panels are assumed to provide a vapor/infiltration barrier. Multiple layers with staggered joints, ship-lapped joints, or tongue-and-groove joints greatly reduce these problems. Polyisocyanurates of isophorone diisocyanate are also used in the preparation of polyurethane coatings based on acrylic polyols and polyether polyols. Health hazards PIR insulation can be a mechanical irritant to skin, eyes, and the upper respiratory system during fabrication (such as dust). No statistically significant increased risks of respiratory diseases have been found in studies. Fire risk PIR is at times stated to be fire retardant, or to contain fire retardants, but such statements describe the results of "small scale tests" and "do not reflect [all] hazards under real fire conditions"; the extent of hazards from fire includes not just resistance to fire but also the scope for toxic byproducts from different fire scenarios. A 2011 study of the fire toxicity of insulating materials at the University of Central Lancashire's Centre for Fire and Hazard Science studied PIR and other commonly used materials under more realistic and wide-ranging conditions representative of a wider range of fire hazards, observing that most fire deaths result from inhalation of toxic products. The study evaluated the degree to which toxic products were released, looking at toxicity, time-release profiles, and lethality of doses released, in a range of flaming, non-flaming, and poorly ventilated fires, and concluded that PIR generally released a considerably higher level of toxic products than the other insulating materials studied (PIR > PUR > EPS > PHF; glass and stone wools were also studied). In particular, hydrogen cyanide is recognised as a significant contributor to the fire toxicity of PIR (and PUR) foams. PIR insulation board (cited as the FR4000 and the FR5000 products of Celotex, a Saint-Gobain company) was proposed for external use in the refurbishment of Grenfell Tower, London, with vertical and horizontal runs of 100 mm and 150 mm thickness respectively; subsequently "Ipswich firm Celotex confirmed it provided insulation materials for the refurbishment." On 14 June 2017 the block of flats was, within 15 minutes, enveloped in flames from the fourth floor to the top 24th floor. The public inquiry into the fire determined that the Celotex insulation materials were one of the primary causes of the rapid spread of the fire, as they were much more flammable than permitted by building regulations. Celotex deceived regulators about the fire performance of its products by secretly adding fire-retardant materials to the panels that were used during safety testing. References External links Polyisocyanurate Insulation Manufacturers Association Polyisocyanurate Insulation energy savings, by Center for the Polyurethanes Industry Continuous Insulation Resources for several types of rigid foam continuous insulation Plastics Polyurethanes Building insulation materials Thermosetting plastics
Polyisocyanurate
[ "Physics" ]
1,373
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
2,936,643
https://en.wikipedia.org/wiki/Moon%20dog
A moon dog (or moondog) or mock moon, also called a paraselene (plural paraselenae) in meteorology, is an atmospheric optical phenomenon that consists of a bright spot to one or both sides of the Moon. They are exactly analogous to sun dogs. A member of the halo family, moon dogs are caused by the refraction of moonlight by hexagonal-plate-shaped ice crystals in cirrus clouds or cirrostratus clouds. They typically appear as a pair of faint patches of light, at around 22° to the left and right of the Moon, and at the same altitude above the horizon as the Moon. They may also appear alongside 22° halos. Moon dogs are rarer than sun dogs because the Moon must be bright, about quarter moon or more, for the moon dogs to be observed. Moon dogs show little color to the unaided human eye because their light is not bright enough to activate the eye's cone cells. See also Halo (optical phenomenon) Circumhorizontal arc Circumzenithal arc Gegenschein Zodiacal light References Atmospheric optical phenomena
Moon dog
[ "Physics" ]
234
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
2,936,762
https://en.wikipedia.org/wiki/Claris%20Resolve
Claris Resolve was a spreadsheet computer program for the Apple Macintosh. It was released by Claris in 1991 and sold until 1994. In an effort to flesh out their software suite, in the early 1990s Claris wanted to introduce a spreadsheet application, and decided to buy an existing one. This was not particularly difficult, as Informix had essentially abandoned the Mac version of WingZ, and Claris was able to purchase the non-exclusive rights to the codebase. After changing the interface to conform to the GUI of their new "Pro" line of products, they released it at the MacWorld Expo Boston on June 6, 1991, as Resolve. Resolve supports a worksheet size of more than one billion cells and includes 149 built-in functions that allow users to create financial, statistical and mathematical models. Resolve also contains object-oriented, MacDraw-like drawing tools for combining illustrations, clip art, text, charts and numbers in reports. Resolve included the WingZ scripting language, renamed Resolve Script, and offered the 3D charting that WingZ had been the first to bring to the Macintosh. Resolve failed to gain significant market share due to Microsoft Excel, which had also stopped Lotus 1-2-3 from becoming popular on the Macintosh. This led to disappointing sales, and in 1993 development was stopped. On 31 March 1994 Claris stopped selling Resolve; the program was supported until 31 March 1995. Claris suggested that existing Resolve users upgrade to the spreadsheet module of ClarisWorks. Reception Resolve 1.0v3 was rated on MacUser's five-mice scale in the June 1992 issue, with the review praising the familiar interface and the scripting. References External links TidBITS#76/12-Aug-91 Claris Resolve introduction information TidBITS#216/07-Mar-94 Claris Resolve discontinuation info Claris Resolve Information and Screenshots on knubbelmac.de Macintosh-only software Spreadsheet software
Claris Resolve
[ "Mathematics" ]
397
[ "Spreadsheet software", "Mathematical software" ]
2,936,835
https://en.wikipedia.org/wiki/Trusted%20Platform%20Module
A Trusted Platform Module (TPM) is a secure cryptoprocessor that implements the ISO/IEC 11889 standard. Common uses are verifying that the boot process starts from a trusted combination of hardware and software and storing disk encryption keys. A TPM 2.0 implementation is part of the Windows 11 system requirements. History The first TPM version that was deployed was 1.1b in 2003. The Trusted Platform Module (TPM) was conceived by a computer industry consortium called the Trusted Computing Group (TCG). It evolved into TPM Main Specification Version 1.2, which was standardized by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) in 2009 as ISO/IEC 11889:2009. TPM Main Specification Version 1.2 was finalized on 3 March 2011, completing its revision. On 9 April 2014 the Trusted Computing Group announced a major upgrade to their specification entitled TPM Library Specification 2.0. The group continues work on the standard, incorporating errata, algorithmic additions and new commands, with its most recent edition published as 2.0 in November 2019. This version became ISO/IEC 11889:2015. When a new revision is released, it is divided into multiple parts by the Trusted Computing Group. Each part consists of a document that makes up the whole of the new TPM specification. Part 1 Architecture (renamed from Design Principles) Part 2 Structures of the TPM Part 3 Commands Part 4 Supporting Routines (added in TPM 2.0) Version differences While TPM 2.0 addresses many of the same use cases and has similar features, the details are different. TPM 2.0 is not backward compatible with TPM 1.2. The TPM 2.0 policy authorization includes the 1.2 HMAC, locality, physical presence, and PCR. It adds authorization based on an asymmetric digital signature, indirection to another authorization secret, counters and time limits, NVRAM values, a particular command or command parameters, and physical presence. It permits the ANDing and ORing of these authorization primitives to construct complex authorization policies. Overview The Trusted Platform Module (TPM) provides: A hardware random number generator. Facilities for the secure generation of cryptographic keys for limited uses. Remote attestation: Creates a nearly unforgeable hash key summary of the hardware and software configuration. One could use the hash to verify that the hardware and software have not been changed. The software in charge of hashing the setup determines the extent of the summary. Binding: Data is encrypted using the TPM bind key, a unique RSA key descended from a storage key. Computers that incorporate a TPM can create cryptographic keys and encrypt them so that they can only be decrypted by the TPM. This process, often called wrapping or binding a key, can help protect the key from disclosure. Each TPM has a master wrapping key, called the storage root key, which is stored within the TPM itself. User-level RSA key containers are stored with the Windows user profile for a particular user and can be used to encrypt and decrypt information for applications that run under that specific user identity. Sealed storage: Specifies the TPM state for the data to be decrypted (unsealed). Other Trusted Computing functions for the data to be decrypted (unsealed). Computer programs can use a TPM for the authentication of hardware devices, since each TPM chip has a unique and secret Endorsement Key (EK) burned in as it is produced. Security embedded in hardware provides more protection than a software-only solution.
Its use is restricted in some countries.

Uses

Platform integrity

The primary scope of the TPM is to ensure the integrity of a platform during boot time. In this context, "integrity" means "behaves as intended", and a "platform" is any computer device regardless of its operating system. This is to ensure that the boot process starts from a trusted combination of hardware and software, and continues until the operating system has fully booted and applications are running. When a TPM is used, the firmware and the operating system are responsible for ensuring integrity. For example, the Unified Extensible Firmware Interface (UEFI) can use the TPM to form a root of trust: the TPM contains several Platform Configuration Registers (PCRs) that allow secure storage and reporting of security-relevant metrics. These metrics can be used to detect changes to previous configurations and decide how to proceed. Examples of such use can be found in Linux Unified Key Setup (LUKS), BitLocker and PrivateCore vCage memory encryption. (See below.) Another example of platform integrity via TPM is in the use of Microsoft Office 365 licensing and Outlook Exchange. Another example of TPM use for platform integrity is the Trusted Execution Technology (TXT), which creates a chain of trust. It could remotely attest that a computer is using the specified hardware and software.

Disk encryption

Full disk encryption utilities, such as dm-crypt, can use this technology to protect the keys used to encrypt the computer's storage devices and to provide integrity authentication for a trusted boot pathway that includes firmware and the boot sector.

Implementations

Laptops and notebooks

In 2006, new laptops began being sold with a built-in TPM chip. In the future, the TPM could be co-located on an existing motherboard chip in computers, or in any other device where its facilities could be employed, such as a cellphone. On a PC, either the Low Pin Count (LPC) bus or the Serial Peripheral Interface (SPI) bus is used to connect to the TPM chip. The Trusted Computing Group (TCG) has certified TPM chips manufactured by Infineon Technologies, Nuvoton, and STMicroelectronics, and has assigned TPM vendor IDs to Advanced Micro Devices, Atmel, Broadcom, IBM, Infineon, Intel, Lenovo, National Semiconductor, Nationz Technologies, Nuvoton, Qualcomm, Rockchip, Standard Microsystems Corporation, STMicroelectronics, Samsung, Sinosun, Texas Instruments, and Winbond.

TPM 2.0

There are five different types of TPM 2.0 implementations (listed in order from most to least secure):
Discrete TPMs are dedicated chips that implement TPM functionality in their own tamper-resistant semiconductor package. They are the most secure: packages certified to FIPS 140 level 3 physical security offer a resistance to attack that routines implemented in software cannot match, and their packages are required to implement some tamper resistance. For example, the TPM for the brake controller in a car is protected from hacking by sophisticated methods.
Integrated TPMs are part of another chip. While they use hardware that resists software bugs, they are not required to implement tamper resistance. Intel has integrated TPMs in some of its chipsets.
Firmware TPMs (fTPMs) are firmware-based (e.g. UEFI) solutions that run in a CPU's trusted execution environment. Intel, AMD and Qualcomm have implemented firmware TPMs.
Virtual TPMs (vTPMs) are provided by hypervisors and rely on isolated execution environments, hidden from the software running inside virtual machines, to secure their code from that software. They can provide a security level comparable to a firmware TPM. Google Cloud Platform has implemented vTPM.
Software TPMs are software emulators of TPMs that run with no more protection than a regular program gets within an operating system. They depend entirely on the environment that they run in, so they provide no more security than what can be provided by the normal execution environment. They are useful for development purposes.

Open source

The official TCG reference implementation of the TPM 2.0 specification has been developed by Microsoft. It is licensed under the BSD License and the source code is available on GitHub. In 2018 Intel open-sourced its Trusted Platform Module 2.0 (TPM2) software stack with support for Linux and Microsoft Windows. The source code is hosted on GitHub and licensed under the BSD License. Infineon funded the development of an open source TPM middleware that complies with the Software Stack (TSS) Enhanced System API (ESAPI) specification of the TCG; it was developed by the Fraunhofer Institute for Secure Information Technology (SIT). IBM's Software TPM 2.0 is an implementation of the TCG TPM 2.0 specification. It is based on the TPM specification Parts 3 and 4 and source code donated by Microsoft. It contains additional files to complete the implementation. The source code is hosted on SourceForge and GitHub and licensed under the BSD License. In 2022, AMD announced that under certain circumstances its fTPM implementation causes performance problems. A fix is available in the form of a BIOS update.

Reception

The Trusted Computing Group (TCG) has faced resistance to the deployment of this technology in some areas, where some authors see possible uses not specifically related to Trusted Computing, which may raise privacy concerns. The concerns include the abuse of remote validation of software, whereby the manufacturer rather than the user decides what software is allowed to run, and possible ways to follow actions taken by the user being recorded in a database, in a manner that is completely undetectable to the user.

The TrueCrypt disk encryption utility, as well as its derivative VeraCrypt, do not support TPM. The original TrueCrypt developers were of the opinion that the exclusive purpose of the TPM is "to protect against attacks that require the attacker to have administrator privileges, or physical access to the computer". An attacker who has physical or administrative access to a computer can circumvent the TPM, e.g., by installing a hardware keystroke logger, by resetting the TPM, or by capturing memory contents and retrieving TPM-issued keys. The condemning text goes so far as to claim that the TPM is entirely redundant. The VeraCrypt publisher has reproduced the original allegation with no changes other than replacing "TrueCrypt" with "VeraCrypt". The author is right that, after achieving either unrestricted physical access or administrative privileges, it is only a matter of time before other security measures in place are bypassed. However, stopping an attacker in possession of administrative privileges has never been one of the goals of the TPM, and the TPM can stop some physical tampering.
In 2015 Richard Stallman suggested replacing the term "trusted computing" with the term "treacherous computing", due to the danger that the computer can be made to systematically disobey its owner if the cryptographic keys are kept secret from them. He also considers that TPMs available for PCs in 2015 are not currently dangerous, and that there is no reason not to include one in a computer or support it in software, due to failed attempts by the industry to use that technology for DRM; but that the TPM2 released in 2022 is precisely the "treacherous computing" threat he had warned of.

In August 2023, Linus Torvalds, frustrated with AMD fTPM's stuttering bugs, opined, "Let's just disable the stupid fTPM hwrnd thing." He said the CPU-based random number generator, RDRAND, was equally suitable, despite having its share of bugs. Writing for Neowin, Sayan Sen quoted Torvalds' bitter comments and called him "a man with a strong opinion."

Security issues

In 2010 Christopher Tarnovsky presented an attack against TPMs at Black Hat Briefings, where he claimed to be able to extract secrets from a single TPM. He was able to do this after six months of work, by inserting a probe and spying on an internal bus of the Infineon SLE 66 CL PC.

In case of physical access, computers with TPM 1.2 are vulnerable to cold boot attacks as long as the system is on or can be booted without a passphrase from shutdown, sleep or hibernation, which is the default setup for Windows computers with BitLocker full disk encryption.

In 2009, the concept of shared authorisation data in TPM 1.2 was found to be flawed: an adversary given access to the data could spoof responses from the TPM. A fix was proposed, which has been adopted in the specifications for TPM 2.0.

In 2015, as part of the Snowden revelations, it was revealed that in 2010 a US CIA team claimed at an internal conference to have carried out a differential power analysis attack against TPMs that was able to extract secrets.

Trusted Boot (tboot) distributions before November 2017 are affected by a dynamic root of trust for measurement (DRTM) attack, which affects computers running Intel's Trusted eXecution Technology (TXT) for the boot-up routine.

In October 2017, it was reported that a code library developed by Infineon, which had been in widespread use in its TPMs, contained a vulnerability, known as ROCA, which generated weak RSA key pairs that allowed private keys to be inferred from public keys. As a result, all systems depending upon the privacy of such weak keys are vulnerable to compromise, such as identity theft or spoofing. Cryptosystems that store encryption keys directly in the TPM without blinding could be at particular risk to these types of attacks, as passwords and other factors would be meaningless if the attacks can extract encryption secrets. Infineon has released firmware updates for its TPMs to manufacturers who have used them.

In 2018, a design flaw in the TPM 2.0 specification for the static root of trust for measurement (SRTM) was reported. It allows an adversary to reset and forge platform configuration registers, which are designed to securely hold measurements of the software used for bootstrapping a computer. Fixing it requires hardware-specific firmware patches. An attacker abuses power interrupts and TPM state restores to trick the TPM into thinking that it is running on non-tampered components.
In 2021, the Dolos Group showed an attack on a discrete TPM, where the TPM chip itself had some tamper resistance, but the other endpoints of its communication bus did not. They read a full-disk-encryption key as it was transmitted across the motherboard, and used it to decrypt the laptop's SSD. Availability Currently, a TPM is provided by nearly all PC and notebook manufacturers in their products. Vendors include: Infineon provides both TPM chips and TPM software, which are delivered as OEM versions with new computers as well as separately by Infineon for products with TPM technology which comply with TCG standards. For example, Infineon licensed TPM management software to Broadcom Corp. in 2004. Microchip (formerly Atmel) manufactured TPM devices that it claims to be compliant to the Trusted Platform Module specification version 1.2 revision 116 and offered with several interfaces (LPC, SPI, and I2C), modes (FIPS 140-2 certified and standard mode), temperature grades (commercial and industrial), and packages (TSSOP and QFN). Its TPMs support PCs and embedded devices. It also provides TPM development kits to support integration of its TPM devices into various embedded designs. Nuvoton Technology Corporation provides TPM devices for PC applications. Nuvoton also provides TPM devices for embedded systems and Internet of Things (IoT) applications via I2C and SPI host interfaces. Nuvoton's TPM complies with Common Criteria (CC) with assurance level EAL 4 augmented with ALC_FLR.1, AVA_VAN.4 and ALC_DVS.2, FIPS 140-2 level 2 with Physical Security and EMI/EMC level 3 and Trusted Computing Group Compliance requirements, all supported within a single device. TPMs produced by Winbond are now part of Nuvoton. STMicroelectronics has provided TPMs for PC platforms and embedded systems since 2005. The product offering includes discrete devices with several interfaces supporting Serial Peripheral Interface (SPI) and I²C and different qualification grades (consumer, industrial and automotive). The TPM products are Common Criteria (CC) certified EAL4+ augmented with ALC_FLR.1 and AVA_VAN.5, FIPS 140-2 level 2 certified with physical security level 3 and also Trusted Computing Group (TCG) certified. There are also hybrid types; for example, TPM can be integrated into an Ethernet controller, thus eliminating the need for a separate motherboard component. Field upgrade Field upgrade is the TCG term for updating the TPM firmware. The update can be between TPM 1.2 and TPM 2.0, or between firmware versions. Some vendors limit the number of transitions between 1.2 and 2.0, and some restrict rollback to previous versions. Platform OEMs such as HP supply an upgrade tool. Since July 28, 2016, all new Microsoft device models, lines, or series (or updating the hardware configuration of an existing model, line, or series with a major update, such as CPU, graphic cards) implement, and enable by default TPM 2.0. While TPM 1.2 parts are discrete silicon components, which are typically soldered on the motherboard, TPM 2.0 is available as a discrete (dTPM) silicon component in a single semiconductor package, an integrated component incorporated in one or more semiconductor packages - alongside other logic units in the same package(s), and as a firmware (fTPM) based component running in a trusted execution environment (TEE) on a general purpose System-on-a-chip (SoC). Virtual TPM Google Compute Engine offers virtualized TPMs (vTPMs) as part of Google Cloud's Shielded VMs product. 
The libtpms library provides software emulation of a Trusted Platform Module (TPM 1.2 and TPM 2.0). It targets the integration of TPM functionality into hypervisors, primarily into QEMU.

Operating systems

Windows 11 requires TPM 2.0 support as a minimum system requirement. On many systems the TPM is disabled by default; enabling it requires changing settings in the computer's UEFI. Windows 8 and later have native support for TPM 2.0. Windows 7 can install an official patch to add TPM 2.0 support. Windows Vista through Windows 10 have native support for TPM 1.2. TPM 2.0 has been supported by the Linux kernel since version 3.20 (released as version 4.0 in 2015).

Platforms

Google includes TPMs in Chromebooks as part of their security model. Oracle ships TPMs in their X- and T-Series systems, such as the T3 or T4 series of servers; support is included in Solaris 11. In 2006, with the introduction of the first Macintosh models with Intel processors, Apple started to ship Macs with TPMs. Apple never provided an official driver, but there was a port under GPL available. Apple has not shipped a computer with a TPM since 2006. In 2011, Taiwanese manufacturer MSI launched its Windpad 110W tablet featuring an AMD CPU and an Infineon Security Platform TPM, which ships with controlling software version 3.7. The chip is disabled by default but can be enabled with the included, pre-installed software.

Virtualization

The VMware ESXi hypervisor has supported TPM since 4.x, and from 5.0 it is enabled by default. The Xen hypervisor has support for virtualized TPMs: each guest gets its own unique, emulated, software TPM. KVM, combined with QEMU, has support for virtualized TPMs; it supports passing through the physical TPM chip to a single dedicated guest. QEMU 2.11, released in December 2017, also provides emulated TPMs to guests. VirtualBox has support for virtual TPM 1.2 and 2.0 devices starting with version 7.0, released in October 2022.

Software

Microsoft operating systems Windows Vista and later use the chip in conjunction with the included disk encryption component named BitLocker. Microsoft had announced that from January 1, 2015, all computers would have to be equipped with a TPM 2.0 module in order to pass Windows 8.1 hardware certification. However, in a December 2014 review of the Windows Certification Program this was instead made an optional requirement; TPM 2.0 remained required for connected standby systems. Virtual machines running on Hyper-V can have their own virtual TPM module starting with Windows 10 1511 and Windows Server 2016. Microsoft Windows includes two TPM-related commands: tpmtool, a utility that can be used to retrieve information about the TPM, and tpmvscmgr, a command-line tool that allows creating and deleting TPM virtual smart cards on a computer.

Endorsement keys

TPM endorsement keys (EKs) are asymmetric key pairs unique to each TPM. They use the RSA and ECC algorithms. The TPM manufacturer usually provisions endorsement key certificates in TPM non-volatile memory. The certificates assert that the TPM is authentic. Starting with TPM 2.0, the certificates are in X.509 DER format. Manufacturers typically provide their certificate authority root (and sometimes intermediate) certificates on their web sites:
AMD
Infineon
Intel
NationZ
Nuvoton
ST Micro

Software libraries

To utilize a TPM, the user needs a software library that communicates with the TPM and provides a friendlier API than the raw TPM communication. Currently, there are several such open-source TPM 2.0 libraries.
Some of them also support TPM 1.2, but TPM 1.2 chips are now mostly deprecated and modern development is focused on TPM 2.0.

Typically, a TPM library provides an API with one-to-one mappings to TPM commands. The TCG specification calls this layer the System API (SAPI). This gives the user more control over TPM operations, but at the cost of considerable complexity. To hide some of that complexity, most libraries also offer simpler ways to invoke complex TPM operations. The TCG specification calls these two layers the Enhanced System API (ESAPI) and the Feature API (FAPI). There is currently only one stack that follows the TCG specification; all the other available open-source TPM libraries use their own forms of richer APIs. These TPM libraries are sometimes also called TPM stacks, because they provide the interface for the developer or user to interact with the TPM. TPM stacks abstract the operating system and transport layer, so an application can be migrated between platforms: by using a TPM stack API, the user interacts with a TPM in the same way regardless of whether the physical chip is connected to the host system over an SPI, I2C or LPC interface.

See also

AMD Platform Security Processor
ARM TrustZone
Crypto-shredding
Hardware security
Hardware security module
Hengzhi chip
Intel Management Engine
Microsoft Pluton
Next-Generation Secure Computing Base
Secure Enclave
Threat model

References

Computer hardware standards
Computer security hardware
Cryptographic hardware
Cryptographic software
Cryptography standards
ISO standards
Random number generation
Trusted computing
Trusted Platform Module
[ "Mathematics", "Technology", "Engineering" ]
4,884
[ "Cybersecurity engineering", "Computer security hardware", "Computer standards", "Trusted computing", "Cryptographic software", "Computer hardware standards", "Mathematical software" ]
2,936,950
https://en.wikipedia.org/wiki/Common-pool%20resource
In economics, a common-pool resource (CPR) is a type of good consisting of a natural or human-made resource system (e.g. an irrigation system or fishing grounds), whose size or characteristics make it costly, but not impossible, to exclude potential beneficiaries from obtaining benefits from its use. Unlike pure public goods, common-pool resources face problems of congestion or overuse, because they are subtractable. A common-pool resource typically consists of a core resource (e.g. water or fish), which defines the stock variable, while providing a limited quantity of extractable fringe units, which defines the flow variable. While the core resource is to be protected or nurtured in order to allow for its continuous exploitation, the fringe units can be harvested or consumed.

Examples of a Common-Pool Resource

Common-pool goods are typically regulated and nurtured in order to prevent demand from overwhelming supply and to allow for their continued exploitation. Examples of common-pool resources include forests, man-made irrigation systems, fishing grounds, and groundwater basins. For instance, fishermen have an incentive to harvest as many fish as possible, because if they do not, someone else will; so without management and regulation, fish stocks soon become depleted. And while a river might supply many cities with drinking water, manufacturing plants might be tempted to pollute it if they were not prohibited from doing so by law, because someone else would bear the costs. In California, where there is huge demand for surface water but supplies are limited, common-pool problems are exacerbated because groundwater basins are not managed at the state level. During the 2012-2016 drought, farmers with senior water rights dating back to the 19th century could use as much water as they wanted, while cities and towns had to make drastic cutbacks to water use.

In James Bay, Quebec, the beaver was an important species for food, and later for commerce once the fur trade started in 1670. Amerindian groups in the area have traditionally used resources communally and have a heritage of customary laws to regulate hunting. In the 1920s, however, the railroads brought a large influx of non-native trappers who took advantage of high fur prices, and the indigenous people lost control of their territories. Both non-native and native trappers contributed to the decline of the beaver population, prompting conservation laws to be enacted after 1930 and outsiders to be banned from trapping in James Bay. Eventually, Amerindian communities and family territories were legally recognized, and customary laws became enforceable. This restoration of local control allowed the beaver population to recover.

Since 1947, the Maine lobster catch has been remarkably stable despite predictions of resource collapse. The state government has regulations in place but does not limit the number of licenses. Exclusion in this CPR is achieved through a system of traditional fishing rights, under which one must be accepted by the community in order to go lobster fishing. Those in a community are restricted to fishing in the territory held by that community, a restriction enforced by surreptitious violence towards interlopers. Fishermen in these exclusive territories catch significantly more and larger lobsters with less effort than those in areas where territories overlap.
In the New York Bight region, a cooperative of trawl fishermen that specializes in harvesting whiting limits entry into the local fishery and establishes catch quotas among members. These quotas are based on regional market sales estimations and attempt to encourage initiative while discouraging "free-riding." The cooperative limits entry to the whiting grounds and markets through a closed membership policy and by controlling the dock space. Due to these methods, its members have access to the best whiting grounds, dominate the market during winter, and can maintain relatively high prices through supply management. The fishermen consider this type of self-regulation both flexible and effective in maintaining sustainable use.

Common property systems

A common property rights regime (not to be confused with a common-pool resource) is a particular social arrangement regulating the preservation, maintenance, and consumption of a common-pool resource. The use of the term "common property resource" to designate a type of good has been criticized, because common-pool resources are not necessarily governed by common property protocols. Examples of common-pool resources include irrigation systems, fishing grounds, pastures, forests, water and the atmosphere. A pasture, for instance, allows for a certain amount of grazing to occur each year without the core resource being harmed. In the case of excessive grazing, however, the pasture may become more prone to erosion and eventually yield less benefit to its users. Because the core resources are vulnerable, common-pool resources are generally subject to problems of congestion, overuse, pollution, and potential destruction unless harvesting or use limits are devised and enforced.

Resource systems such as pastoral areas, fishing grounds and forests are stock variables. Under favorable conditions, they can maximize the flow without harming the stock or the resource system as a whole. The resource unit, by contrast, is the amount that an individual appropriates or uses from the resource system, such as the total amount of fish caught in a fishing ground or the amount of feed consumed by livestock in pastoral areas. A resource system allows multiple people or enterprises to produce at the same time, and the process of using common-pool resources can be performed simultaneously by multiple appropriators; a given resource unit, however, cannot be used by multiple people or enterprises at the same time.

Management

The use of many common-pool resources, if managed carefully, can be extended, because the resource system forms a negative feedback loop, where the stock variable continually regenerates the fringe variable as long as the stock variable is not compromised, providing an optimum amount of consumption. However, consumption exceeding the fringe value reduces the stock variable, which in turn decreases the flow variable. If the stock variable is allowed to regenerate then the fringe and flow variables may also recover to initial levels, but in many cases the loss is irreparable. (A minimal simulation of this feedback appears below.)

Ownership

Common-pool resources may be owned by national, regional or local governments as public goods, by communal groups as common property resources, or by private individuals or corporations as private goods. When they are owned by no one, they are used as open access resources.
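To make the stock/flow feedback concrete, here is a minimal sketch in Python of a harvested stock with logistic regeneration; the growth rate, carrying capacity, and harvest levels are illustrative assumptions, not calibrated to any real resource.

```python
def simulate(stock: float, harvest: float, r: float = 0.3,
             k: float = 1000.0, years: int = 50) -> float:
    """Logistic regrowth of the core resource minus a fixed annual harvest."""
    for _ in range(years):
        stock += r * stock * (1 - stock / k) - harvest
        stock = max(stock, 0.0)  # the stock cannot go negative
    return stock

# Maximum sustainable flow for these parameters is r*k/4 = 75 units/year.
print(simulate(500, 70))  # harvest below the fringe: stock settles near 629
print(simulate(500, 90))  # harvest above the fringe: stock collapses to 0
```

Harvesting below the sustainable bound settles to a steady stock, while harvesting above it drives the stock to zero, mirroring the irreparable-loss case described above.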
Having observed a number of common-pool resources throughout the world, Elinor Ostrom noticed that many of them are governed by common property protocols — arrangements different from private property or state administration — based on self-management by a local community. Her observations contradict claims that common-pool resources must be privatized or else face destruction in the long run due to collective action problems leading to the overuse of the core resource (see also Tragedy of the commons).

Common property protocols

Common property systems of management arise when users acting independently threaten the total net benefit from a common-pool resource. In order to maintain the resource, protocols coordinate strategies to maintain it as common property instead of dividing it up into parcels of private property. Common property systems typically protect the core resource and allocate the fringe resources through complex community norms of consensus decision-making. Common resource management has to face the difficult task of devising rules that limit the amount, timing, and technology used to withdraw various resource units from the resource system. Setting the limits too high would lead to overuse and eventually to the destruction of the core resource, while setting the limits too low would unnecessarily reduce the benefits obtained by the users.

In common property systems, access to the resource is not free, and common-pool resources are not public goods. While there is relatively free but monitored access to the resource system for community members, there are mechanisms in place which allow the community to exclude outsiders from using its resource. Thus, in a common property state, a common-pool resource appears as a private good to an outsider and as a common good to an insider of the community. The resource units withdrawn from the system are typically owned individually by the appropriators. A common property good is rivaled in consumption.

Analysing the design of long-enduring CPR institutions, Elinor Ostrom identified eight design principles which are prerequisites for a stable CPR arrangement:
Clearly defined boundaries
Congruence between appropriation and provision rules and local conditions
Collective-choice arrangements allowing for the participation of most of the appropriators in the decision making process
Effective monitoring by monitors who are part of or accountable to the appropriators
Graduated sanctions for appropriators who do not respect community rules
Conflict-resolution mechanisms which are cheap and easy to access
Minimal recognition of rights to organize (e.g., by the government)
In the case of larger CPRs: organisation in the form of multiple layers of nested enterprises, with small, local CPRs at their bases

Common property systems typically function at a local level to prevent the overexploitation of a resource system from which fringe units can be extracted. In some cases, government regulations combined with tradable environmental allowances (TEAs) are used successfully to prevent excessive pollution, whereas in other cases — especially in the absence of a single government able to set limits and monitor economic activities — excessive use or pollution continues.

Adaptive governance

The management of common-pool resources is highly dependent upon the type of resource involved. An effective strategy at one location, or for one particular resource, may not necessarily be appropriate for another.
In The Challenge of Common-Pool Resources, Ostrom makes the case for adaptive governance as a method for the management of common-pool resources. Adaptive governance is suited to dealing with problems that are complex, uncertain and fragmented, as is the management of common-pool resources. Ostrom outlines five basic protocol requirements for achieving adaptive governance:
Achieving accurate and relevant information, by focusing on the creation and use of timely scientific knowledge on the part of both the managers and the users of the resource
Dealing with conflict, acknowledging the fact that conflicts will occur, and having systems in place to discover and resolve them as quickly as possible
Enhancing rule compliance, through creating responsibility for the users of a resource to monitor usage
Providing infrastructure that is flexible over time, both to aid internal operations and to create links to other resources
Encouraging adaptation and change to address errors and cope with new developments

Influential factors in the management of common-pool resources

A newer proposal for the management of CPRs is to develop autonomous organizations that are neither completely privatized nor controlled by government power, but are led and supervised by the community, managing common-pool resources alongside the government and the free market. Many factors may affect the formation and development of such autonomous organizations. Effectively identifying the factors that influence an autonomous CPR management system increases the feasibility of the system and is more conducive to the sustainable use of the resources as well. In general, four variables are very important for local common-pool resource management: (1) characteristics of the resource; (2) characteristics of the resource-dependent group; (3) the institutional model of resource management; and (4) the relationship between groups, external forces, and authorities.

The government, the market and interest groups are all considered external forces that have an impact on a CPR management system. Changes in market demand for a CPR, in particular technological innovation that increases productivity and lowers costs, can undermine the sustainability of the management system. In order to develop more resources, resource owners may seek to change the ownership of the resource by cooperating with the government, privatizing the CPR, or even cancelling the protection of CPR ownership by regulation. Such institutional changes prevent the implementation of policies that are beneficial to the majority of the population, while the power of the government and bureaucracy can be abused.

The community is responsible for supervising and administering the CPR under an autonomous management system, and the characteristics of a community can affect how the CPR is managed: (1) the size of the community: the level of cooperation decreases as the number of community members grows; (2) the allocation mechanism for the CPR: encouraging the exploitation of the least used resources and reducing the exploitation of the most used resources will effectively increase the rate of resource supply and reduce the rate of resource consumption and individual demand; (3) group identity: when people in a community have a strong sense of group identity, it helps to manage the CPR within the community.
Experimental Studies on Common Pool Resource Games Common Pool Resource (CPR) games have been a focal point in experimental research, providing insights into the dynamics and dilemmas associated with communal resource management. A foundational work introduced a conceptual framework that elucidates the strategic content of CPR dilemmas, demonstrating how theoretical constructs, such as the Prisoner's Dilemma and coordination games, apply to these behavioral challenges. Further, field experiments involving specific ecological features of CPRs, such as water irrigation, forestry, and fisheries, have revealed the impact of various resource-specific dynamics on collective action and resource management. Additionally, a study explored the external validity of CPR laboratory experiments within the context of artisanal benthic fisheries in Chile, revealing a correlation between cooperative behaviors exhibited in laboratory settings and those in real-world co-managed and open-access fisheries. These studies collectively underscore the complexity of CPR dilemmas and highlight the nuanced interplay between individual strategies, collective action, and resource sustainability, providing a multifaceted understanding of cooperation and norm internalization in the management of communal resources. Open access resources In economics, open access resources are, for the most part, rivalrous, non-excludable goods. This makes them similar to common goods during times of prosperity. Unlike many common goods, open access goods require little oversight or may be difficult to restrict access. However, as these resources are first come, first served, they may be affected by the phenomenon of the tragedy of the commons. Two possibilities may follow: a common property or an open access system. However, in a different setting, such as fishing, there will be drastically different consequences. Since fish are an open access resource, it is relatively simple to fish and profit. If fishing becomes profitable, there will be more fishers and fewer fish. Fewer fish lead to higher prices which will lead again to more fishers, as well as lower reproduction of fish. This is a negative externality and an example of problems that arise with open access goods. See also Carrying capacity Common good (economics) Commons Enclosure Exploitation of natural resources Global commons Knowledge commons Occupancy-abundance relationship Overexploitation Tragedy of the commons Tyranny of small decisions References Citations Bibliography Araral, Eduardo. (2014). Ostrom, Hardin and the Commons. A Critical Appreciation and Revisionist View. Env Science and Policy. Volume 36, Pages 1–92 (February 2014) Acheson, James, M. (1988) The Lobster Gangs of Maine. Anderson, Terry L., Grewell, J. Bishop (2000) "Property Rights Solutions for the Global Commons: Bottom-Up or Top-Down?" In: Duke Environmental Law & Policy Forum, Vol. X, No. 2, Spring 2000. Baland, Jean-Marie and Jean-Philippe Platteau (1996) Halting Degradation of Natural Resources: Is There a Role for Rural Communities? Daniels, Brigham (2007) "Emerging Commons and Tragic Institutions," Environmental Law, Vol. 37. Hess, C. and Ostrom, E. (2003), "Ideas, Artifacts, and Facilities: Information as a Common-Pool Resource", Law and Contemporary Problems 66, S. 111–146. Hess, C. and Ostrom, E. (2001), "Artifacts, Facilities, And Content: Information as a Common-pool Resource", Workshop in Political Theory and Policy Analysis. Meinzen-Dick, Ruth, Esther Mwangi, Stephan Dohrn. 2006. 
Securing the Commons. CAPRi Policy Brief 4. Washington DC: IFPRI. Ostrom, Elinor (2003) "How Types of Goods and Property Rights Jointly Affect Collective Action", Journal of Theoretical Politics, Vol. 15, No. 3, 239-270 (2003). Ostrom, Elinor (1990) "Governing the Commons. The Evolution of Institutions for Collective Action". Cambridge University Press. Ostrom, Elinor, Roy Gardner, and James Walker (1994) Rules, Games, and Common-Pool Resources. University of Michigan Press. 1994. Rose, Carol M. (2000) "Expanding the Choices for the Global Commons: Comparing Newfangled Tradable allowance schemes to Old-Fashioned Common Property Regimes". In: Duke Environmental Law & Policy Forum, Vol. X, No. 2, Spring 2000. Saunders, Pammela Q. (2011) "A Sea Change Off the Coast of Maine: Common Pool Resources as Cultural Property". In: Emory Law Journal, Vol. 60, No. 6, June 2011. Thompson, Jr., Barton H. (2000) "Tragically Difficult: The Obstacles to Governing the Commons" Environmental Law 30:241. External links Digital Library of the Commons Public vs. Private Goods Market failure Property Environmental social science concepts
Common-pool resource
[ "Environmental_science" ]
3,496
[ "Environmental social science concepts", "Environmental social science" ]
2,937,023
https://en.wikipedia.org/wiki/Canterbury%20corpus
The Canterbury corpus is a collection of files intended for use as a benchmark for testing lossless data compression algorithms. It was created in 1997 at the University of Canterbury, New Zealand, and designed to replace the Calgary corpus. The files were selected based on their ability to provide representative performance results.

Contents

In its most commonly used form, the corpus consists of 11 files, selected as "average" documents from 11 classes of documents and totaling 2,810,784 bytes.

The University of Canterbury also offers the following corpora. Additional files may be added, so results should only be reported for individual files.
The Artificial Corpus, a set of files with highly "artificial" data designed to evoke pathological or worst-case behavior. Last updated 2000 (tar timestamp).
The Large Corpus, a set of large (megabyte-size) files. Contains an E. coli genome, a King James bible, and the CIA world fact book. Last updated 1997 (tar timestamp).
The Miscellaneous Corpus. Contains one million digits of pi. Last updated 2000 (tar timestamp).

See also

Data compression

References

External links

Data compression
Test items
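The corpus is typically used by running each candidate compressor over every file and reporting per-file compression ratios. Below is a minimal sketch in Python using only standard-library codecs; the local directory name cantrbry is an assumption, and the actual files must be downloaded from the University of Canterbury.

```python
import bz2
import lzma
import zlib
from pathlib import Path

# Compare three stdlib compressors on every file of a local corpus copy.
for path in sorted(Path("cantrbry").glob("*")):
    data = path.read_bytes()
    for name, codec in (("zlib", zlib), ("bz2", bz2), ("lzma", lzma)):
        ratio = len(codec.compress(data)) / len(data)  # smaller is better
        print(f"{path.name:16s} {name:5s} {ratio:.3f}")
```

Reporting per-file ratios, rather than a single corpus-wide average, follows the corpus maintainers' advice above, since files may be added to the collections over time.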
Canterbury corpus
[ "Technology" ]
240
[ "Computing stubs", "Computer science", "Computer science stubs" ]
2,937,259
https://en.wikipedia.org/wiki/Concordance%20%28genetics%29
In genetics, concordance is the probability that a pair of individuals will both have a certain characteristic (phenotypic trait) given that one of the pair has the characteristic. Concordance can be measured with concordance rates, reflecting the odds of one person having the trait if the other does. Important clinical examples include the chance of offspring having a certain disease if the mother has it, if the father has it, or if both parents have it. Concordance among siblings is similarly of interest: what are the odds of a subsequent offspring having the disease if an older child does? In research, concordance is often discussed in the context of both members of a pair of twins. Twins are concordant when both have or both lack a given trait. The ideal example of concordance is that of identical twins, because the genome is the same, an equivalence that helps in discovering causation via deconfounding, regarding genetic effects versus epigenetic and environmental effects (nature versus nurture). In contrast, discordance occurs when a similar trait is not shared by the persons. Studies of twins have shown that genetic traits of monozygotic twins are fully concordant, whereas in dizygotic twins, half of genetic traits are concordant, while the other half are discordant. Discordant rates that are higher than concordant rates express the influence of the environment on twin traits. Studies A twin study compares the concordance rate of identical twins to that of fraternal twins. This can help suggest whether a disease or a certain trait has a genetic cause. Controversial uses of twin data have looked at concordance rates for homosexuality and intelligence. Other studies have involved looking at the genetic and environmental factors that can lead to increased LDL in women twins. Because identical twins are genetically virtually identical, it follows that a genetic pattern carried by one would very likely also be carried by the other. If a characteristic identified in one twin is caused by a certain gene, then it would also very likely be present in the other twin. Thus, the concordance rate of a given characteristic helps suggest whether or to what extent a characteristic is related to genetics. There are several problems with this assumption: A given genetic pattern may not have 100% penetrance, in which case it may have different phenotypic consequences in genetically identical individuals; Developmental and environmental conditions may be different for genetically identical individuals. If developmental and environmental conditions contribute to the development of the disease or other characteristic, there can be differences in the outcome of genetically identical individuals; The logic is further complicated if the characteristic is polygenic, i.e., caused by differences in more than one gene. Epigenetic effects can alter the genetic expressions in twins through varied factors. The expression of the epigenetic effect is typically weakest when the twins are young and increases as the identical twins grow older. Where in the absence of one or more environmental factors a condition will not develop in an individual, even with high concordance rates, the proximate cause is environmental, with strong genetic influence: thus "a substantial role of genetic factors does not preclude the possibility that the development of the disease can be modified by environmental intervention." 
So "genetic factors are assumed to contribute to the development of that disease", but cannot be assumed alone to be causal. Genotyping studies In genotyping studies where DNA is directly assayed for positions of variance (see SNP), concordance is a measure of the percentage of SNPs that are measured as identical. Samples from the same individual or identical twins theoretically have a concordance of 100%, but due to assaying errors and somatic mutations, they are usually found in the range of 99% to 99.95%. Concordance can therefore be used as a method of assessing the accuracy of a genotyping assay platform. Because a child inherits half of his or her DNA from each parent, parents and children, siblings, and fraternal (dizygotic) twins have a concordance that averages 50% using this measure. See also Michigan State University Twin Registry References Genetics studies Genetics terms
Concordance (genetics)
[ "Biology" ]
846
[ "Genetics terms" ]
2,937,533
https://en.wikipedia.org/wiki/Basal%20cell
Overview

A basal cell is a general cell type that is present in many forms of epithelial tissue throughout the body. Basal cells are located between the basement membrane and the remainder of the epithelium, effectively functioning as an anchor for the epithelial layer and an important mechanism in the maintenance of intraorgan homeostasis. Basal cells can interact with surrounding cells including neurons, the basement membrane, columnar epithelium, and underlying mesenchymal cells. They also engage in interactions with dendritic, lymphocytic, and inflammatory cells, with the majority of these interactions occurring in the lateral intercellular gap between basal cells.

Basal cells have important health implications, since the most common types of skin cancer are basal cell and squamous cell carcinomas. More than 1 million instances of these cancers, referred to as non-melanoma skin cancers (NMSC), are expected to be diagnosed in the United States each year, and the incidence is rapidly increasing. Basal and squamous cell malignancies, while seldom metastatic, can cause significant local damage and disfigurement, affecting large sections of soft tissue, cartilage, and bone.

Location

Basal cells are located in various tissues throughout the body. They sit at the bottom of epithelial tissues, generally situated directly on top of the basal lamina, above the basement membrane and below the remainder of the epithelium. Examples include:
Epidermal cells in the stratum basale
Airway basal cells, which are respiratory cells located in the respiratory epithelium (found in decreasing concentrations as airway diameter decreases)
Basal cells of prostate glands
Basal cells of the gastrointestinal tract mucosal layer

Structure

Regardless of their specific location, basal cells generally share a similar basic structure. They are usually cuboidal, polyhedral or pyramidal cells with enlarged nuclei and minimal cytoplasm. Basal cells are bound to each other by desmosomes, and to the basal lamina of the basement membrane by hemidesmosomes. These junctions help to create one tightly bound, continuous tissue layer that can endure mechanical stress and effectively function as a connection between the basement membrane and the remaining epithelial tissue.

Function

Basal cells serve two main functions:
To anchor and connect the epithelium to the basement membrane
To act as the main stem cell population for the tissue they are found in, responding to stimuli to maintain homeostasis within that tissue

While all basal cells, regardless of location, function similarly in regard to anchoring the epithelium, their specific function and mechanisms as stem cells vary by location. In general, basal cells can function as either unipotent or multipotent stem cells.

Epidermal basal cells

In the epidermis, basal cells function as unipotent stem cells. Found in the lowest layer of the epidermis, the stratum basale, basal cells continuously divide in order to replenish the squamous cells that make up the skin's surface. Every time a basal cell divides, it creates two daughter cells: one is an identical basal cell, and the other is a new somatic cell that undergoes terminal differentiation. These cells gradually get pushed up through the layers of the epidermis by the constant proliferation of new cells, gradually differentiating and flattening as they rise.
This ultimately results in functional squamous cells on the outermost layer of the epidermis, the most abundant of which are called keratinocytes. The continuous division of epidermal basal cells leads to complete epidermal turnover every 40-56 days in humans and every 8-10 days in mice. This process of proliferation and differentiation is regulated by multiple genetic and environmental factors, including a calcium gradient, vitamins A and D, epidermal growth factor (EGF), the transcription factor p63, and transforming growth factor alpha (TGF-α). Errors in the regulatory mechanisms of epidermal basal cells can cause a variety of acute and chronic ailments, including psoriasis and basal cell carcinoma, which is the most common type of skin cancer, accounting for 80% of all skin cancer cases. Due to the structural importance of the epidermis, defects in basal cell proliferation and differentiation can also contribute to deformities such as cleft lips and Gorlin syndrome.

Respiratory basal cells

In the respiratory tract, basal cells function as multipotent stem cells, capable of replenishing all of the epithelial cell types, including secretory, ciliated, and intermediate cells. They reside in the mucosal layer of the respiratory epithelium and generally remain dormant. However, when a functional epithelial cell becomes damaged, a basal cell is activated to differentiate into the appropriate cell type and replace the damaged cell. In addition to functioning as stem cells, there is novel evidence to suggest that undifferentiated basal cells also contribute to the immune functions of the respiratory epithelium by secreting RNase. This function helps to preserve the immune capabilities of the respiratory epithelium even when it is damaged and in the process of being repaired.

In the respiratory epithelium, there exists a layer of intermediate cells between the basal and differentiated cells. These intermediate cells exist in a transient state: they have begun the process of differentiation but are not yet terminally differentiated, and as such can differentiate as needed but have limited proliferative capacity. They play an important role in ensuring that the epithelium can be quickly repaired in response to damage. The process of respiratory basal cell differentiation is regulated by multiple factors, including transcription factors such as FOXJ1, FOXA3, Sox2, and p53, proteins such as LEF-1, and the interleukins IL-1α and IL-33, as well as other cytokines. However, the primary control of respiratory basal cell differentiation is the Notch signaling pathway, which is the main determinant of what the basal cell differentiates into: high levels of Notch activity lead to differentiation into a secretory cell, whereas low levels lead to differentiation into a ciliated cell.

Gastrointestinal basal cells

The gastrointestinal tract consists of the esophagus, stomach, small intestine, and large intestine, and each segment is lined with distinct yet similar epithelium that necessarily contains basal cells. While the general function of these basal cells is similar throughout the entire tract, their specific mechanisms, functions, and products can vary depending on where in the tract the cells are located.
For example, while basal cells in both the esophagus and the stomach function as multipotent progenitor cells, they are fundamentally different, because the esophageal basal cells exist as part of a stratified squamous epithelium, whereas the gastric basal cells exist as part of a simple columnar epithelium. Functionally, this means that since a simple epithelium is only one cell thick, differentiated cells must diffuse along the plane of the basement membrane rather than vertically through the rest of the epithelium. Furthermore, the actual products of these cells vary substantially: esophageal basal cells mainly produce squamous epithelial cells, which function as a passive physical barrier between the lumen of the esophagus and the underlying tissues, whereas gastric basal cells differentiate into a variety of secretory and absorptive cells that provide the main functions of the stomach, including absorptive cells, chief cells, and parietal cells.

In the stomach, basal cells are generally located in the isthmus region, near the top of gastric glands, a location that allows them to differentiate within the gland and then diffuse bi-directionally as they differentiate, going either to the gastric pit above or to the base of the gastric gland to replenish damaged cells. Due to the harsh environment created by the acidic interior of the stomach, the basal cells propagate continuously, relying on a variety of pathways and signaling molecules to communicate what type of cells have been damaged and need to be replaced. These regulators of proliferation and differentiation include the protein Sox9, the Wnt and Notch signaling pathways, BMPs 2, 4, and 7 (which can all function as tumor suppressors), and EGF. These processes exist in a delicate state, and any errors in or disruptions of these pathways can cause a variety of ailments. For example, a Helicobacter pylori infection can cause an overexpression of EGF, which leads to excessive differentiation of basal cells into gastrin cells; this in turn can lead to atrophic gastritis, a well-studied precursor to gastric cancer. Furthermore, if the genes coding for Jag1 or Jag2 are mutated or deleted, this can disrupt the critical Notch signaling pathway, which can in turn cause uncontrolled and unregulated growth and differentiation, leading to tumorigenesis.

Similar to gastric basal cells, intestinal basal cells are continuously propagating. In fact, due to the vital role that the small intestine plays in nutrient absorption, basal cells in the small intestine exhibit the highest turnover rate of any cells in the body, creating an entirely new epithelium approximately every 5-7 days. Within the intestines, basal cells are located at the base of intestinal invaginations known as crypts, where they are nourished and protected by Paneth cells and the surrounding microenvironment. These basal cells then function as multipotent progenitors, capable of differentiating into six distinct cell types, regulated by mechanisms very similar to those seen in other gastrointestinal basal cells. As the cells differentiate, they migrate out from the crypt towards the lumen, until eventually dying and being released into the intestinal lumen, only to soon be replaced by a new cell.

References

Cell biology
Basal cell
[ "Biology" ]
2,077
[ "Cell biology" ]
2,937,664
https://en.wikipedia.org/wiki/Nanophase%20material
Nanophase materials are materials that have grain sizes under 100 nanometres. They have different mechanical and optical properties compared to large-grained materials of the same chemical composition. Transparency and different transparent colours can be achieved with nanophase materials by varying the grain size.

Nanophase materials

Nanophase metals usually are many times harder, but more brittle, than regular metals.
Nanophase copper is a superhard material.
Nanophase aluminum.
Nanophase iron is iron with a grain size in the nanometer range. Nanocrystalline iron has a tensile strength of around 6 GPa, twice that of the best maraging steels.
Nanophase ceramics usually are more ductile and less brittle than regular ceramics.

Footnotes

External links

Creating Nanophase Materials. Scientific American (subscription required)
Nanophase Materials, Michigan Tech
Research on Nanophase Materials, Louisiana State University

Materials
Materials science
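The grain-size dependence of strength noted above is commonly modeled by the Hall–Petch relation, σy = σ0 + k/√d, which holds down to grain sizes of very roughly 10-20 nm before breaking down. Below is a minimal sketch with illustrative constants; the values of σ0 and k are placeholders of roughly copper-like magnitude, not measured data for any of the metals listed above.

```python
import math

def hall_petch(d_m: float, sigma0: float = 25.0, k: float = 0.11) -> float:
    """Yield strength in MPa via Hall-Petch: sigma_y = sigma_0 + k / sqrt(d).

    d_m is the grain size in metres; k is in MPa*m^0.5 (illustrative only).
    """
    return sigma0 + k / math.sqrt(d_m)

for d_nm in (10_000, 100, 20):  # coarse-grained vs. nanophase grain sizes
    print(f"{d_nm:>6} nm grains -> ~{hall_petch(d_nm * 1e-9):.0f} MPa")
```

With these placeholder constants, shrinking grains from 10 µm to 20 nm raises the predicted yield strength by more than an order of magnitude, consistent with the "many times harder" behavior described above.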
Nanophase material
[ "Physics", "Materials_science", "Engineering" ]
185
[ "Applied and interdisciplinary physics", "Materials stubs", "Materials science", "Materials", "nan", "Matter" ]
2,937,772
https://en.wikipedia.org/wiki/Thermophotovoltaic%20energy%20conversion
Thermophotovoltaic (TPV) energy conversion is a direct conversion process from heat to electricity via photons. A basic thermophotovoltaic system consists of a hot object emitting thermal radiation and a photovoltaic cell similar to a solar cell but tuned to the spectrum being emitted from the hot object. As TPV systems generally work with much cooler emitters than the solar spectrum that solar cells are tuned for, their efficiencies tend to be low. Offsetting this through the use of multi-junction cells based on non-silicon materials is common, but generally very expensive. This currently limits TPV to niche roles like spacecraft power and waste heat collection from larger systems like steam turbines.

General concept

PV

Typical photovoltaics work by creating a p–n junction near the front surface of a thin semiconductor material. When photons above the bandgap energy of the material hit atoms within the bulk lower layer, below the junction, an electron is photoexcited and becomes free of its atom. The junction creates an electric field that accelerates the electron forward within the cell until it passes the junction and is free to move to the thin electrodes patterned on the surface. Connecting a wire from the front to the rear allows the electrons to flow back into the bulk and complete the circuit.

Photons with less energy than the bandgap do not eject electrons. Photons with energy above the bandgap will eject higher-energy electrons, which tend to thermalize within the material and lose their extra energy as heat. If the cell's bandgap is raised, the electrons that are emitted will have higher energy when they reach the junction and thus result in a higher voltage, but this will reduce the number of electrons emitted, as more photons will be below the bandgap energy, and thus generate a lower current. As electrical power is the product of voltage and current, there is a sweet spot where the total output is maximized.

Terrestrial solar radiation is typically characterized by a standard known as Air Mass 1.5, or AM1.5. This is very close to 1,000 W of energy per square meter at an apparent temperature of 5780 K. At this temperature, about half of all the energy reaching the surface is in the infrared. Based on this temperature, energy production is maximized when the bandgap is about 1.4 eV, in the near infrared. This happens to be very close to the bandgap of doped silicon, at 1.1 eV, which makes solar PV inexpensive to produce. This means that all of the energy in the infrared and below, about half of AM1.5, goes to waste. There has been continuing research into cells that are made of several different layers, each with a different bandgap and thus tuned to a different part of the solar spectrum. Cells with overall efficiencies in the range of 40% are commercially available, although they are extremely expensive and have not seen widespread use outside of specific roles like powering spacecraft, where cost is not a significant consideration.

TPV

The same process of photoemission can be used to produce electricity from any spectrum, although the number of semiconductor materials that have just the right bandgap for an arbitrary hot object is limited. Instead, semiconductors with tuneable bandgaps are needed. It is also difficult to produce solar-like thermal output: an oxyacetylene torch is about 3400 K (~3126 °C), and more common commercial heat sources like coal and natural gas burn at much lower temperatures, around 900 °C to about 1300 °C. This further limits the suitable materials.
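To see why emitter temperature constrains the choice of cell material, one can compare where blackbody emission peaks for the temperatures mentioned above. Below is a minimal sketch in Python using Wien's displacement law (peak wavelength λ = b/T) and the photon-energy relation E = hc/λ; note that the spectral peak is only a rough guide, since the optimal bandgap also depends on the full shape of the spectrum.

```python
B = 2.898e-3   # Wien's displacement constant, m*K
HC = 1.23984   # hc expressed in eV*um: photon energy in eV for a 1 um photon

# The 1500 K entry (~1230 C) is an illustrative combustion-range temperature.
for label, t_k in (("sunlight (apparent)", 5780),
                   ("oxyacetylene torch", 3400),
                   ("gas/coal combustion", 1500)):
    peak_um = B / t_k * 1e6      # peak emission wavelength in micrometres
    peak_ev = HC / peak_um       # photon energy at that peak
    print(f"{label:20s} peak {peak_um:.2f} um ({peak_ev:.2f} eV)")
```

The peak photon energy falls from about 2.5 eV for sunlight to well under 1 eV for combustion-temperature sources, which is why TPV favors narrow-bandgap semiconductors.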
In the case of TPV, most research has focused on gallium antimonide (GaSb), although germanium (Ge) is also suitable. Another problem with lower-temperature sources is that their energy is more spread out, according to Wien's displacement law. While one can make a practical solar cell with a single bandgap tuned to the peak of the spectrum and simply ignore the losses in the IR region, doing the same with a lower-temperature source loses much more of the potential energy and results in very low overall efficiency. This means TPV systems almost always use multi-junction cells in order to reach reasonable double-digit efficiencies. Current research in the area aims at increasing system efficiencies while keeping the system cost low, but even then their roles tend to be niches similar to those of multi-junction solar cells. Actual designs TPV systems generally consist of a heat source, an emitter, and a waste heat rejection system. The TPV cells are placed between the emitter, often a block of metal or similar, and the cooling system, often a passive radiator. PV systems in general operate at lower efficiency as the temperature increases, and in TPV systems, keeping the photovoltaic cool is a significant challenge. This contrasts with a somewhat related concept, "thermoradiative" or "negative emission" cells, in which the photodiode is on the hot side of the heat engine. Systems have also been proposed that use a thermoradiative device as an emitter in a TPV system, theoretically allowing power to be extracted from both a hot photodiode and a cold photodiode. Applications RTGs Conventional radioisotope thermoelectric generators (RTGs) used to power spacecraft use a radioactive material whose radiation heats a block of material and is then converted to electricity using a thermocouple. Thermocouples are very inefficient, and their replacement with TPV could offer significant improvements in efficiency, allowing a smaller and lighter RTG for any given mission. Experimental systems developed by Emcore (a multi-junction solar cell provider), Creare, Oak Ridge and NASA's Glenn Research Center demonstrated 15 to 20% efficiency. A similar concept was developed by the University of Houston which reached 30% efficiency, a 3- to 4-fold improvement over existing systems. Thermoelectric storage Another area of active research is using TPV as the basis of a thermal storage system. In this concept, electricity generated at off-peak times is used to heat a large block of material, typically carbon or a phase-change material. The material is surrounded by TPV cells, which are in turn backed by a reflector and insulation. During storage, the TPV cells are turned off and the photons pass through them and reflect back into the high-temperature source. When power is needed, the TPV is connected to a load. Waste heat collection TPV cells have been proposed as auxiliary power conversion devices for capture of otherwise lost heat in other power generation systems, such as steam turbine systems or solar cells. History Henry Kolm constructed an elementary TPV system at MIT in 1956. However, Pierre Aigrain is widely cited as the inventor based on lectures he gave at MIT between 1960 and 1961 which, unlike Kolm's system, led to research and development. In the 1980s, efficiency reached close to 30%. In 1997, a prototype TPV-powered hybrid car, the "Viking 29", was designed and built by the Vehicle Research Institute (VRI) at Western Washington University.
In 2022, MIT/NREL announced a device with 41% efficiency. The absorber employed multiple III-V semiconductor layers tuned to absorb ultraviolet, visible, and infrared photons. A gold reflector recycled unabsorbed photons. The device operated at 2400 °C, at which temperature the tungsten emitter reaches maximum brightness. In May 2024, researchers announced a device that achieved 44% efficiency when using silicon carbide (SiC) as the heat storage material (emitter). At 1,435 °C (2,615 °F) the device radiates thermal photons at various energy levels. The semiconductor captures 20 to 30% of the photons. Additional layers include air and a gold reflector layer. Details Efficiency The upper limit for efficiency in TPVs (and all systems that convert heat energy to work) is the Carnot efficiency, that of an ideal heat engine. This efficiency is given by: η = 1 − Tcell/Temit, where Tcell is the temperature of the PV converter and Temit is the temperature of the emitter. Practical systems can achieve Tcell = ~300 K and Temit = ~1800 K, giving a maximum possible efficiency of ~83%. This assumes the PV converts the radiation into electrical energy without losses, such as thermalization or Joule heating, though in reality the photovoltaic inefficiency is quite significant. In real devices, as of 2021, the maximum demonstrated efficiency in the laboratory was 35%, with an emitter temperature of 1,773 K. This is the efficiency in terms of heat input being converted to electrical power. In complete TPV systems, a necessarily lower total system efficiency, including the source of heat, may be cited; for example, fuel-based TPV systems may report efficiencies in terms of fuel energy to electrical energy, in which case 5% is considered a "world record" level of efficiency. Real-world efficiencies are reduced by such effects as heat transfer losses, electrical conversion efficiency (TPV voltage outputs are often quite low), and losses due to active cooling of the PV cell. Emitters Deviations from perfect absorption and perfect black body behavior lead to light losses. For selective emitters, any light emitted at wavelengths not matched to the bandgap energy of the photovoltaic may not be efficiently converted, reducing efficiency. In particular, emissions associated with phonon resonances are difficult to avoid for wavelengths in the deep infrared, which cannot be practically converted. An ideal emitter would emit no light at wavelengths other than at the bandgap energy, and much TPV research is devoted to developing emitters that better approximate this narrow emission spectrum. Filters For black body emitters or imperfect selective emitters, filters reflect non-ideal wavelengths back to the emitter. These filters are imperfect. Any light that is absorbed or scattered and not redirected to the emitter or the converter is lost, generally as heat. Conversely, practical filters often reflect a small percentage of light in desired wavelength ranges. Both are inefficiencies. The absorption of suboptimal wavelengths by the photovoltaic device also contributes to inefficiency and has the added effect of heating it, which also decreases efficiency. Converters Even for systems where only light of optimal wavelengths is passed to the photovoltaic converter, inefficiencies associated with non-radiative recombination and Ohmic losses exist.
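The converter-side losses just mentioned can be illustrated with the ideal-diode relation Voc = (kT/q)·ln(Isc/I0 + 1): a larger dark (saturation) current, or a hotter cell, costs open-circuit voltage. A minimal sketch with illustrative, assumed parameter values (not measurements from any cited device):

```python
import numpy as np

# Open-circuit voltage of an ideal diode. The dark current I0 grows roughly
# as exp(-Eg/kT), so heating the cell raises I0 and lowers Voc, which is why
# TPV designs work to keep the converter cool. All values are assumptions.
k_over_q = 8.617e-5   # V/K (Boltzmann constant over elementary charge)
Eg = 0.72             # eV, a GaSb-like bandgap (assumed)
I_sc = 1.0            # A, assumed photocurrent

def voc(T_cell, I0_at_300K=1e-9):
    # Scale the assumed 300 K dark current to the cell temperature.
    I0 = I0_at_300K * np.exp(Eg / (k_over_q * 300) - Eg / (k_over_q * T_cell))
    return k_over_q * T_cell * np.log(I_sc / I0 + 1.0)

for T in (300, 350, 400):
    print(f"T_cell = {T} K -> Voc ≈ {voc(T):.2f} V")  # falls from ~0.54 V to ~0.47 V
```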
There are also losses from Fresnel reflections at the PV surface, optimal-wavelength light that passes through the cell unabsorbed, and the energy difference between higher-energy photons and the bandgap energy (though this tends to be less significant than with solar PVs). Non-radiative recombination losses tend to become less significant as the light intensity increases, while they increase with increasing temperature, so real systems must consider the intensity produced by a given design and operating temperature. Geometry In an ideal system, the emitter is surrounded by converters so no light is lost. Realistically, geometries must accommodate the input energy (fuel injection or input light) used to heat the emitter. Additionally, costs have prohibited surrounding the filter with converters. When the emitter reemits light, anything that does not travel to the converters is lost. Mirrors can be used to redirect some of this light back to the emitter; however, the mirrors may have their own losses. Black body radiation For black body emitters where photon recirculation is achieved via filters, Planck's law states that a black body emits light with a spectrum given by: I′(λ) = (2πc/λ⁴)·1/(exp(hc/(λkTemit)) − 1), where I′ is the light flux at a specific wavelength, λ, given in units of m−3⋅s−1; h is the Planck constant, k is the Boltzmann constant, c is the speed of light, and Temit is the emitter temperature. Thus, the light flux with wavelengths in a specific range can be found by integrating over the range. The peak wavelength is determined by the temperature, Temit, based on Wien's displacement law: λpeak = b/Temit, where b is Wien's displacement constant. For most materials, the maximum temperature an emitter can stably operate at is about 1800 °C. This corresponds to an intensity that peaks at a wavelength of roughly 1.7 μm, or a photon energy of ~0.75 eV. For more reasonable operating temperatures of 1200 °C, this drops to ~0.5 eV. These energies dictate the range of bandgaps that are needed for practical TPV converters (though the peak spectral power is slightly higher). Traditional PV materials such as Si (1.1 eV) and GaAs (1.4 eV) are substantially less practical for TPV systems, as the intensity of the black body spectrum is low at these energies for emitters at realistic temperatures. Active components and materials selection Emitters Efficiency, temperature resistance and cost are the three major factors for choosing a TPV emitter. Efficiency is determined by energy absorbed relative to incoming radiation. High temperature operation is crucial because efficiency increases with operating temperature. As emitter temperature increases, black-body radiation shifts to shorter wavelengths, allowing for more efficient absorption by photovoltaic cells. Polycrystalline silicon carbide Polycrystalline silicon carbide (SiC) is the most commonly used emitter for burner TPVs. SiC is thermally stable to ~1700 °C. However, SiC radiates much of its energy in the long wavelength regime, far lower in energy than even the narrowest bandgap photovoltaic. Such radiation is not converted into electrical energy. However, non-absorbing selective filters in front of the PV, or mirrors deposited on the back side of the PV, can be used to reflect the long wavelengths back to the emitter, thereby recycling the unconverted energy. In addition, polycrystalline SiC is inexpensive. Tungsten Tungsten is the most common refractory metal that can be used as a selective emitter. It has higher emissivity in the visible and near-IR range, 0.45 to 0.47, and a low emissivity of 0.1 to 0.2 in the IR region.
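As a numerical check on the black-body figures above (~0.75 eV at 1800 °C and ~0.5 eV at 1200 °C), the peak of the photon-flux spectrum I′(λ) can be located directly; this sketch scans wavelengths on an illustrative grid:

```python
import numpy as np

# Locate the peak of the black-body photon flux I'(lambda) and report the
# corresponding photon energy for the two emitter temperatures in the text.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
eV = 1.602e-19

def peak_photon_energy(T_emit):
    lam = np.linspace(0.3e-6, 10e-6, 100_000)                     # 0.3-10 um scan
    flux = (2 * np.pi * c / lam**4) / np.expm1(h * c / (lam * k * T_emit))
    return h * c / lam[np.argmax(flux)] / eV                      # eV at the flux peak

for T_C in (1800, 1200):
    print(f"{T_C} °C emitter -> photon flux peaks near {peak_photon_energy(T_C + 273.15):.2f} eV")
# Prints roughly 0.70 eV and 0.50 eV, in line with the ~0.75 and ~0.5 eV quoted above.
```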
A tungsten emitter is usually in the shape of a cylinder with a sealed bottom, which can be considered a cavity. The emitter is attached to the back of a thermal absorber such as SiC and maintains the same temperature. Emission occurs in the visible and near-IR range, which can be readily converted by the PV to electrical energy. However, compared to other metals, tungsten oxidizes more easily. Rare-earth oxides Rare-earth oxides such as ytterbium oxide (Yb2O3) and erbium oxide (Er2O3) are the most commonly used selective emitters. These oxides emit a narrow band of wavelengths in the near-infrared region, allowing the emission spectra to be tailored to better fit the absorbance characteristics of a particular PV material. The peak of the emission spectrum occurs at 1.29 eV for Yb2O3 and 0.827 eV for Er2O3. As a result, Yb2O3 can be used as a selective emitter for silicon cells, and Er2O3 for GaSb or InGaAs. However, the slight mismatch between the emission peaks and the band gap of the absorber costs significant efficiency. Selective emission only becomes significant at 1100 °C and increases with temperature. Below 1700 °C, selective emission of rare-earth oxides is fairly low, further decreasing efficiency. Currently, 13% efficiency has been achieved with Yb2O3 and silicon PV cells. In general, selective emitters have had limited success. More often, filters are used with black body emitters to pass wavelengths matched to the bandgap of the PV and reflect mismatched wavelengths back to the emitter. Photonic crystals Photonic crystals allow precise control of electromagnetic wave properties. These materials give rise to the photonic bandgap (PBG). In the spectral range of the PBG, electromagnetic waves cannot propagate. Engineering these materials allows some ability to tailor their emission and absorption properties, allowing for more effective emitter design. Selective emitters with peaks at higher energy than the black body peak (for practical TPV temperatures) allow for wider-bandgap converters. These converters are traditionally cheaper to manufacture and less temperature sensitive. Researchers at Sandia Labs predicted a high-efficiency TPV system (converting up to 34% of emitted light into electricity) based on a demonstrated tungsten photonic-crystal emitter. However, manufacturing of these devices is difficult and not commercially feasible. Photovoltaic cells Silicon Early TPV work focused on the use of silicon. Silicon's commercial availability, low cost, scalability and ease of manufacture make this material an appealing candidate. However, the relatively wide bandgap of Si (1.1 eV) is not ideal for use with a black body emitter at lower operating temperatures. Calculations indicate that Si PVs are only feasible at temperatures much higher than 2000 K. No emitter has been demonstrated that can operate at these temperatures. These engineering difficulties led to the pursuit of lower-bandgap semiconductor PVs. Using selective radiators with Si PVs is still a possibility. Selective radiators would eliminate high- and low-energy photons, reducing the heat generated. Ideally, selective radiators would emit no radiation beyond the band edge of the PV converter, increasing conversion efficiency significantly. No efficient TPVs have been realized using Si PVs. Germanium Early investigations into low-bandgap semiconductors focused on germanium (Ge). Ge has a bandgap of 0.66 eV, allowing for conversion of a much higher fraction of incoming radiation.
However, poor performance was observed due to the high effective electron mass of Ge. Compared to III-V semiconductors, Ge's high electron effective mass leads to a high density of states in the conduction band and therefore a high intrinsic carrier concentration. As a result, Ge diodes have a high saturation ("dark") current and therefore a low open-circuit voltage. In addition, surface passivation of germanium has proven difficult. Gallium antimonide The gallium antimonide (GaSb) PV cell, invented in 1989, is the basis of most PV cells in modern TPV systems. GaSb is a III-V semiconductor with the zinc blende crystal structure. The GaSb cell is a key development owing to its narrow bandgap of 0.72 eV. This allows GaSb to respond to light at longer wavelengths than a silicon solar cell, enabling higher power densities in conjunction with man-made emission sources. A solar cell with 35% efficiency was demonstrated using a bilayer PV with GaAs and GaSb, setting the solar cell efficiency record. Manufacturing a GaSb PV cell is quite simple. Czochralski tellurium-doped n-type GaSb wafers are commercially available. Vapor-based zinc diffusion is carried out at elevated temperatures (~450 °C) to allow for p-type doping. Front and back electrical contacts are patterned using traditional photolithography techniques, and an anti-reflective coating is deposited. Efficiencies are estimated at ~20% using a 1000 °C black body spectrum. The radiative limit for the efficiency of the GaSb cell in this setup is 52%. Indium gallium arsenide antimonide Indium gallium arsenide antimonide (InGaAsSb, i.e. InxGa1−xAsySb1−y) is a compound III-V semiconductor. The addition of GaAs allows for a narrower bandgap (0.5 to 0.6 eV), and therefore better absorption of long wavelengths. Specifically, the bandgap was engineered to 0.55 eV. With this bandgap, the compound achieved a photon-weighted internal quantum efficiency of 79% with a fill factor of 65% for a black body at 1100 °C. This was for a device grown on a GaSb substrate by organometallic vapour phase epitaxy (OMVPE). Devices have also been grown by molecular beam epitaxy (MBE) and liquid phase epitaxy (LPE). The internal quantum efficiency (IQE) of OMVPE-grown devices approaches 90%, while devices grown by the other two techniques exceed 95%. The largest problem with InGaAsSb cells is phase separation. Compositional inconsistencies throughout the device degrade its performance. When phase separation can be avoided, the IQE and fill factor of InGaAsSb approach theoretical limits in wavelength ranges near the bandgap energy. However, the Voc/Eg ratio is far from ideal. Current methods to manufacture InGaAsSb PVs are expensive and not commercially viable. Indium gallium arsenide Indium gallium arsenide (InGaAs) is a compound III-V semiconductor. It can be applied in two ways for use in TPVs. When lattice-matched to an InP substrate, InGaAs has a bandgap of 0.74 eV, no better than GaSb. Devices of this configuration have been produced with a fill factor of 69% and an efficiency of 15%. However, to absorb longer-wavelength photons, the bandgap may be engineered by changing the ratio of In to Ga. The range of bandgaps for this system is from about 0.4 to 1.4 eV. However, these different structures cause strain with the InP substrate. This can be controlled with graded layers of InGaAs of different compositions. This was done to develop a device with a quantum efficiency of 68% and a fill factor of 68%, grown by MBE.
This device had a bandgap of 0.55 eV, achieved in the compound In0.68Ga0.32As. It is a well-developed material. InGaAs can be made to lattice-match perfectly with Ge, resulting in low defect densities. Ge as a substrate is a significant advantage over more expensive or harder-to-produce substrates. Indium phosphide arsenide antimonide The InPAsSb quaternary alloy has been grown by both OMVPE and LPE. When lattice-matched to InAs, it has a bandgap in the range 0.3–0.55 eV. The benefits of such a low band gap have not been studied in depth. Therefore, cells incorporating InPAsSb have not been optimized and do not yet have competitive performance. The longest spectral response from an InPAsSb cell studied was 4.3 μm, with a maximum response at 3 μm. For this and other low-bandgap materials, high IQE for long wavelengths is hard to achieve due to an increase in Auger recombination. Lead tin selenide/Lead strontium selenide quantum wells PbSnSe/PbSrSe quantum well materials, which can be grown by MBE on silicon substrates, have been proposed for low-cost TPV device fabrication. These IV-VI semiconductor materials can have bandgaps between 0.3 and 0.6 eV. Their symmetric band structure and lack of valence band degeneracy result in low Auger recombination rates, typically more than an order of magnitude smaller than those of III-V semiconductor materials of comparable bandgap. Applications TPVs promise efficient and economically viable power systems for both military and commercial applications. Compared to traditional nonrenewable energy sources, burner TPVs have low NOx emissions and are virtually silent. Solar TPVs are a source of emission-free renewable energy. TPVs can be more efficient than PV systems owing to the recycling of unabsorbed photons. However, losses at each energy conversion step lower efficiency. When TPVs are used with a burner source, they provide on-demand energy. As a result, energy storage may not be needed. In addition, owing to the PV's proximity to the radiative source, TPVs can generate current densities 300 times that of conventional PVs. Energy storage Man-portable power Battlefield dynamics require portable power. Conventional diesel generators are too heavy for use in the field. Scalability allows TPVs to be smaller and lighter than conventional generators. Also, TPVs have few emissions and are silent. Multifuel operation is another potential benefit. Investigations in the 1970s failed due to PV limitations. However, the GaSb photocell led to a renewed effort in the 1990s with improved results. In early 2001, JX Crystals delivered a TPV-based battery charger to the US Army that produced 230 W fueled by propane. This prototype utilized an SiC emitter operating at 1250 °C and GaSb photocells, and was approximately 0.5 m tall. The power source had an efficiency of 2.5%, calculated as the ratio of the power generated to the thermal energy of the fuel burned. This is too low for practical battlefield use. No portable TPV power sources have reached troop testing. Grid storage Converting spare electricity into heat for high-volume, long-term storage is under research at various companies, which claim that costs could be much lower than for lithium-ion batteries. Graphite may be used as the storage medium, with molten tin as the heat-transfer fluid, at temperatures around 2,000 °C. See LaPotin, A., Schulte, K.L., Steiner, M.A. et al. Thermophotovoltaic efficiency of 40%. Nature 604, 287–291 (2022).
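A back-of-the-envelope sketch suggests why graphite suits this role. The specific heat, block mass, and temperature swing below are assumptions for illustration, not values from the cited work:

```python
# Thermal energy banked in a graphite block between charge and discharge
# temperatures. Graphite's specific heat near 2,000 °C is roughly 2 kJ/(kg*K)
# (assumed); the mass and temperature swing are likewise illustrative.
cp = 2000.0     # J/(kg*K), assumed high-temperature specific heat
mass = 1000.0   # kg (one tonne of graphite, assumed)
dT = 500.0      # K swing, e.g. discharging from ~2400 °C to ~1900 °C (assumed)

heat_kWh = mass * cp * dT / 3.6e6
print(f"~{heat_kWh:.0f} kWh of heat per tonne")           # ~278 kWh thermal
print(f"~{0.4 * heat_kWh:.0f} kWh electric at 40% TPV")   # ~111 kWh electric
```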
Spacecraft Space power generation systems must provide consistent and reliable power without large amounts of fuel. As a result, solar and radioisotope fuels (extremely high power density and long lifetime) are ideal. TPVs have been proposed for each. In the case of solar energy, orbital spacecraft may be better locations for the large and potentially cumbersome concentrators required for practical TPVs. However, owing to weight considerations and the inefficiencies associated with the more complicated design of TPVs, conventional PVs continue to dominate. The output of isotopes is thermal energy. In the past, thermoelectricity (direct thermal-to-electrical conversion with no moving parts) has been used because TPV efficiency is less than the ~10% of thermoelectric converters. Stirling engines have been deemed too unreliable, despite conversion efficiencies >20%. However, with the recent advances in small-bandgap PVs, TPVs are becoming more promising. A TPV radioisotope converter with 20% efficiency was demonstrated that uses a tungsten emitter heated to 1350 K, with tandem filters and a 0.6 eV bandgap InGaAs PV converter (cooled to room temperature). About 30% of the lost energy was due to the optical cavity and filters. The remainder was due to the efficiency of the PV converter. Low-temperature operation of the converter is critical to the efficiency of TPV. Heating PV converters increases their dark current, thereby reducing efficiency. The converter is heated by the radiation from the emitter. In terrestrial systems it is reasonable to dissipate this heat without using additional energy by means of a heat sink. However, space is an isolated system, where heat sinks are impractical. Therefore, it is critical to develop innovative solutions to efficiently remove that heat. Both represent substantial challenges. Commercial applications Off-grid generators TPVs can provide continuous power to off-grid homes. Traditional PVs do not provide power during winter months or at night, while TPVs can utilize alternative fuels to augment solar-only production. The greatest advantage for TPV generators is cogeneration of heat and power. In cold climates, a TPV generator can function as both a heater/stove and a power generator. JX Crystals developed a prototype TPV heating stove/generator that burns natural gas and uses a SiC source emitter operating at 1250 °C and GaSb photocells to output 25,000 BTU/hr (7.3 kW of heat) while simultaneously generating 100 W of electricity (1.4% efficiency). However, costs render it impractical. Combining a heater and a generator is called combined heat and power (CHP). Many TPV CHP scenarios have been theorized, but a study found that a generator using boiling coolant was the most cost-efficient. The proposed CHP would utilize a SiC IR emitter operating at 1425 °C and GaSb photocells cooled by boiling coolant. The TPV CHP would output 85,000 BTU/hr (25 kW of heat) and generate 1.5 kW of electricity. The study estimated the efficiency at 12.3%, although 1.5 kW of electrical output from 25 kW of heat corresponds to about 6%; the cost was estimated at 0.08 €/kWh assuming a 20-year lifetime. The estimated costs of other, non-TPV CHPs are 0.12 €/kWh for gas-engine CHP and 0.16 €/kWh for fuel-cell CHP. This furnace was not commercialized because the market was not thought to be large enough. Recreational vehicles TPVs have been proposed for use in recreational vehicles. Their ability to use multiple fuel sources makes them interesting as more sustainable fuels emerge. TPVs' silent operation allows them to replace noisy conventional generators (e.g.
during "quiet hours" in national park campgrounds). However, the emitter temperatures required for practical efficiencies make TPVs on this scale unlikely. References External links 6th International Conference on Thermophotovoltaic Generation of Electricity NASA Radioisotope Power Conversion Technology NRA Overview New thermophotovoltaic materials could replace alternators in cars and save fuel Photovoltaics Thermodynamics
Thermophotovoltaic energy conversion
[ "Physics", "Chemistry", "Mathematics" ]
6,232
[ "Thermodynamics", "Dynamical systems" ]
2,937,841
https://en.wikipedia.org/wiki/Horizontal%20blanking%20interval
Horizontal blanking interval refers to a part of the process of displaying images on a computer monitor or television screen via raster scanning. CRT screens display images by moving beams of electrons very quickly across the screen. Once the beam of the monitor has reached the edge of the screen, it is switched off, and the deflection circuit voltages (or currents) are returned to the values they had for the other edge of the screen; this return sweep would otherwise retrace across the screen in the opposite direction, which is why the beam is kept off during this time. This part of the line display process is the Horizontal Blank. In detail, the Horizontal blanking interval consists of: front porch – blank while still moving right, past the end of the scanline; sync pulse – blank while rapidly moving left (in terms of amplitude, "blacker than black"); back porch – blank while moving right again, before the start of the next scanline. Colorburst occurs during the back porch, and unblanking happens at the end of the back porch. In the NTSC television standard, horizontal blanking occupies about 10.9 μs of every 63.6 μs scan line (17.2%). In PAL, it occupies about 12 μs of every 64 μs scan line (18.8%). Some modern monitors and video cards support reduced blanking, standardized with Coordinated Video Timings. In the PAL television standard, the blanking level corresponds to the black level, whilst other standards, most notably some variants of NTSC, may set the black level slightly above the blanking level on a pedestal or "setup level". HBlank effects Some graphics systems can count horizontal blanks and change how the display is generated during this blank time in the signal; this is called a raster effect, of which an example is raster bars. In video games, the horizontal blanking interval was used to create some notable effects. Some methods of parallax scrolling use a raster effect to simulate depth in consoles that do not natively support multiple background layers or do not support enough background layers to achieve the desired effect. One example of this is in the game Castlevania: Rondo of Blood, which was written for the PC Engine CD-ROM, a platform that does not support multiple background layers. The Super Nintendo Entertainment System's Mode 7 uses the horizontal blanking interval to vary the scaling and rotation, per scan line, of one background layer to make the background appear to be a 3D plane. See also Nominal analogue blanking Vertical blanking interval References Video signal Television technology
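As a numeric illustration of those proportions, the line periods follow from the standard line counts and frame rates, and the blanking shares are the percentages quoted above:

```python
# Horizontal blanking as a share of each scan line. Line periods follow from
# the standards: NTSC scans 525 lines at ~29.97 Hz, PAL scans 625 lines at 25 Hz.
for name, lines, frame_hz, blank_frac in (("NTSC", 525, 30000 / 1001, 0.172),
                                          ("PAL", 625, 25.0, 0.188)):
    line_period_us = 1e6 / (lines * frame_hz)
    print(f"{name}: line = {line_period_us:.2f} us, "
          f"hblank = {line_period_us * blank_frac:.2f} us, "
          f"active video = {line_period_us * (1 - blank_frac):.2f} us")
```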
Horizontal blanking interval
[ "Technology" ]
518
[ "Information and communications technology", "Television technology" ]
2,938,012
https://en.wikipedia.org/wiki/Constraint-based%20Routing%20Label%20Distribution%20Protocol
Constraint-based Routing Label Distribution Protocol (CR-LDP) is a control protocol used in some computer networks. In February 2003, the IETF MPLS working group deprecated CR-LDP and decided to focus purely on RSVP-TE. It is an extension of the Label Distribution Protocol (LDP), one of the protocols in the Multiprotocol Label Switching architecture. CR-LDP extends the capabilities of LDP, for example by allowing paths to be set up beyond those available from the routing protocol. For instance, a label-switched path can be set up based on explicit route constraints, quality of service constraints, and other constraints. Constraint-based routing (CR) is a mechanism used to meet traffic engineering requirements. These requirements are met by extending LDP to support constraint-based routed label-switched paths (CR-LSPs). Other uses for CR-LSPs include MPLS-based virtual private networks. CR-LDP has almost the same packet structure as basic LDP, but it contains some extra TLVs (type–length–value objects) that set up the constraint-based LSP. References MPLS networking Network protocols
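The TLV framing itself is straightforward: per the LDP encoding, each object carries U and F bits, a 14-bit type, and a 16-bit length ahead of its value. The sketch below packs such objects in Python; the type codes are illustrative placeholders rather than verified CR-LDP assignments (see RFC 3212 for the actual code points):

```python
import struct

def encode_tlv(type_code: int, value: bytes, u: int = 0, f: int = 0) -> bytes:
    """Pack one LDP-style TLV: U and F bits, 14-bit type, 16-bit length, value."""
    first16 = (u << 15) | (f << 14) | (type_code & 0x3FFF)
    return struct.pack("!HH", first16, len(value)) + value

# Illustrative explicit-route TLV built from two IPv4-hop sub-TLVs.
# 0x0800/0x0801 are placeholders here, not confirmed CR-LDP assignments.
hop_a = encode_tlv(0x0801, bytes([10, 0, 0, 1, 32]))  # address 10.0.0.1/32
hop_b = encode_tlv(0x0801, bytes([10, 0, 0, 2, 32]))  # address 10.0.0.2/32
explicit_route = encode_tlv(0x0800, hop_a + hop_b)
print(explicit_route.hex())
```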
Constraint-based Routing Label Distribution Protocol
[ "Technology" ]
239
[ "Computing stubs", "Computer network stubs" ]
2,938,215
https://en.wikipedia.org/wiki/Course%20deviation%20indicator
A course deviation indicator (CDI) is an avionics instrument used in aircraft navigation to determine an aircraft's lateral position in relation to a course to or from a radio navigation beacon. If the location of the aircraft is to the left of this course, the needle deflects to the right, and vice versa. Use The indicator shows the direction to steer to correct for course deviations. Correction is made until the vertical needle centres, meaning the aircraft has intercepted the given course line. The pilot then steers to stay on that line. Only the receiver's current position determines the reading: the aircraft's heading, orientation, and track are not indicated. The deflection of the needle is proportional to the course deviation, but sensitivity and deflection vary depending on the system being used: When used with a VOR or VORTAC, the instrument can be referred to as an "omni bearing indicator" ("OBI"). The course line is selected by turning an "omni bearing selector" or "OBS" knob usually located in the lower left of the indicator. It then shows the number of degrees deviation between the aircraft's current position and the "radial" line emanating from the signal source at the given bearing. This can be used to find and follow the desired radial. Deflection is 10° deviation at full scale (each side), with each dot on the CDI representing 2°. (See Using a VOR for usage during flight.) When used with a GPS, or other RNAV equipment, it shows actual distance left or right of the programmed course line. Sensitivity is usually programmable or automatically switched, but a full-scale deviation of 5 nautical miles is typical for en route operations. Approach and terminal operations use higher sensitivity, frequently up to 0.3 nautical miles at full scale. In this mode, the OBS knob may or may not have an effect, depending on configuration. When used for instrument approaches using an LDA or ILS, the OBS knob has no function, because the course line is usually the runway heading and is determined by the ground transmitter. A CDI might incorporate a horizontal needle to provide vertical guidance when used with a precision ILS approach, where the glideslope is broadcast by another transmitter located on the ground. A CDI is not used with an automatic direction finder (ADF), which receives information from a normal AM radio station or an NDB. Operation The CDI was designed to interpret a signal from a VOR, LDA, or ILS receiver. These receivers output a signal composed of two AC voltages. When used with a VOR, a converter decodes this signal, and, by determining the desired heading or radial from a resolver connected to the OBS knob, provides a 150 mV control signal to drive the CDI needle left or right. Most older units and some newer ones integrate a converter with the CDI. CDI units with an internal converter are not compatible with GPS units. More modern units are driven by a converter that is standalone or integrated with the radio. The resolver position is sent to the converter, which outputs the control signal to drive the CDI. For digital units, the desired position of the needle is transmitted via a serial ARINC 429 signal from the radio or GPS unit, allowing the CDI design to be independent of the receiver and usable by multiple receiver types. See also Acronyms and abbreviations in avionics Horizontal situation indicator Index of aviation articles References External links Flash VOR type Course Deviation Indicator Simulator Aircraft instruments Avionics Radio navigation
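A minimal sketch of the proportional-deflection arithmetic for the VOR case; the names are illustrative, and real instruments also handle TO/FROM sensing, which is omitted here:

```python
def vor_needle_dots(obs_course: float, bearing_from_station: float) -> float:
    """Needle deflection in dots for a VOR-driven CDI: 2 degrees per dot,
    clamped at +/-5 dots (10 degrees full scale each side)."""
    # Signed angular difference, wrapped into [-180, 180).
    deviation = (bearing_from_station - obs_course + 180.0) % 360.0 - 180.0
    dots = deviation / 2.0
    return max(-5.0, min(5.0, dots))

print(vor_needle_dots(90.0, 94.0))   # 4 degrees off course -> 2.0 dots
print(vor_needle_dots(359.0, 2.0))   # wrap-around handled -> 1.5 dots
```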
Course deviation indicator
[ "Technology", "Engineering" ]
732
[ "Avionics", "Aircraft instruments", "Measuring instruments" ]
2,938,319
https://en.wikipedia.org/wiki/Ancestral%20graph
In statistics and Markov modeling, an ancestral graph is a type of mixed graph that provides a graphical representation of the result of marginalizing one or more vertices in a graphical model that takes the form of a directed acyclic graph. Definition Ancestral graphs are mixed graphs with three kinds of edges: directed edges, drawn as an arrow from one vertex to another; bidirected edges, which have an arrowhead at both ends; and undirected edges, which have no arrowheads. An ancestral graph is required to satisfy additional constraints: If there is an edge from a vertex u to another vertex v, with an arrowhead at v (that is, either an edge directed from u to v or a bidirected edge), then there does not exist a path from v to u consisting of undirected edges and/or directed edges oriented consistently with the path. If a vertex v is an endpoint of an undirected edge, then it is not also the endpoint of an edge with an arrowhead at v. Applications Ancestral graphs are used to depict conditional independence relations between variables in Markov models. References Extensions and generalizations of graphs Graphical models
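These two constraints are mechanical enough to check programmatically. A minimal sketch on a small mixed graph (the representation and function names are illustrative):

```python
from collections import defaultdict

def is_ancestral(directed, bidirected, undirected):
    """directed: (u, v) pairs meaning u -> v; bidirected: (u, v) pairs for
    u <-> v; undirected: (u, v) pairs for u - v. True if both constraints hold."""
    # Constraint 2: an endpoint of an undirected edge has no arrowhead at it.
    arrowhead_at = {v for _, v in directed} | {x for e in bidirected for x in e}
    if any(u in arrowhead_at or v in arrowhead_at for u, v in undirected):
        return False
    # Paths made of undirected edges and/or consistently oriented directed edges.
    adj = defaultdict(set)
    for u, v in directed:
        adj[u].add(v)
    for u, v in undirected:
        adj[u].add(v)
        adj[v].add(u)
    def reaches(src, dst):
        seen, stack = {src}, [src]
        while stack:
            for y in adj[stack.pop()] - seen:
                if y == dst:
                    return True
                seen.add(y)
                stack.append(y)
        return False
    # Constraint 1: an arrowhead at v on an edge from u forbids a path v ~> u.
    heads = list(directed) + [(u, v) for u, v in bidirected] + [(v, u) for u, v in bidirected]
    return not any(reaches(v, u) for u, v in heads)

print(is_ancestral({("a", "b")}, {("b", "c")}, {("d", "a")}))  # True
print(is_ancestral({("a", "b"), ("b", "a")}, set(), set()))    # False: directed 2-cycle
```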
Ancestral graph
[ "Mathematics" ]
238
[ "Graph theory stubs", "Mathematical relations", "Extensions and generalizations of graphs", "Graph theory" ]
2,938,409
https://en.wikipedia.org/wiki/Analog%20robot
An analog robot is a type of robot that uses analog circuitry to pursue a simple goal such as seeking more light or responding to sound. The first true analog robots were built in the 1940s by William Grey Walter: a pair named Elmer and Elsie (ELectroMEchanical Robot, Light-Sensitive). The original circuitry used two vacuum tubes and a photocell to seek and follow a light. More recently, analog robots of this kind have been developed by Mark Tilden. Some modern analog robots are BEAM robots. Braitenberg vehicles (described by Valentino Braitenberg) are also frequently analog, consisting of sensor outputs connected to motors without any form of signal processing. See also Analog computer External links BEAM community – A specific type of analog robot. KHM – Robots in academia. Analog Robotics – A research report. 1940s in robotics
Analog robot
[ "Physics", "Technology" ]
178
[ "Physical systems", "Machines", "Robots" ]
2,938,458
https://en.wikipedia.org/wiki/Imipenem
Imipenem (trade name Primaxin among others) is a synthetic β-lactam antibiotic belonging to the carbapenem chemical class. It was developed by Merck scientists Burton Christensen, William Leanza, and Kenneth Wildonger in the mid-1970s. Carbapenems are highly resistant to the β-lactamase enzymes produced by many multiple drug-resistant Gram-negative bacteria, thus playing a key role in the treatment of infections not readily treated with other antibiotics. It is usually administered through intravenous injection. Imipenem was patented in 1975 and approved for medical use in 1985. It was developed via a lengthy trial-and-error search for a more stable version of the natural product thienamycin, which is produced by the bacterium Streptomyces cattleya. Thienamycin has antibacterial activity but is unstable in aqueous solution, making it of no practical medicinal use. Imipenem has a broad spectrum of activity against aerobic and anaerobic, Gram-positive and Gram-negative bacteria. It is particularly important for its activity against Pseudomonas aeruginosa and Enterococcus species. However, it is not active against MRSA. Medical uses Spectrum of bacterial susceptibility and resistance Acinetobacter anitratus, Acinetobacter calcoaceticus, Actinomyces odontolyticus, Aeromonas hydrophila, Bacteroides distasonis, Bacteroides uniformis, and Clostridium perfringens are generally susceptible to imipenem, while Acinetobacter baumannii, some Acinetobacter spp., Bacteroides fragilis, and Enterococcus faecalis have developed resistance to imipenem to varying degrees. Few species are resistant to imipenem; exceptions include Pseudomonas aeruginosa (as reported in Oman) and Stenotrophomonas maltophilia. Coadministration with cilastatin Imipenem is rapidly degraded by the renal enzyme dehydropeptidase 1 when administered alone, and is almost always coadministered with cilastatin to prevent this inactivation. Adverse effects Common adverse drug reactions are nausea and vomiting. People who are allergic to penicillin and other β-lactam antibiotics should use caution when taking imipenem, as cross-reactivity rates are high. At high doses, imipenem is seizurogenic. Mechanism of action Imipenem acts as an antimicrobial by inhibiting the cell wall synthesis of various Gram-positive and Gram-negative bacteria. It remains very stable in the presence of β-lactamase (both penicillinase and cephalosporinase) produced by some bacteria, and is a strong inhibitor of β-lactamases from some Gram-negative bacteria that are resistant to most β-lactam antibiotics. References Further reading External links Carbapenem antibiotics Enantiopure drugs Drugs developed by Merck & Co. GABAA receptor negative allosteric modulators
Imipenem
[ "Chemistry" ]
640
[ "Stereochemistry", "Enantiopure drugs" ]
2,938,464
https://en.wikipedia.org/wiki/Twitch%20gameplay
Twitch gameplay is a type of video gameplay scenario that tests a player's response time. Action games such as shooters, sports, multiplayer online battle arena, and fighting games often contain elements of twitch gameplay. For example, first-person shooters such as Counter-Strike and Call of Duty require quick reaction times for the players to shoot enemies, and fighting games such as Street Fighter and Mortal Kombat require quick reaction times to attack or counter an opponent. Other video game genres may also involve twitch gameplay. For example, the puzzle video game Tetris gradually speeds up as the player makes progress. Twitch gameplay keeps players actively engaged with quick feedback to their actions, as opposed to turn-based gaming that involves waiting for the outcome of a chosen course of action. Twitch can be used to expand tactical options and play by testing the skill of the player in various areas (usually reflexive responses) and generally add difficulty (relating to the intensity of "twitching" required). Fast chess, chess played with short time limits between moves, is an example of adding a twitch gameplay element to a turn-based game. Conversely, checkpoints and extra lives are common game mechanics in twitch gaming that attempt to reduce the penalty for errors in play, adding an element of turn-based gameplay. Traditionally, however, the term "twitch game" has been applied to simple arcade, console, and computer games that lack an element of strategy and are based solely upon a player's reaction time. History "Twitch" refers to the motion the player makes, a sudden movement or reaction to an event on the screen. An early use of the term was by Vern Raburn of Microsoft in 1981. Many early computer, arcade, and console games are considered to be "twitch games". They mostly involved "see and react" situations. For instance, Kaboom! had players rapidly catching bombs that a mad bomber threw from a rooftop. Most classic arcade games such as Space Invaders, Pac-Man, Defender and Robotron were also twitch-based. As games and their control inputs evolved, the games started to favor strategy over reaction, early turn-based games being the most prevalent examples. Such games required players to plan each move and anticipate the opponent's moves ahead of time. Not unlike chess, early strategy games focused on setting pieces and then moving them one turn at a time while the opponent did the same. Many of these games were based on tabletop games. The introduction of the internet, and its suitability as a playable medium, facilitated many of the features of today's mainstream games. Some strategy games however required fast reactions within gameplay. Soon after turn-based strategy games were introduced, real-time strategy games were introduced to the video gaming market, beginning with Herzog Zwei and then Dune II and eventually leading to popular titles such as Command & Conquer, Warcraft, and StarCraft. While strategy was still the primary objective, these games played out in real time. Players were required to have fast reactions to enemies' movements and attacks. Early first-person shooters were much like early games in general; fast reactions were required and little strategy or thought went into the gameplay. Even the youngest players were able to understand the concept, which may have been the reason such games became instantly popular among a large demographic. 
Many of the earliest first-person games were considered cookie-cutter copies of each other; Doom, Wolfenstein 3D, and many others looked, played, and felt the same, especially since many shooters were built off the Doom engine. Enemy AI was predictable and levels were simple mazes with secrets and side rooms. While some games included the ability to look up and down, it was rarely required of the player. Gameplay today Games have become more complex as technology has improved. Today, nearly every genre of video game contains some level of "twitch", though turn-based strategy games have remained roughly untouched by the phenomenon. First-person shooters remain the predominant genre to emphasize twitch gameplay. Some games include nostalgic throwback elements in the form of quick time events, or QTEs. These events decide the fate of the player by displaying a keystroke or keystrokes (often referred to as combos or combinations) that the player must input quickly. While the concept is not new, the term is often attributed to Yu Suzuki, director of Shenmue, an adventure game that introduced QTEs as a way to keep players interested during extended cut scenes. Other games have since adopted this method (e.g. Resident Evil 4). Twitch shooter Twitch shooters such as Doom Eternal share many of the traits of twitch gameplay and are typically characterised by fast-paced action, but are differentiated from other shooters by the lack of a cover system and a focus on strafing to avoid projectiles and attacks. Twitch shooters are often described as being more difficult than games of a similar genre due to their demand for skill and superior reaction times. References External links Instructional Technology Research Online Video game terminology
Twitch gameplay
[ "Technology" ]
1,016
[ "Computing terminology", "Video game terminology" ]
2,938,548
https://en.wikipedia.org/wiki/Nordazepam
Nordazepam (INN; marketed under brand names Nordaz, Stilny, Madar, Vegesan, and Calmday; also known as nordiazepam, desoxydemoxepam, and desmethyldiazepam) is a 1,4-benzodiazepine derivative. Like other benzodiazepine derivatives, it has amnesic, anticonvulsant, anxiolytic, muscle relaxant, and sedative properties. However, it is used primarily in the treatment of anxiety disorders. It is an active metabolite of diazepam, chlordiazepoxide, clorazepate, prazepam, pinazepam, and medazepam. Nordazepam is among the longest-lasting (longest half-life) benzodiazepines, and its occurrence as a metabolite is responsible for most of the cumulative side effects of its many pro-drugs when they are used repeatedly at moderate to high doses. The nordazepam metabolite oxazepam is also active (and is a more potent, full agonist at the benzodiazepine site), which contributes to nordazepam's own cumulative side effects, but it is formed in amounts too small to contribute to the cumulative side effects of nordazepam pro-drugs (except when they are abused chronically at extremely supra-therapeutic doses). Side effects Common side effects of nordazepam include somnolence, which is more common in elderly patients and/or people on high-dose regimens. Hypotonia, which is much less common, is also associated with high doses and/or old age. Contraindications and special caution Benzodiazepines require special precaution if used in the elderly, during pregnancy, in children, in alcohol- or drug-dependent individuals, and in individuals with comorbid psychiatric disorders. As with many other drugs, changes in liver function associated with aging or with diseases such as cirrhosis may lead to impaired clearance of nordazepam. Pharmacology Nordazepam is a partial agonist at the GABAA receptor, which makes it less potent than other benzodiazepines, particularly in its amnesic and muscle-relaxing effects. Its elimination half-life is between 36 and 200 hours, with wide variation among individuals; factors such as age and sex are known to affect it. The variation in reported half-lives is attributed to differences in the metabolism of nordazepam and its metabolites, as nordazepam is hydroxylated to active metabolites such as oxazepam before finally being glucuronidated and excreted in the urine. This can be attributed to extremely variable hepatic and renal metabolic function among individuals, depending upon a number of factors (including age, ethnicity, disease, and current or previous use/abuse of other drugs/medicines). Chemistry Nordazepam is similar to diazepam, except that the methyl group at the R1 position has been replaced with a hydrogen. Nordazepam can be synthesized from 2-amino-5-chlorobenzophenone and chloroacetyl chloride. Nordazepam itself can also be used in the synthesis of diazepam by methylating the R1 position using dimethyl sulfate. Pregnancy and nursing mothers Nordazepam, like other benzodiazepines, easily crosses the placental barrier, so the drug should not be administered during the first trimester of pregnancy. If there are serious medical reasons, nordazepam can be given in late pregnancy, but the fetus, owing to the pharmacological action of the drug, may experience side effects such as hypothermia, hypotonia, and sometimes mild respiratory depression. Since nordazepam and other benzodiazepines are excreted in breast milk, the substance should not be administered to mothers who are breastfeeding. Discontinuing breast-feeding is indicated if the mother takes the drug regularly.
Recreational use Nordazepam and other sedative-hypnotic drugs are detected frequently in cases of people suspected of driving under the influence of drugs. Many drivers have blood levels far exceeding the therapeutic range, suggesting benzodiazepines are commonly taken at doses higher than recommended. See also Benzodiazepine Benzodiazepine dependence Benzodiazepine withdrawal syndrome Long-term effects of benzodiazepines References External links Inchem - Nordazepam Benzodiazepines Lactams Chloroarenes Human drug metabolites
Nordazepam
[ "Chemistry" ]
970
[ "Chemicals in medicine", "Human drug metabolites" ]
2,938,583
https://en.wikipedia.org/wiki/List%20of%20auto%20parts
This is a list of auto parts, which are manufactured components of automobiles. This list reflects both fossil-fueled cars (using internal combustion engines) and electric vehicles; the list is not exhaustive. Many of these parts are also used on other motor vehicles such as trucks and buses. Car body and main parts Body components, including trim Doors Windows Low voltage/auxiliary electrical system and electronics Audio/video devices Cameras Low voltage electrical supply system Gauges and meters Ignition system Lighting and signaling system Sensors Starting system Electrical switches Wiring harnesses Miscellaneous Interior Floor components and parts Carpet and rubber and other floor material Center console (front and rear) Other components Roll cage or Exo cage Dash Panels Car seat Arm Rest Bench seat Bucket seat Children and baby car seat Fastener Headrest Seat belt Seat bracket Seat cover Seat track Other seat components Back seat Front seat Power-train and chassis Braking system Electrified powertrain components Engine components and parts Engine cooling system Engine oil systems Exhaust system Fuel supply system Suspension and steering systems Transmission system Miscellaneous auto parts Air conditioning system (A/C) Automobile air conditioning A/C Clutch A/C Compressor A/C Condenser A/C Hose high pressure A/C Kit A/C Relay A/C Valve A/C Expansion Valve A/C Low-pressure Valve A/C Schrader Valve A/C Inner Plate A/C Cooler A/C Evaporator A/C Suction Hose Pipe A/C Discharge Hose Pipe A/C Gas Receiver A/C Condenser Filter A/C Cabin Filter (Pollen Filter) Bearings Grooved ball bearing Needle bearing Roller bearing Sleeve bearing Wheel bearing Hose Fuel vapour hose Reinforced hose (high-pressure hose) Non-reinforced hose Radiator hose Other miscellaneous parts Logo Adhesive tape and foil Air bag Bolt cap License plate bracket Cables Speedometer cable Cotter pin Dashboard Center console Glove compartment Drag link Dynamic seal Fastener Gasket: Flat, moulded, profiled Hood and trunk release cable Horn and trumpet horn Injection-molded parts Instrument cluster Label Mirror Phone Mount Name plate Nut Flange nut Hex nut O-ring Paint Rivet Rubber (extruded and molded) Screw Shim Sun visor Washer See also 42-volt electrical system Fuel economy in automobiles Spare parts management Electric Car References Parts Auto
List of auto parts
[ "Technology" ]
466
[ "Lists of parts", "Components" ]
2,938,620
https://en.wikipedia.org/wiki/Eucalyptol
Eucalyptol (also called cineole) is a monoterpenoid and a bicyclic ether; it is a colorless liquid with a fresh camphor-like odor and a spicy, cooling taste. It is insoluble in water, but miscible with organic solvents. Eucalyptol makes up about 70–90% of eucalyptus oil. Eucalyptol forms crystalline adducts with hydrohalic acids, o-cresol, resorcinol, and phosphoric acid. Formation of these adducts is useful for purification. In 1870, F. S. Cloez identified and ascribed the name "eucalyptol" to the dominant portion of Eucalyptus globulus oil. Uses Because of its pleasant, spicy aroma and taste, eucalyptol is used in flavorings, fragrances, and cosmetics. Cineole-based eucalyptus oil is used as a flavoring at low levels (0.002%) in various products, including baked goods, confectionery, meat products, and beverages. In a 1994 report released by five top cigarette companies, eucalyptol was listed as one of the 599 additives to cigarettes. It is claimed to be added to improve the flavor. Eucalyptol is an ingredient in commercial mouthwashes, and has been used in traditional medicine as a cough suppressant. Other Eucalyptol exhibits insecticidal and insect repellent properties. In contrast, eucalyptol is one of many compounds that are attractive to males of various species of orchid bees, which gather the chemical to synthesize pheromones; it is commonly used as bait to attract and collect these bees for study. One such study with Euglossa imperialis, a nonsocial orchid bee species, has shown that the presence of cineole elevates territorial behavior and specifically attracts the male bees. It was even observed that these males would periodically leave their territories to forage for chemicals such as cineole in order to synthesize pheromones, which are thought to be important for attracting females and mating. Toxicology Eucalyptol has a median lethal dose (LD50) of 2.48 g/kg in rats. Ingestion in significant quantities is likely to cause headache and gastric distress, such as nausea and vomiting. Because of its low viscosity, it may directly enter the lungs if swallowed, or if subsequently vomited. Once in the lungs, it is difficult to remove and can cause delirium, convulsions, severe injury or death. Biosynthesis Eucalyptol is generated from geranyl pyrophosphate (GPP), which isomerizes to (S)-linalyl diphosphate (LPP). Ionization of the pyrophosphate, catalyzed by cineole synthase, produces eucalyptol. The process involves the intermediacy of the α-terpinyl cation. Plants containing eucalyptol Aframomum corrorima Artemisia tridentata Cannabis Cinnamomum camphora, camphor laurel (50%) Eucalyptus globulus Eucalyptus largiflorens Eucalyptus salmonophloia Eucalyptus staigeriana Eucalyptus wandoo Hedychium coronarium, butterfly lily Helichrysum gymnocephalum Kaempferia galanga, galangal (5.7%) Salvia officinalis subsp. lavandulifolia (syn. Salvia lavandulifolia), Spanish sage (13%) Salvia rosmarinus, rosemary Turnera diffusa, damiana Umbellularia californica, pepperwood (22.0%) Zingiber officinale, ginger See also Camphor Citral Eucalyptus oil Lavandula Menthol Mouthwash References Cooling flavors Monoterpenes Ethers
Eucalyptol
[ "Chemistry" ]
832
[ "Organic compounds", "Functional groups", "Ethers" ]
2,938,694
https://en.wikipedia.org/wiki/Kuramoto%20model
The Kuramoto model (or Kuramoto–Daido model), first proposed by Yoshiki Kuramoto, is a mathematical model used in describing synchronization. More specifically, it is a model for the behavior of a large set of coupled oscillators. Its formulation was motivated by the behavior of systems of chemical and biological oscillators, and it has found widespread applications in areas such as neuroscience and oscillating flame dynamics. Kuramoto was quite surprised when the behavior of some physical systems, namely coupled arrays of Josephson junctions, followed his model. The model makes several assumptions, including that there is weak coupling, that the oscillators are identical or nearly identical, and that interactions depend sinusoidally on the phase difference between each pair of objects. Definition In the most popular version of the Kuramoto model, each of the oscillators is considered to have its own intrinsic natural frequency ωi, and each is coupled equally to all other oscillators. Surprisingly, this fully nonlinear model can be solved exactly in the limit of infinite oscillators, N → ∞; alternatively, using self-consistency arguments one may obtain steady-state solutions of the order parameter. The most popular form of the model has the following governing equations: dθi/dt = ωi + (K/N) Σj sin(θj − θi), for i = 1, …, N, where the system is composed of N limit-cycle oscillators, with phases θi and coupling constant K. Noise can be added to the system. In that case, the original equation is altered to dθi/dt = ωi + ζi + (K/N) Σj sin(θj − θi), where ζi is the fluctuation, a function of time. If the noise is considered to be white noise, then ⟨ζi(t)⟩ = 0 and ⟨ζi(t) ζj(t′)⟩ = 2D δij δ(t − t′), with D denoting the strength of the noise. Transformation The transformation that allows this model to be solved exactly (at least in the N → ∞ limit) is as follows: Define the "order" parameters r and ψ as r e^(iψ) = (1/N) Σj e^(iθj). Here r represents the phase coherence of the population of oscillators and ψ indicates the average phase. Substituting into the equation gives dθi/dt = ωi + K r sin(ψ − θi). Thus the oscillators' equations are no longer explicitly coupled; instead the order parameters govern the behavior. A further transformation is usually done, to a rotating frame in which the statistical average of phases over all oscillators is zero (i.e. ψ = 0). Finally, the governing equation becomes dθi/dt = ωi − K r sin(θi). Large N limit Now consider the case as N tends to infinity. Take the distribution of intrinsic natural frequencies as g(ω) (assumed normalized). Then assume that the density of oscillators at a given phase θ, with given natural frequency ω, at time t is ρ(θ, ω, t). Normalization requires that ∫ ρ(θ, ω, t) dθ = 1 for each ω. The continuity equation for the oscillator density will be ∂ρ/∂t + ∂(ρv)/∂θ = 0, where v is the drift velocity of the oscillators, given by taking the infinite-N limit in the transformed governing equation, such that v = ω + K r sin(ψ − θ). Finally, the definition of the order parameters must be rewritten for the continuum (infinite N) limit: e^(iθi) must be replaced by its ensemble average (over all ω) and the sum must be replaced by an integral, to give r e^(iψ) = ∫ dω g(ω) ∫ dθ ρ(θ, ω, t) e^(iθ). Solutions for the large N limit The incoherent state with all oscillators drifting randomly corresponds to the solution ρ = 1/(2π). In that case r = 0, and there is no coherence among the oscillators. They are uniformly distributed across all possible phases, and the population is in a statistical steady state (although individual oscillators continue to change phase in accordance with their intrinsic ω). When coupling K is sufficiently strong, a fully synchronized solution is possible. In the fully synchronized state, all the oscillators share a common frequency, although their phases can be different.
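A minimal numerical sketch of the finite-N model, integrated in the mean-field form derived above and reporting the order parameter r (all parameter values are illustrative):

```python
import numpy as np

# Euler integration of dtheta_i/dt = omega_i + K*r*sin(psi - theta_i),
# where r*exp(i*psi) = (1/N) * sum_j exp(i*theta_j) is recomputed each step.
rng = np.random.default_rng(0)
N, K, dt, steps = 500, 1.5, 0.01, 20_000

omega = rng.standard_cauchy(N) * 0.5   # Lorentzian g(omega) with half-width 0.5
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases

for _ in range(steps):
    z = np.mean(np.exp(1j * theta))    # complex order parameter r * exp(i*psi)
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print(f"r = {np.abs(np.mean(np.exp(1j * theta))):.3f}")
# For Lorentzian g with half-width gamma, Kc = 2*gamma = 1.0 here, so K = 1.5
# should give r near sqrt(1 - Kc/K) ≈ 0.58 (see the partial-sync solution below).
```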
A solution for the case of partial synchronization yields a state in which only some oscillators (those near the ensemble's mean natural frequency) synchronize; other oscillators drift incoherently. Mathematically, locked oscillators satisfy ω = K r sin(θ − ψ), while oscillators with |ω| > K r drift; the cutoff between the two groups occurs at |ω| = K r. When g(ω) is unimodal and symmetric, a stable partially synchronized solution exists. As coupling increases, there is a critical value Kc = 2/(π g(0)) such that when K < Kc, the long-term average of r is zero, but when K exceeds Kc, the coherence r grows in proportion to √(K − Kc) just above the transition; for a Lorentzian g(ω), the closed form r = √(1 − Kc/K) holds. Small N cases When N is small, the solutions given above break down, as the continuum approximation cannot be used. The N = 2 case is trivial. In the rotating frame, the system is described exactly by the phase difference between the two oscillators, φ = θ1 − θ2, which obeys dφ/dt = Δω − K sin(φ), where Δω = ω1 − ω2. When K < |Δω|, the angle cycles around the circle (that is, the fast oscillator keeps lapping the slow oscillator). When K > |Δω|, the angle falls into a stable fixed point (that is, the two oscillators lock in phase). Similarly, the state space of the N = 3 case is a 2-dimensional torus, and so the system evolves as a flow on the 2-torus, which cannot be chaotic. Chaos first occurs at N = 4: for some settings of the frequencies and coupling, the system has a strange attractor. Connection to Hamiltonian systems The dissipative Kuramoto model is contained in certain conservative Hamiltonian systems. After a canonical transformation to action-angle variables with actions Ii and angles (phases) θi, exact Kuramoto dynamics emerges on invariant manifolds of constant action: on such a manifold the actions remain constant, and the phase dynamics becomes the dynamics of the Kuramoto model with the same coupling constants. This class of Hamiltonian systems characterizes certain quantum-classical systems, including Bose–Einstein condensates. Variations of the models There are a number of types of variations that can be applied to the original model presented above. Some models change the topological structure, others allow for heterogeneous weights, and other changes are more related to models that are inspired by the Kuramoto model but do not have the same functional form. Variations of network topology Beside the original model, which has an all-to-all topology, a sufficiently dense complex-network-like topology is amenable to the mean-field treatment used in the solution of the original model (see Transformation and Large N limit above for more information). Network topologies such as rings and coupled populations support chimera states. One may also ask about the behavior of models with intrinsically local topologies, like one-dimensional ones, of which the chain and the ring are prototypical examples. In such topologies, in which the coupling is not scalable according to 1/N, it is not possible to apply the canonical mean-field approach, so one must rely upon case-by-case analysis, making use of symmetries whenever possible, which may give a basis for abstracting general principles of solutions. Uniform synchrony, waves and spirals can readily be observed in two-dimensional Kuramoto networks with diffusive local coupling. The stability of waves in these models can be determined analytically using the methods of Turing stability analysis. Uniform synchrony tends to be stable when the local coupling is everywhere positive, whereas waves arise when the long-range connections are negative (inhibitory surround coupling).
Waves and synchrony are connected by a topologically distinct branch of solutions known as ripple. These are low-amplitude spatially-periodic deviations that emerge from the uniform state (or the wave state) via a Hopf bifurcation. The existence of ripple solutions was predicted (but not observed) by Wiley, Strogatz and Girvan, who called them multi-twisted q-states. The topology on which the Kuramoto model is studied can be made adaptive by use of a fitness model, showing enhancement of synchronization and percolation in a self-organised way. A graph with minimum degree at least $n/2$ will be connected, but for a graph to synchronize a little more is required. In this case it is known that there is a critical connectivity threshold $\mu_c$ such that any graph on $n$ nodes with minimum degree at least $\mu_c n$ must globally synchronise for $n$ large enough; the bounds on $\mu_c$ are known to lie between 0.6875 and 0.75. Similarly, it is known that Erdős–Rényi graphs with edge probability precisely $p = \log n / n$ will be connected as $n$ goes to infinity, and it has been conjectured that this value is also the threshold at which these random graphs undergo synchronization, which a 2022 preprint claims to have proved. Variations of network topology and network weights: from vehicle coordination to brain synchronization Some works in the control community have focused on the Kuramoto model on networks and with heterogeneous weights (i.e. the interconnection strength between any two oscillators can be arbitrary). The dynamics of this model reads as follows: $\frac{d\theta_i}{dt} = \omega_i + \sum_{j=1}^{N} a_{ij}\sin(\theta_j - \theta_i)$, where $a_{ij}$ is a nonzero positive real number if oscillator $j$ is connected to oscillator $i$. Such a model allows for a more realistic study of, e.g., power grids, flocking, schooling, and vehicle coordination. In the work of Dörfler and colleagues, several theorems provide rigorous conditions for phase and frequency synchronization of this model. Further studies, motivated by experimental observations in neuroscience, focus on deriving analytical conditions for cluster synchronization of heterogeneous Kuramoto oscillators on arbitrary network topologies. Since the Kuramoto model seems to play a key role in assessing synchronization phenomena in the brain, theoretical conditions that support empirical findings may pave the way for a deeper understanding of neuronal synchronization phenomena. Variations of the phase interaction function Kuramoto approximated the phase interaction between any two oscillators by its first Fourier component, namely $\Gamma(\varphi) = \sin(\varphi)$, where $\varphi = \theta_j - \theta_i$. Better approximations can be obtained by including higher-order Fourier components, $\Gamma(\varphi) = \sum_{m \ge 1}\big(a_m \sin(m\varphi) + b_m \cos(m\varphi)\big)$, where the parameters $a_m$ and $b_m$ must be estimated. For example, synchronization among a network of weakly-coupled Hodgkin–Huxley neurons can be replicated using coupled oscillators that retain the first four Fourier components of the interaction function. The introduction of higher-order phase interaction terms can also induce interesting dynamical phenomena such as partially synchronized states, heteroclinic cycles, and chaotic dynamics. Availability The pyclustering library includes a Python and C++ implementation of the Kuramoto model and its modifications. The library also contains oscillatory networks (for cluster analysis, pattern recognition, graph coloring, image segmentation) that are based on the Kuramoto model and the phase oscillator. See also Master stability function Oscillatory neural network Phase-locked loop Swarmalators References Exactly solvable models Lattice models Partial differential equations Articles containing video clips Nonlinear systems Synchronization Oscillation
Kuramoto model
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
2,184
[ "Telecommunications engineering", "Lattice models", "Computational physics", "Nonlinear systems", "Mechanics", "Condensed matter physics", "Oscillation", "Statistical mechanics", "Synchronization", "Dynamical systems" ]
2,938,800
https://en.wikipedia.org/wiki/Fresnel%20number
In optics, in particular scalar diffraction theory, the Fresnel number (F), named after the physicist Augustin-Jean Fresnel, is a dimensionless number relating to the pattern a beam of light forms on a surface when projected through an aperture. Definition For an electromagnetic wave passing through an aperture and hitting a screen, the Fresnel number F is defined as $F = \frac{a^2}{L\lambda}$, where $a$ is the characteristic size (e.g. radius) of the aperture, $L$ is the distance of the screen from the aperture, and $\lambda$ is the incident wavelength. Conceptually, it is the number of half-period zones in the wavefront amplitude, counted from the center to the edge of the aperture, as seen from the observation point (the center of the imaging screen), where a half-period zone is defined so that the wavefront phase changes by $\pi$ when moving from one half-period zone to the next. An equivalent definition is that the Fresnel number is the difference, expressed in half-wavelengths, between the slant distance from the observation point to the edge of the aperture and the orthogonal distance from the observation point to the center of the aperture. Application The Fresnel number is a useful concept in physical optics. The Fresnel number establishes a coarse criterion to define the near and far field approximations. Essentially, if the Fresnel number is small – less than roughly 1 – the beam is said to be in the far field. If the Fresnel number is larger than 1, the beam is said to be in the near field. However, this criterion does not depend on any actual measurement of the wavefront properties at the observation point. The angular spectrum method is an exact propagation method. It is applicable to all Fresnel numbers. A good approximation for the propagation in the near field is Fresnel diffraction. This approximation works well when, at the observation point, the distance to the aperture is bigger than the aperture size. This propagation regime corresponds to $F \ge 1$. Finally, once at the observation point the distance to the aperture is much bigger than the aperture size, propagation becomes well described by Fraunhofer diffraction. This propagation regime corresponds to $F \ll 1$. The reason why the angular spectrum method is not used in all cases is that for large propagation distances it demands a larger computation time than the other methods, and depending on the specific problem the required memory can exceed that of any computer. Gaussian pilot beam Another criterion, called the Gaussian pilot beam, which allows defining far and near field conditions, consists in measuring the actual wavefront surface curvature for an unaberrated system. In this case the wavefront is planar at the aperture position, when the beam is collimated, or at its focus when the beam is converging/diverging. In detail, within a certain distance from the aperture – the near field – the amount of wavefront curvature is low. Outside this distance – the far field – the amount of wavefront curvature is high. This concept applies equivalently close to the focus. This criterion, first described by G.N. Lawrence and now adopted in propagation codes like PROPER, allows one to determine the realm of application of near and far field approximations taking into account the actual wavefront surface shape at the observation point, to sample its phase without aliasing. 
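As a quick illustration of the coarse criterion above, the following sketch (not part of the original article; the numerical values are illustrative assumptions) computes F = a²/(Lλ) and picks the corresponding propagation regime.

    import math

    def fresnel_number(a, L, wavelength):
        # F = a^2 / (L * wavelength); all lengths in metres.
        return a * a / (L * wavelength)

    def coarse_regime(F):
        # The rough rule described above: F << 1 -> Fraunhofer (far field),
        # F >= 1 -> Fresnel (near field). The angular spectrum method is
        # exact and applies for any F.
        return "Fraunhofer (far field)" if F < 1.0 else "Fresnel (near field)"

    # Illustrative example: 1 mm aperture radius, 633 nm light, screen at 1 m.
    F = fresnel_number(a=1e-3, L=1.0, wavelength=633e-9)
    print(f"F = {F:.2f}: {coarse_regime(F)}")   # F ~ 1.58 -> near field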
This criterion is named Gaussian pilot beam and fixes the best propagation method (among angular spectrum, Fresnel and Fraunhofer diffraction) by looking at the behavior of a Gaussian beam piloted from the aperture position and the observation position. Near/far field approximations are fixed by the analytical calculation of the Gaussian beam Rayleigh length and by its comparison with the input/output propagation distance. If the ratio between the input/output propagation distance and the Rayleigh length is smaller than one, the surface wavefront maintains itself nearly flat along its path, which means that no sampling rescaling is requested for the phase measurement. In this case the beam is said to be near field at the observation point and the angular spectrum method is adopted for the propagation. On the contrary, once the ratio between the input/output propagation distance and the Gaussian pilot beam Rayleigh range is larger than one, the surface wavefront gets curvature along the path. In this case a rescaling of the sampling is mandatory for a measurement of the phase preventing aliasing. The beam is said to be far field at the observation point and Fresnel diffraction is adopted for the propagation. Fraunhofer diffraction is then an asymptotic case that applies only when the input/output propagation distance is large enough to consider the quadratic phase term within the Fresnel diffraction integral negligible, irrespective of the actual curvature of the wavefront at the observation point. The Gaussian pilot beam criterion thus allows describing the diffractive propagation for all the near/far field approximation cases set by the coarse criterion based on the Fresnel number. See also Fraunhofer distance Fresnel diffraction Fresnel imager Fresnel integral Fresnel zone Near and far field Talbot effect Zone plate References Bibliography External links Coyote's Guide to IDL Programming Diffraction
Fresnel number
[ "Physics", "Chemistry", "Materials_science" ]
1,057
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
2,938,855
https://en.wikipedia.org/wiki/Cobalt-60
Cobalt-60 (⁶⁰Co) is a synthetic radioactive isotope of cobalt with a half-life of 5.2714 years. It is produced artificially in nuclear reactors. Deliberate industrial production depends on neutron activation of bulk samples of the monoisotopic and mononuclidic cobalt isotope ⁵⁹Co. Measurable quantities are also produced as a by-product of typical nuclear power plant operation and may be detected externally when leaks occur. In the latter case (in the absence of added cobalt) the incidentally produced ⁶⁰Co is largely the result of multiple stages of neutron activation of iron isotopes in the reactor's steel structures via the creation of its precursor ⁵⁹Co. The simplest case of the latter would result from the activation of ⁵⁸Fe. ⁶⁰Co undergoes beta decay to the stable isotope nickel-60 (⁶⁰Ni). The excited nickel nucleus emits two gamma rays with energies of 1.17 and 1.33 MeV, hence the overall equation of the nuclear reaction (activation and decay) is: ⁵⁹Co + n → ⁶⁰Co → ⁶⁰Ni + e⁻ + ν̄ₑ + 2 γ Activity Given its half-life, the radioactive activity of a gram of ⁶⁰Co is close to 42 TBq (about 1,100 Ci). The absorbed dose constant is related to the decay energy and time. For ⁶⁰Co it is equal to 0.35 mSv/(GBq h) at one meter from the source. This allows calculation of the equivalent dose, which depends on distance and activity. For example, 2.8 GBq or 60 μg of ⁶⁰Co generates a dose of 1 mSv at 1 meter away, within an hour. The swallowing of ⁶⁰Co reduces the distance to a few millimeters, and the same dose is achieved within seconds. Test sources, such as those used for school experiments, have an activity of <100 kBq. Devices for nondestructive material testing use sources with activities of 1 TBq and more. The high γ-energies correspond to a significant mass difference between ⁶⁰Ni and ⁶⁰Co: 0.003 u. This amounts to nearly 20 watts per gram, nearly 30 times larger than that of ²³⁸Pu. Decay The diagram shows a simplified decay scheme of ⁶⁰Co and ⁶⁰ᵐCo. The main β-decay transitions are shown. The probability for population of the middle energy level of 2.1 MeV by β-decay is 0.0022%, with a maximum energy of 665.26 keV. Energy transfers between the three levels generate six different gamma-ray frequencies. In the diagram the two important ones are marked. Internal conversion energies are well below the main energy levels. ⁶⁰ᵐCo is a nuclear isomer of ⁶⁰Co with a half-life of 10.467 minutes. It decays by internal transition to ⁶⁰Co, emitting 58.6 keV gamma rays, or with a low probability (0.22%) by β-decay into ⁶⁰Ni. Applications The main advantage of ⁶⁰Co is that it is a high-intensity gamma-ray emitter with a relatively long half-life, 5.27 years, compared to other gamma ray sources of similar intensity. The β-decay energy is low and easily shielded; however, the gamma-ray emission lines have energies around 1.3 MeV, and are highly penetrating. The physical properties of cobalt, such as resistance to bulk oxidation and low solubility in water, give some advantages in safety in the case of a containment breach over some other gamma sources such as caesium-137. The main uses for ⁶⁰Co are: As a tracer for cobalt in chemical reactions Sterilization of medical equipment. Radiation source for medical radiotherapy. Cobalt therapy, using beams of gamma rays from ⁶⁰Co teletherapy machines to treat cancer. Radiation source for industrial radiography. Radiation source for leveling devices and thickness gauges. Radiation source for pest insect sterilization. As a radiation source for food irradiation and blood irradiation. 
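The figures in the Activity section can be checked with a short calculation. The sketch below (not part of the original article) derives the specific activity of ⁶⁰Co from its half-life and applies the quoted dose constant of 0.35 mSv/(GBq·h); the molar mass value is an assumption rounded to two decimals.

    import math

    HALF_LIFE_S = 5.2714 * 365.25 * 24 * 3600    # half-life of Co-60 in seconds
    AVOGADRO = 6.02214e23                        # atoms per mole
    MOLAR_MASS_G = 59.93                         # g/mol for Co-60 (assumed rounding)

    # Specific activity A = (ln 2 / t_half) * (N_A / M), in becquerels per gram.
    specific_activity = math.log(2) / HALF_LIFE_S * AVOGADRO / MOLAR_MASS_G
    print(f"specific activity ~ {specific_activity / 1e12:.1f} TBq/g")  # ~41.9 TBq/g

    # Dose rate at 1 m using the dose constant quoted above: 0.35 mSv/(GBq*h).
    activity_gbq = 2.8
    print(f"{activity_gbq} GBq -> ~{0.35 * activity_gbq:.2f} mSv/h at 1 m")  # ~0.98

The first figure reproduces the "close to 42 TBq per gram" value, and the second reproduces the example of 2.8 GBq delivering roughly 1 mSv in an hour at one metre.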
Cobalt has been discussed as a "salting" element to add to nuclear weapons, to produce a cobalt bomb, an extremely "dirty" weapon which would contaminate large areas with ⁶⁰Co nuclear fallout, rendering them uninhabitable. In one design, the tamper of the weapon would be made of ⁵⁹Co. When the bomb explodes, neutrons from the nuclear fission would irradiate the cobalt and transmute it to ⁶⁰Co. No country is known to have done any serious development of this type of weapon. Production ⁶⁰Co does not occur naturally on Earth in significant amounts, so it is synthesized by bombarding a ⁵⁹Co target with a slow neutron source. Californium-252, moderated through water, can be used for this purpose, as can the neutron flux in a nuclear reactor. CANDU reactors can be used to activate ⁵⁹Co, by substituting the control rods with cobalt rods. In the United States, as of 2010, it is being produced in a boiling water reactor at Hope Creek Nuclear Generating Station. The cobalt targets are substituted here for a small number of fuel assemblies. Still, over 40% of all single-use medical devices are sterilized using ⁶⁰Co from Bruce Nuclear Generating Station. ⁵⁹Co + n → ⁶⁰Co Safety Exposure to ⁶⁰Co is lethal for humans, and can cause death (potentially in less than an hour from acute exposure). After entering a living mammal (such as a human), assuming that the subject does not die shortly after exposure (as may happen in acute exposure incidents), some of the ⁶⁰Co is excreted in feces. The rest is taken up by tissues, mainly the liver, kidneys, and bones, where the prolonged exposure to gamma radiation can cause cancer. Over time, the absorbed cobalt is eliminated in urine. Steel contamination Cobalt is found in steel. Uncontrolled disposal of ⁶⁰Co in scrap metal is responsible for the radioactivity in some iron products. Circa 1983, construction was finished on 1,700 apartments in Taiwan which were built with steel contaminated with cobalt-60. About 10,000 people occupied these buildings during a 9–20 year period. On average, these people unknowingly received a radiation dose of 0.4 Sv. Some studies have found that this large group did not suffer a higher incidence of cancer mortality, as the linear no-threshold model would predict, but suffered a lower cancer mortality than the general Taiwan public. These observations support the radiation hormesis model; however, other studies have found health impacts that confound the results. In August 2012, Petco recalled several models of steel pet food bowls after US Customs and Border Protection found that they were emitting low levels of radiation, which was determined to be from ⁶⁰Co that had contaminated the steel. In May 2013, a batch of metal-studded belts sold by online retailer ASOS were confiscated and held in a US radioactive storage facility after testing positive for ⁶⁰Co. Incidents involving medical radiation sources A radioactive contamination incident occurred in 1984 in Ciudad Juárez, Chihuahua, Mexico, originating from a radiation therapy unit illegally purchased by a private medical company and subsequently dismantled for lack of personnel to operate it. The radioactive material, ⁶⁰Co, ended up in a junkyard, where it was sold to foundries that inadvertently smelted it with other metals and produced about 6,000 tons of contaminated rebar. These were distributed in 17 Mexican states and several cities in the United States. It is estimated that 4,000 people were exposed to radiation as a result of this incident. 
In the Samut Prakan radiation accident in 2000, a disused radiotherapy head containing a ⁶⁰Co source was stored at an unsecured location in Bangkok, Thailand and then accidentally sold to scrap collectors. Unaware of the danger, a junkyard employee dismantled the head and extracted the source, which remained unprotected for a period of days at the junkyard. Ten people, including the scrap collectors and workers at the junkyard, were exposed to high levels of radiation and became ill. Three junkyard workers later died of their exposure, which was estimated to be over 6 Gy. Afterward, the source was safely recovered by Thai authorities. In December 2013, a truck carrying a disused 111 TBq ⁶⁰Co teletherapy source from a hospital in Tijuana to a radioactive waste storage center was hijacked at a gas station near Mexico City. The truck was soon recovered, but the thieves had removed the source from its shielding. It was found intact in a nearby field. Despite early reports with lurid headlines asserting that the thieves were "likely doomed", the radiation sickness was mild enough that the suspects were quickly released to police custody, and no one is known to have died from the incident. Other incidents On 13 September 1999, six people tried to steal ⁶⁰Co rods from a chemical plant in the city of Grozny, Chechen Republic. During the theft, the suspects opened the radioactive material container and handled it, resulting in the deaths of three of the suspects and injury of the remaining three. The suspect who held the material directly in his hands died of radiation exposure 30 minutes later. This incident is described as an attempted theft, but some of the rods are reportedly still missing. Parity In 1957, Chien-Shiung Wu et al. discovered that β-decay violated parity, implying that nature has a handedness. In the Wu experiment, researchers aligned ⁶⁰Co nuclei by cooling the source to low temperatures in a magnetic field. Wu's observation was that more β-rays were emitted in the direction opposite to the nuclear spin. This asymmetry violates parity conservation. Suppliers Argentina, Canada, India and Russia are the largest suppliers of ⁶⁰Co in the world. Both Argentina and Canada have (as of 2022) an all-heavy-water reactor fleet for power generation. Canada has CANDU reactors in numerous locations throughout Ontario as well as Point Lepreau Nuclear Generating Station in New Brunswick, while Argentina has two German-supplied heavy water reactors at the Atucha nuclear power plant and a Canadian-built CANDU at Embalse Nuclear Power Station. India has CANDU reactors at the Rajasthan Atomic Power Station used for producing ⁶⁰Co. India had a capacity of more than 6 MCi of ⁶⁰Co production in 2021; this capacity is slated to increase with more CANDU reactors being commissioned at the Rajasthan Atomic Power Station. Heavy-water reactors are particularly well suited for production of ⁶⁰Co because of their excellent neutron economy and because their capacity for online refueling allows targets to be inserted into the reactor core and removed after a predetermined time without the need for cold shutdown. Also, the heavy water used as a moderator is commonly held at lower temperatures than is the coolant in light water reactors, allowing for a lower speed of neutrons, which increases the neutron cross section and decreases the rate of unwanted (n,2n) "knockout" reactions. In popular culture ⁶⁰Co is the material encasing a missile nuclear warhead in the 1970 film Beneath the Planet of the Apes. 
In an episode of 9-1-1 (TV series), a truck illegally transporting ⁶⁰Co causes a hazardous emergency for a team of firefighters. See also Cobalt bomb Harold E. Johns References External links Cobalt-60, Centers for Disease Control and Prevention. NLM Hazardous Substances Databank – Cobalt, Radioactive Beta decay of Cobalt-60, HyperPhysics, Georgia State University. Isotopes of cobalt Radioactive contamination
Cobalt-60
[ "Chemistry", "Technology" ]
2,261
[ "Isotopes of cobalt", "Environmental impact of nuclear power", "Radioactive contamination", "Isotopes" ]
2,938,915
https://en.wikipedia.org/wiki/Functional%20group%20%28ecology%29
A functional group is a collection of organisms that share characteristics within a community. Ideally, these would perform equivalent tasks based on domain forces, rather than a common ancestor or evolutionary relationship. This can lead to analogous structures that rule out homology. More specifically, these organisms produce similar effects on the external factors of the system they inhabit. Because a majority of these creatures share an ecological niche, it is practical to assume they require similar structures in order to achieve the greatest amount of fitness, that is, the ability to successfully reproduce to create offspring and to sustain life by avoiding predators and sharing food. Scientific investigation Rather than being based in theory, functional groups are directly observed and determined by research specialists. It is important that this information is witnessed firsthand in order to count as usable evidence. Behavior and overall contribution to others are common key points to look for. Individuals use the corresponding perceived traits to further link genetic profiles to one another. Although the species themselves are different, variables based on overall function and performance are interchangeable. These groups occupy equivalent parts of their energy flow, providing a key position within food chains and relationships within their environment(s). An ecosystem is the biological organization that encompasses the various environmental factors, abiotic and biotic, that interact simultaneously. Whether it be a producer or a consumer, each and every piece of life maintains a critical position in the ongoing survival of its own surroundings. Accordingly, a functional group fills a very specific role within any given ecosystem and the process of energy cycling. Categories There are generally two types of functional groups, ranging between flora and specific animal populations. Groups that relate to vegetation science, or flora, are known as plant functional types. Also referred to as PFTs for short, these often share identical photosynthetic processes and require comparable nutrients. As an example, plants that undergo photosynthesis share an identical purpose in producing chemical energy for others. In contrast, those within the animal science range are called guilds, typically sharing feeding types. This is easily seen when viewing trophic levels. Examples include primary consumers, secondary consumers, tertiary consumers, and quaternary consumers. Diversity Functional diversity is often defined as "the value and the range of those species and organismal traits that influence ecosystem functioning". Traits that make an organism unique, such as the way it moves, gathers resources, or reproduces, or the time of year it is active, add to the overall diversity of an entire ecosystem, and therefore enhance the overall function, or productivity, of that ecosystem. Functional diversity increases the overall productivity of an ecosystem by allowing for an increase in niche occupation. Species have evolved to be more diverse through each epoch of time, with plants and insects having some of the most diverse families discovered thus far. The unique traits of an organism can allow a new niche to be occupied, allow for better defense against predators, and potentially lead to specialization. 
Organismal-level functional diversity, which adds to the overall functional diversity of an ecosystem, is important for conservation efforts, especially in systems used for human consumption. Functional diversity can be difficult to measure accurately, but when done correctly, it provides useful insight into the overall function and stability of an ecosystem. Redundancy Functional redundancy refers to the phenomenon that species in the same ecosystem fill similar roles, which results in a sort of "insurance" in the ecosystem. Redundant species can easily do the job of a similar species from the same functional niche. This is possible because similar species have adapted to fill the same niche over time. Functional redundancy varies across ecosystems and can vary from year to year depending on multiple factors including habitat availability, overall species diversity, competition for resources, and anthropogenic influence. This variation can lead to a fluctuation in overall ecosystem production. It is not always known how many species occupy a functional niche, and how much, if any, redundancy is occurring in each niche in an ecosystem. It is hypothesized that each important functional niche is filled by multiple species. Similar to functional diversity, there is no one clear method for calculating functional redundancy accurately, which can be problematic. One method is to account for the number of species occupying a functional niche, as well as the abundance of each species; this can indicate how many total individuals in an ecosystem are performing one function (see the sketch after this section). Effects on conservation Studies relating to functional diversity and redundancy occur in a large proportion of conservation and ecological research. As the human population increases, the need for ecosystem function increases with it. In addition, as habitat destruction and modification continue to increase and suitable habitat for many species continues to decrease, this research becomes more important. As the human population continues to expand and become urbanized, native and natural landscapes are disappearing, being replaced with modified and managed land for human consumption. Alterations to landscapes are often accompanied by negative side effects including fragmentation, species losses, and nutrient runoff, which can affect the stability of an ecosystem, the productivity of an ecosystem, and functional diversity and functional redundancy by decreasing species diversity. It has been shown that intense land use affects both species diversity and functional overlap, leaving the ecosystem and the organisms in it vulnerable. Specifically, bee species, which we rely on for pollination services, have both lower functional diversity and species diversity in managed landscapes when compared to natural habitats, indicating that anthropogenic change can be detrimental for organismal functional diversity, and therefore overall ecosystem functional diversity. Additional research showed that the functional redundancy of herbivorous insects in streams varies with stream velocity, demonstrating that environmental factors can alter functional overlap. When conservation efforts begin, it is still up for debate whether preserving specific species or functional traits is a more beneficial approach for the preservation of ecosystem function. Higher species diversity can lead to an increase in overall ecosystem productivity, but does not necessarily ensure the security of functional overlap. 
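As a minimal sketch of the counting method just described (the species names, niche labels, and abundances below are invented for illustration):

    from collections import defaultdict

    # Hypothetical survey records: (species, functional niche, abundance).
    survey = [
        ("bee_A", "pollinator", 120),
        ("bee_B", "pollinator", 45),
        ("beetle_A", "decomposer", 80),
        ("beetle_B", "decomposer", 15),
        ("hawk_A", "top_predator", 4),
    ]

    # Count species and total individuals per niche, as described above.
    species_per_niche = defaultdict(int)
    individuals_per_niche = defaultdict(int)
    for species, niche, abundance in survey:
        species_per_niche[niche] += 1
        individuals_per_niche[niche] += abundance

    for niche in species_per_niche:
        print(niche, species_per_niche[niche], "species,",
              individuals_per_niche[niche], "individuals")

    # A niche held by a single species (top_predator here) has no redundancy:
    # losing that one species removes the function from the ecosystem entirely.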
In ecosystems with high redundancy, losing a species (which lowers overall functional diversity) will not always lower overall ecosystem function due to high functional overlap, and thus in this instance it is most important to conserve a group, rather than an individual species. In ecosystems with dominant species, which contribute a majority of the biomass output, it may be more beneficial to conserve this single species, rather than a functional group. The ecological concept of keystone species was redefined based on the presence of species with non-redundant trophic dynamics and measured biomass dominance within functional groups, which highlights the conservation benefits of protecting both species and their respective functional groups. Challenge Understanding functional diversity and redundancy, and the roles each plays in conservation efforts, is often hard to accomplish because the tools with which we measure diversity and redundancy cannot be used interchangeably. Due to this, recent empirical work most often analyzes the effects of either functional diversity or functional redundancy, but not both. This does not create a complete picture of the factors influencing ecosystem production. In ecosystems with similar and diverse vegetation, functional diversity is more important for overall ecosystem stability and productivity. Yet, in contrast, the functional diversity of native bee species in highly managed landscapes provided evidence for higher functional redundancy leading to higher fruit production, something humans rely heavily on for food consumption. A recent paper has stated that until a more accurate measuring technique is universally used, it is too early to determine which species, or functional groups, are most vulnerable and susceptible to extinction. Overall, understanding how extinction affects ecosystems, and which traits are most vulnerable, can protect ecosystems as a whole. See also Guild (ecology) References Ecology
Functional group (ecology)
[ "Biology" ]
1,543
[ "Ecology" ]
2,939,171
https://en.wikipedia.org/wiki/Bile%20bear
Bile bears, sometimes called battery bears, are bears kept in captivity to harvest their bile, a digestive fluid produced by the liver and stored in the gallbladder, which is used by some traditional Asian medicine practitioners. It is estimated that 12,000 bears are farmed for bile in China, South Korea, Laos, Vietnam, and Myanmar. Demand for the bile has been found in those nations as well as in some others, such as Malaysia and Japan. The bear species most commonly farmed for bile is the Asiatic black bear (Ursus thibetanus), although the sun bear (Helarctos malayanus), brown bear (Ursus arctos) and every other East Asian bear species is also used (the only exception being the giant panda, which does not produce UDCA). Both the Asiatic black bear and the sun bear are listed as Vulnerable on the Red List of Threatened Species published by the International Union for Conservation of Nature. They were previously hunted for bile, but factory farming has become common since hunting was banned in the 1980s. The bile can be harvested using several techniques, all of which require some degree of surgery, and may leave a permanent fistula or inserted catheter. A significant proportion of the bears die because of the stress of unskilled surgery or the infections which may occur. Farmed bile bears are housed continuously in small cages which often prevent them from standing or sitting upright, or from turning around. These highly restrictive cage systems and the low level of skilled husbandry can lead to a wide range of welfare concerns including physical injuries, pain, severe mental stress and muscle atrophy. Some bears are caught as cubs and may be kept in these conditions for up to 30 years. The value of the bear products trade is estimated as high as $2 billion. The practice of factory farming bears for bile has been extensively condemned, including by Chinese physicians. History Bear bile and gallbladders, which store bile, are ingredients in traditional Chinese medicine (TCM). Their first recorded use is found in the Newly Revised Materia Medica (Tang dynasty, 659 CE). The pharmacologically active ingredient contained in bear bile and gallbladders is ursodeoxycholic acid (UDCA); bears are the only mammals to produce significant amounts of UDCA. Initially, bile was collected from wild bears which were killed and the gall and its contents cut from the body. In the early 1980s, methods of extracting bile from live bears were developed in North Korea and farming of bile bears began. This rapidly spread to China and other regions. Bile bear farms started to reduce hunting of wild bears, with the hope that if bear farms raised a self-sustaining population of productive animals, poachers would have little motivation to capture or kill bears in the wild. The demand for bile and gallbladders exists in Asian communities throughout the world, including the European Union and the United States. This demand has led to bears being hunted in the US specifically for this purpose. Methods of bile extraction Several methods can be used to extract the bile. These all require surgery and include: Repeated percutaneous biliary drainage uses an ultrasound imager to locate the gallbladder, which is then punctured and the bile extracted. Permanent implantation uses a tube inserted into the gallbladder through the abdomen. According to the Humane Society of the United States (HSUS), the bile is usually extracted twice a day through such implanted tubes, producing 10–20 ml of bile during each extraction. 
Catheterization involves pushing a steel or perspex catheter through the bear's abdomen and into the gallbladder. The full-jacket method uses a permanent catheter tube to extract the bile which is then collected in a plastic bag set in a metal box worn by the bear. The free drip method involves making a permanent hole, or fistula, in the bear's abdomen and gallbladder, from which bile freely drips out. The wound is vulnerable to infection, and bile can leak back into the abdomen, causing high mortality rates. Sometimes, the hole is kept open with a perspex catheter, which HSUS writes causes severe pain. An AAF Vet Report states that surgeries to create free-dripping fistulae caused bears great suffering as they were performed without appropriate antibiotics or pain management and the bears were repeatedly exposed to this process as the fistulae often healed over. Removal of the whole gallbladder is sometimes used. This method is used when wild bears are killed for their bile. It has been estimated that 50–60 per cent of bears die from complications caused by the surgery or improper post-surgical care. Housing and husbandry Cubs are sometimes caught in the wild and used to supplement numbers held captive in farms. In 2008, it was reported that bear farms were paying the equivalent of US$280 to US$400 for a wild bear cub. Bile extraction begins at three years of age and continues for a minimum of five to ten years. Some bears may be kept in cages for bile extraction for 20 years or more. A bear can produce 2.2 kg of bile over a 5-year production life. When the bears outlive their productive bile-producing years (around 10 years old), they are often slaughtered and harvested for their other body parts such as meat, fur, paws and gallbladders; bear paws are considered a delicacy. To facilitate the bile extraction process, mature bears are usually kept in small cages measuring approximately 130 x 70 x 60 cm. These cages are so small they prevent the bears from being able to sit upright, stand or turn around. Some bears are kept in crush cages, the sides of which can be moved inwards to restrain the bear. The HSUS reports that some bears are moved to a crush cage for milking, but the remainder of the time live in a cage large enough to stand and turn around. Bile bears are often subjected to other procedures which have their own concomitant ethical and welfare concerns. These include declawing, in which the third phalanx of each front digit is amputated to prevent the bears from self-mutilating or harming the farm workers. They may also have their hind teeth removed for the same reasons. These procedures are often conducted by unskilled farm staff and may result in the bears experiencing constant pain thereafter. Pathology reports have shown that bile from sick bears is often contaminated with blood, pus, faeces, urine, bacteria and cancer cells. Welfare concerns International concern about the welfare of bile bears began in 1993. Many bile bear farms have little or no veterinary supervision and the animal husbandry is often conducted by non-skilled attendants. In combination with the impacts of small cage sizes, their spacing and lack of internal structures, there are several indicators of poor welfare. Physiological indicators Elevated corticosteroid concentrations are a widely acknowledged indicator of physiological stress. Corticosteroid concentrations in the hair of Asiatic black bears relocated from a bile farm to a bear rescue centre fell between 12 and 88% over 163 days. 
Other physiological indicators of stress and potentially reduced welfare include growth retardation and ulcers. A 2000 survey revealed that bile bears suffered from sores, skin conditions, ectoparasites, hair loss, bone deformities, injuries, swollen limbs, dental and breathing problems, diarrhoea and scarring. One survey of 165 bears removed from a farm showed that (out of 181 free-drip bears), 163 (90%) had cholecystitis, 109 (66%) had gallbladder polyps, 56 (34%) had abdominal herniation, 46 (28%) had internal abscessation, 36 (22%) had gallstones, and 7 (4%) had peritonitis. Many of the bears had a combination of these conditions. Behavioural indicators Academic sources have reported that bile bears exhibit abnormal behaviours such as stereotypies, lethargy, anxiety, and self-mutilation. The Chinese media reported a story in which a mother bear, having escaped her cage, strangled her own cub and then killed herself by intentionally running into a wall. Animals Asia attempted to verify the story after it went viral, and concluded it likely did not happen. Longevity and mortality Farmed bile bears live to an average age of five years old whereas healthy captive bears can live up to 35 years of age and wild bears for between 25 and 30 years. Legislation China In 1994, Chinese authorities announced that no new bear farms would be licensed and in 1996, issued a special notice stating that no foreign object was allowed to be inserted into a bear body. No bears younger than 3 years of age and lighter than 100 kg were to be used for bile extraction, and bears could be confined in cages only during the time of bile extraction. The authorities required the adoption of the free-drip method which necessitates the creation of an artificial fistula between the gallbladder and the abdominal wall by opening a cut into the gallbladder. In 2006, the Chinese State Council Information Office said that it was enforcing a "Technical Code of Practice for Raising Black Bears", which "requires hygienic, painless practice for gall extraction and make strict regulations on the techniques and conditions for nursing, exercise and propagation." However, a 2007 veterinary report published by the Animals Asia Foundation (AAF) stated that the Technical Code was not being enforced and that many bears were still spending their entire lives in small extraction cages without free access to food or water. The report also noted that the free-dripping technique promoted in the Technical Code was unsanitary as the fistula was an open portal through which bacteria could infiltrate the abdomen. The free-dripping method still requires the bears to be prodded with a metal rod when the wound heals over, and under veterinary examination some bears with free-dripping fistulae were found to have clear perspex catheters permanently implanted into their gallbladders. In addition to the suffering caused by infection and pain at the incision site, 28% of fistulated bears also experience abdominal hernias and more than one-third eventually succumb to liver cancer, believed to be associated with the bile-extraction process. South Korea In South Korea, bear farming was declared illegal in 1992. 
However, it was reported in 2008 that over 1,300 bears were still on 108 farms where farmers were hoping that legal farming would resume. Wild bears over the age of ten could still be legally killed for their gallbladders in South Korea. Vietnam The Vietnamese government in 2005 made it illegal to extract bear bile and made a commitment to phase out bear farming. In 2008, there were still 3,410 bears on farms in Vietnam. Alternative sources There are two alternatives to bile from farmed bears, namely bile from wild bears and synthetic sources. Wild bears Bile from farmed bears is considered inferior to bile and gall from wild bears. Implications for conservation Officially, 7,600 captive bears are farmed in China. According to Chinese officials, 10,000 wild bears would need to be killed each year to produce as much bile. Government officials see farming as a reasonable answer to the loss of wild bears from poaching and are insouciant about animal welfare concerns. However, the government's agreement to allow the rescue of 500 bears may represent a softening of this stance. Earthtrust reported in 1994 that demand for bear parts from South Korea and Taiwan was one of the greatest threats to bear populations worldwide. Due to partnerships between some South Korean travel companies and South Korean bear farmers, 'bear bile tourism' from the country has reportedly helped fuel the farming industry in China and Vietnam. World Animal Protection reported in 2002 that Japan's demand for bear bile remained at a level of at least 200 kg per year, which in theory would require 10,000 bears to be killed to be satisfied. Legislation and enforcement for bear products in the country has also been lacking. A 2015 report indicated that the illegal trade in bear bile and gallbladder for traditional medicine is open and widespread across Malaysia and is potentially a serious threat to wild bears. In a survey of 365 traditional medicine shops across Malaysia, 175 (48%) claimed to be selling bear gallbladders and medicinal products containing bear bile. Some supporters of bile bear farms argue, "Wildlife farming offers, at first glance, an intuitively satisfying solution: a legal trade can in principle be created by farming animals to assuage demand for wild animals which thus need not be harvested." Nonetheless, bears continue to be hunted in the wild to supply the bile farms. A survey in 2000 reported that almost all of the farms in the study supplemented their captive population of bile bears with wild-caught bears. This is claimed to be necessary because of difficulties with captive breeding. Consumers of bear bile have a strong preference for bile produced from wild bears; bile from farms may, therefore, not be a perfect substitute for bile from wild bears. Bear farming in Laos may be increasing the incentive to poach wild bears. A review of multiple types of wildlife tourist attractions concluded that bile bear farms had negative impacts on both animal welfare and conservation. Poaching in the United States In the late 1980s, U.S. park rangers began finding bear carcasses missing only gallbladders and paws. Initially, it was considered that occasional hunters were the cause, but investigations uncovered evidence that large commercial organizations were dealing in poaching and smuggling. During a three-year operation (Operation SOUP) ending in 1999, 52 people were arrested and 300 gallbladders seized in Virginia. 
Another investigation in Oregon led police to bring racketeering charges against an organisation that poached an estimated 50 to 100 bears per year for a decade. It was estimated in 2008 that in North America, 40,000 American black bears are illegally poached for their gallbladders and paws each year. Synthetic sources The pharmacologically active ingredient contained in bear bile is ursodeoxycholic acid (UDCA). This can be synthesized using cow or pig bile, or even using no animal ingredients. The generic drug name is Ursodiol and it is now being widely produced under brand names such as Actigall, Urso, Ursofalk, Ursogal and Ursotan. It was estimated in 2008 that 100,000 kg of synthetic UDCA was already being used each year in China, Japan and South Korea, and that the total world consumption may be double this figure. However, many traditional doctors still consider natural (but farmed) UDCA a superior product. In Japan, UDCA has been synthesised from cow galls, as a by-product of the meat industry, since 1955. It is also produced in the U.S. by Ciba-Geigy. In 2014, Kaibao Pharmaceuticals, which supplies approximately half of the bear bile (as dry powder) consumed in China, stated it is developing another synthetic source derived from poultry bile. The goal is to more closely recreate the chemistry of bear bile powder, so that the alternative is deemed appropriate by traditional Chinese medicine practitioners. In May 2024, Kaibao reported to investors that the approval process for its "artificial bear bile powder" had been delayed by recent law changes, in that additional clinical trials need to be approved and executed. Statistics Wild population The world population of Asiatic black bears decreased by 30% to 49% between 1980 and 2010. Although their reliability is unclear, range-wide estimates of 5–6,000 bears have been presented by Russian biologists. Rough density estimates without corroborating methods or data have been made in India and Pakistan, resulting in estimates of 7–9,000 in India and 1,000 in Pakistan. Unsubstantiated estimates from China give varying estimates between 15,000 and 46,000, with a government estimate of 28,000. Some estimates put the current (2015) total Asian worldwide population as low as 25,000. Farmed population The World Society for the Protection of Animals was reported in 2011 as saying that more than 12,000 bears are currently estimated to be housed in both illegal and legal bear farms across Asia. China World Animal Protection conducted a study in 1999 and 2000, and estimated that 247 bear bile farms in China were holding 7,002 bears, though the Chinese government called the figures "pure speculation." The Chinese consider bear farms a way to reduce the demand on the wild bear population. China has been found to be the main source of bear bile products on sale throughout South-East Asia; this international trade in bear parts and derivatives is strictly prohibited by the Convention on International Trade in Endangered Species of Wild Fauna and Flora. In 2010, there were approximately 97 establishments in China keeping bile bears. 
This was a decrease from the mid-1990s, when Xinhua reported 480 bear farms in the country. In 2013, estimates of bears kept in cages in China for bile production ranged from 9,000 to 20,000 bears on nearly 100 domestic bear farms. One company (Fujian Guizhen Tang Pharmaceutical Co. Ltd) alone has more than 400 black bears to supply bile using the free drip method. The bile is harvested twice a day to collect a total of approximately 130 ml from each bear per day. Before the existence of bear farms (i.e. pre-1980) the demand for bear bile in China was about 500 kilos annually. In 2008, the demand had risen to about 4,000 kilos annually. South Korea According to the South Korean Environment Ministry, 1,374 bears were raised at 74 farms across South Korea as of 2009. In South Korea, it is legal to keep bears for bile, and bears older than 10 years old can be harvested for their paws and organs. In 2022, the number of bears on South Korean farms had declined to 322 animals in 20 farms. Project Moon Bear, a South Korean nongovernmental organization (NGO), has been campaigning to end bear farming in the country. In 2022, the South Korean government, associations of bear farmers, and NGOs announced a joint declaration to end bear farming by 2026. Laos In Laos, the first farm was established in 2000. The number of farmed bears tripled from 2008 to 2012. In 2012, there were 121 Asiatic black bears and one sun bear on 11 commercial facilities. It is possible that all the bears were wild-caught domestically, or illegally imported internationally. This is in violation of both national and international law. In Laos in 2011, bear bile was selling for 120,000 kip (US$15) per ml, half the average monthly wage of 240,000 kip. A 2019 survey of locals in Luang Prabang found that although attitudes towards bears and conservation were generally positive, awareness of cruelty in the bear farms was lacking, with 43.7% of respondents regarding bile consumption sourced from bear farms as acceptable. Vietnam A 2019 survey published by the Claremont Colleges of 206 older Northern Vietnamese locals found that roughly 44% of respondents knew at least one other person who had used bear products from farms in the past year, while the remaining 55.33% did not. 8.72% of respondents listed protecting bears as one of the reasons a person would not use such products, compared to roughly 21% who listed quality or preferring different medicine as reasons. Bile products The monetary value of the bile comes from the traditional prescription of bear bile by doctors practicing traditional medicine. Bear bile contains ursodeoxycholic acid. It is purchased and consumed to treat hemorrhoids, sore throats, sores, bruising, muscle ailments, sprains, epilepsy, reduce fever, improve eyesight, break down gallstones, act as an anti-inflammatory, reduce the effects of overconsumption of alcohol, and to 'clear' the liver. It is currently found in various forms for sale including whole gallbladders, raw bile, pills, powder, flakes, and ointment. Because only minute amounts of bile are used in TCM, a total of 500 kg of bear bile is used by practitioners every year, but according to WSPA, more than 7,000 kg are being produced. The surplus has reportedly been used as ingredients in beauty products and non-traditional health tonics. China's National Health Commission drew criticism in 2020 after reportedly recommending 'Tan Re Qing', a traditional medicine which contains the bile, to treat severe cases of COVID-19. 
Some South Korean bear bile farmers in the same year advertised that their products could also help with the coronavirus, drawing criticism from local animal rights groups. Efficacy Scientific studies have found components of bear bile to have some anti-inflammatory, anti-microbial, or hepatoprotective effects. The active ingredient in bear bile is ursodeoxycholic acid. Ursodeoxycholic acid has been shown to exert anti-inflammatory and protective effects in human epithelial cells of the gastrointestinal tract. It has been linked to the regulation of immunoregulatory responses through the regulation of cytokines and of the antimicrobial peptides known as defensins, and it takes an active part in increased restitution of wounds in the colon. Moreover, UDCA has been shown to exert actions outside the epithelial cells. Bear bile has been shown in studies to be able to dissolve gallstones in the gallbladder. Due to controversy around the use of bear farming to obtain bile, synthetic sources for ursodeoxycholic acid are currently being investigated. Scientists in China have been working on synthetic forms of bile products, so that scientists need not use animal sources for bile. In this way, it is hoped that in the future, bile can be created by methods that do not involve animal cruelty. Cost In 2011, the overall worldwide trade in bear parts, including bile, was estimated to be a $2 billion industry. Gallbladder In 1970, 1 kg of bear gallbladder cost approximately US$200, but by 1990 the price had risen to between US$3,000 and US$5,000 per kg. In 2009, the market price for legally sold gallbladders in Hong Kong had risen to between US$30,000 and US$50,000 per kg. It was reported in 1991 that bear gallbladders and the like could sell in Seoul "for 10 times their price in China", with prices for one ranging from US$700 to US$3,292. In 2002, the pricier bear galls in Japan were reported to be selling for as much as US$83 per gram, and were either sourced domestically, or from Tibet or China. A report published in 2013 stated that a poacher in North America can usually get US$100 to $150 for a gallbladder, but the organs can fetch $5,000 to $10,000 in the end-market once they are processed into a powder. The report also stated that the HSUS indicated a bear gallbladder can cost more than $3,000 in Asia. A TRAFFIC report estimated that prices for whole gallbladders were as low as $51.11 (Myanmar) and as high as $2,000 (Hong Kong SAR). For gallbladder by the gram, the least expensive was $0.11 per gram (Thailand) and the highest was $109.70 per gram (Japan). Raw bile and bile powder Raw bile can sell for as much as US$24,000 per kg, about half the price of gold. There is huge profitability in the trade of bile powder. In 2007, while the wholesale price of bile powder was approximately US$410 per kg in China, the retail price increased from 25 to 50 fold in South Korea, and to 80 fold in Japan, i.e. US$33,000 per kg. Pills Pill prices ranged from as low as $0.38 per pill (Malaysia) to $3.83 per pill (Thailand), and in the U.S. approximately $1 per pill, an average price between the two countries. Businesses In 2010, the Guizhentang Pharmaceutical company was one of the most successful bile extraction companies in China, paying some 10 million yuan in taxes. In 2012, the company tried to go public on the Shenzhen stock exchange and proposed to triple the company's stock of captive bears, from 400 to 1,200. 
This provoked a large backlash from activists, internet users and protesters. It was followed by a number of controversies along with public interviews. The company responded with demonstrations of the extraction process where the bears seemed unconcerned by the procedure, in an attempt to counter the allegations its business was cruel. See also Animals Asia Foundation Free the Bears Fund Snake wine, a rice wine made with snake bile References "Torment of the moon bears" by Pat Sinclair, The Guardian, October 11, 2005, retrieved October 18, 2005 Chinese government attends official opening of Animals Asia's Moon bear rescue centre ..." Animals Asia Foundation press release, December 2002, retrieved October 18, 2005 "The Trade in Bear Bile", World Animal Protection, 2000, retrieved October 18, 2005 Press Conference on Animal Welfare, Embassy of the People's Republic of China in the United Kingdom of Britain and Northern Ireland, January 12, 2006 Further reading McLaughlin, Kathleen E. "Freeing China's Caged Bile Bears", San Francisco Chronicle, April 25, 2005 "Ending the bear bile industry" World Animal Protection External links - ESDAW website which has video of conditions on bile bear farms website mongobay.com : Asian bear farming: breaking the cycle of exploitation (warning: graphic images) MoonBears.org TheBearTruth.org Animal glandular products Animal keeping by humans Animal products Animal welfare and rights in China Asiatic black bears Cruelty to animals Ethically disputed business practices towards animals Traditional Chinese medicine
Bile bear
[ "Chemistry" ]
5,467
[ "Animal products", "Natural products" ]
2,939,274
https://en.wikipedia.org/wiki/Paris%20green
Paris green (copper(II) acetate triarsenite or copper(II) acetoarsenite) is an arsenic-based organic pigment. As a green pigment it is also known as Mitis green, Schweinfurt green, Sattler green, emerald, Vienna green, Emperor green or Mountain green. It is a highly toxic emerald-green crystalline powder that has been used as a rodenticide and insecticide, and also as a pigment. It was first manufactured in 1814 as a pigment for making a vibrant green paint, and was used by many notable painters in the 19th century. The color of Paris green is said to range from a pale blue green when very finely ground, to a deeper green when coarsely ground. Due to the presence of arsenic, the pigment is extremely toxic. In paintings, the color can degrade quickly. Preparation and structure Paris green may be prepared by combining copper(II) acetate and arsenic trioxide. The structure was confirmed by X-ray crystallography. History In 1814, Paris green was invented by paint manufacturers Wilhelm Sattler and Friedrich Russ, in Schweinfurt, Germany for the Wilhelm Dye and White Lead Company. They were attempting to produce a more stable pigment than Scheele's green, seeking to make a green that was less susceptible to darkening around sulfides. In 1822, the recipe for emerald green was published by Justus von Liebig and André Braconnot. In 1867, the pigment was named Paris green and was officially recognized as the first chemical insecticide in the world. Because of its arsenic content, the pigment was dangerous and toxic to manufacture, often resulting in factory poisonings. At the time, emerald green was praised as a more durable and vibrant substitute for Scheele's green, even though it would later prove to degrade quickly and react with other manufactured paints. Pigment In paintings, the pigment produces a rich, dark green with an undertone of blue. In comparison, Scheele's green is more yellow, and therefore more lime-green. Paris green became popular in the 19th century because of its brilliant color. It was also called emerald green because of its resemblance to the gemstone's deep color. Permanence The pigment has a tendency to darken and turn brown. The issue was already apparent in the 19th century. In an 1888 study, watercolors with the pigment were shown to darken and turn brown when exposed to natural light and air. Experiments at the turn of the 20th century gave mixed results. Some found that the Paris green degraded slightly while other sources said the pigment was weatherproof. This discrepancy could be due to the fact that each experiment used a different brand of Paris green. The Paris green in Descente des Vaches by Théodore Rousseau has changed significantly. Related pigments Similar natural compounds are the minerals chalcophyllite, conichalcite, cornubite, cornwallite, and liroconite. These minerals range in color from greenish blue to slightly yellowish green. Scheele's green is a chemically simpler, less brilliant, and less permanent copper-arsenic pigment used for a rather short time before Paris green was first prepared, which was approximately 1814. It was popular as a wallpaper pigment and would degrade, with moisture and molds, to arsine gas. Paris green was used in wallpaper to some extent and may have degraded similarly. Both pigments were once used in printing ink formulations. The ancient Romans used one of them, possibly conichalcite, as a green pigment. The Paris green paint used by the Impressionists is said to have been composed of relatively coarse particles. 
Later, the chemical was produced with increasingly fine grinds and without careful removal of impurities, and its permanence suffered. It is likely that it was ground more finely for use in watercolors and inks. Uses Painting Paris green was widely used by 19th-century artists. It is present in several paintings by Claude Monet and Paul Gauguin, who found its color difficult to replicate with natural materials. Insecticide In 1867, farmers in Illinois and Indiana found that Paris green was effective against the Colorado potato beetle, an aggressive agricultural pest. Despite concerns regarding the safety of using arsenic compounds on food crops, Paris green became the preferred method for controlling the beetle. By the 1880s, its use against the beetle had become the first widespread use of a chemical insecticide in the world. It was also used widely in the Americas to control the tobacco budworm, Heliothis virescens. To kill codling moths, it was mixed with lime and sprayed on fruit trees. Paris green was heavily sprayed by airplane in Italy, Sardinia, and Corsica during 1944 and in Italy in 1945 to control malaria. It was once used to kill rats in Parisian sewers, which is how it acquired its common name. However, the manufacturing of the insecticide caused many health complications for factory workers, and in certain cases was lethal. Bookbindings Throughout the 19th century, Paris green and similar arsenic pigments were used in books, particularly on bookcloth coverings, textblock edges, decorative labels and onlays, and in printed or manual illustrations. The colorant is particularly prevalent in bookbindings from the 1850s and 1860s published in Germany, England, France, and the United States. Use of arsenic-containing pigments waned in the later part of the 19th century with heightened awareness of their toxicity and the availability of less toxic chromium- and cobalt-based alternatives. Since February 2024, several German libraries have started to block public access to their stocks of 19th-century books in order to check them for arsenic contamination. The Poison Book Project has cataloged books with these bindings. Wallpaper Paris green became a popular paint in mass-produced wallpaper, and exposure to it is believed to have shortened the lives of those who lived with it. Wallpaper swatches from this era have been preserved in the book Shadows from the Walls of Death. See also List of colors List of inorganic pigments References Further reading Fiedler, I. and Bayard, M. A., "Emerald Green and Scheele's Green", in Artists' Pigments: A Handbook of Their History and Characteristics, Vol. 3: E.W. Fitzhugh (Ed.), Oxford University Press, 1997, pp. 219–271. Spear, Robert J., The Great Gypsy Moth War, A History of the First Campaign in Massachusetts to Eradicate the Gypsy Moth, 1890–1901. University of Massachusetts Press, Amherst and Boston, 2005. External links Case Studies in Environmental Medicine - Arsenic Toxicity How Emerald green is made National Pollutant Inventory – Copper and compounds fact sheet Emerald green, Colourlex Acetates Arsenites Copper(II) compounds Insecticides Organic pigments Rodenticides Shades of green
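A hedged note on the preparation section above (my own illustration, not from the article): taking the usual acetoarsenite formula Cu(CH3COO)2·3Cu(AsO2)2 for Paris green, one balanced equation for its formation from copper(II) acetate and arsenic trioxide in water is

\[ 4\,\mathrm{Cu(CH_3COO)_2} + 3\,\mathrm{As_2O_3} + 3\,\mathrm{H_2O} \longrightarrow \mathrm{Cu(CH_3COO)_2\cdot 3Cu(AsO_2)_2} + 6\,\mathrm{CH_3COOH} \]

The stoichiometry is mine and is checked only for atom balance; the article does not specify reaction conditions or the historical recipe.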
Paris green
[ "Biology" ]
1,404
[ "Biocides", "Rodenticides" ]
2,939,279
https://en.wikipedia.org/wiki/Plesiomorphy%20and%20symplesiomorphy
In phylogenetics, a plesiomorphy ("near form") and symplesiomorphy are synonyms for an ancestral character shared by all members of a clade, which does not distinguish the clade from other clades. Plesiomorphy, symplesiomorphy, apomorphy, and synapomorphy all denote a trait shared between species because they share an ancestral species. Apomorphic and synapomorphic characteristics convey much information about evolutionary clades and can be used to define taxa; however, plesiomorphic and symplesiomorphic characteristics cannot. The term symplesiomorphy was introduced in 1950 by the German entomologist Willi Hennig. Examples A backbone is a plesiomorphic trait shared by birds and mammals, and does not help in placing an animal in one or the other of these two clades. Birds and mammals share this trait because both clades are descended from the same far distant ancestor. Other clades, e.g. snakes, lizards, turtles, fish, and frogs, all have backbones and none are either birds or mammals. Being a hexapod is a plesiomorphic trait shared by ants and beetles, and does not help in placing an animal in one or the other of these two clades. Ants and beetles share this trait because both clades are descended from the same far distant ancestor. Other clades, e.g. bugs, flies, bees, aphids, and many more, are all hexapods and none are either ants or beetles. Elytra are a synapomorphy for placing any living species into the beetle clade, but elytra are plesiomorphic between clades of beetles, e.g. they do not distinguish the dung beetles from the horned beetles. The metapleural gland is a synapomorphy for placing any living species into the ant clade. Feathers are a synapomorphy for placing any living species into the bird clade, while hair is a synapomorphy for placing any living species into the mammal clade. Note that some mammal species have lost their hair, so the absence of hair does not exclude a species from being a mammal. Another mammalian synapomorphy is milk: all mammals produce milk, and no other clade contains animals which produce milk. Feathers and milk are also apomorphies. Discussion All of these terms are by definition relative, in that a trait can be a plesiomorphy in one context and an apomorphy in another; e.g. having a backbone is plesiomorphic between birds and mammals, but is apomorphic between them and insects. That is, birds and mammals are vertebrates, for which the backbone is a defining synapomorphic characteristic, while insects are invertebrates, for which the absence of a backbone is a defining characteristic. Species should not be grouped purely by morphologic or genetic similarity. Because a plesiomorphic character inherited from a common ancestor can appear anywhere in a phylogenetic tree, its presence does not reveal anything about the relationships within the tree. Thus grouping species requires distinguishing ancestral from derived character states. An example is thermo-regulation in Sauropsida, which is the clade containing the lizards, turtles, crocodiles, and birds. Lizards, turtles, and crocodiles are ectothermic (coldblooded), while birds are endothermic (warmblooded). Being coldblooded is symplesiomorphic for lizards, turtles, and crocodiles, but they do not form a clade, as crocodiles are more closely related to birds than to lizards and turtles. Using coldbloodedness to group crocodiles with lizards and turtles would thus be an error; it is a plesiomorphic trait shared by these three groups due to their distant common ancestry.
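To make the idea concrete, here is a minimal sketch in Python (mine, not from the article). The toy tree and taxon names are illustrative only, and turtles are omitted for brevity: a shared character state can serve as a synapomorphy only if its bearers are exactly the tips of some clade.

# child -> parent map for a toy amniote phylogeny (illustrative only)
PARENT = {
    "bird": "Archosauria", "crocodile": "Archosauria",
    "Archosauria": "Sauropsida", "lizard": "Sauropsida",
    "Sauropsida": "Amniota", "mammal": "Amniota",
}
TIPS = ["bird", "crocodile", "lizard", "mammal"]
NODES = set(PARENT) | set(PARENT.values())

def ancestors(name):
    """Chain of ancestors from a node up to the root."""
    out = []
    while name in PARENT:
        name = PARENT[name]
        out.append(name)
    return out

def clade_tips(node):
    """Living tips contained in the clade rooted at `node`."""
    return {t for t in TIPS if t == node or node in ancestors(t)}

def defines_clade(bearers):
    """True if some clade's tips are exactly the bearers of the state,
    i.e. the state could serve as a synapomorphy for that clade."""
    return any(clade_tips(n) == set(bearers) for n in NODES)

# Coldbloodedness (crocodile + lizard) picks out no clade, because
# crocodiles are closer to birds than to lizards: a symplesiomorphy.
print(defines_clade({"crocodile", "lizard"}))   # False
# Endothermy among these sauropsids marks birds, a genuine clade here.
print(defines_clade({"bird"}))                  # True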
See also Apomorphy Autapomorphy Cladistics Synapomorphy Notes References Phylogenetics ca:Plesiomorfia de:Plesiomorphie pt:Plesiomorfia
Plesiomorphy and symplesiomorphy
[ "Biology" ]
849
[ "Bioinformatics", "Phylogenetics", "Taxonomy (biology)" ]
2,939,479
https://en.wikipedia.org/wiki/Fontages%20Airport
Fontages Airport is located east of Fontanges, Quebec, Canada. References James Bay Project Registered aerodromes in Nord-du-Québec
Fontages Airport
[ "Engineering" ]
30
[ "James Bay Project", "Macro-engineering" ]
2,939,579
https://en.wikipedia.org/wiki/Declaration%20of%20Helsinki
The Declaration of Helsinki (DoH) is a set of ethical principles regarding human experimentation developed originally in 1964 for the medical community by the World Medical Association (WMA). It is widely regarded as the cornerstone document on human research ethics. It is not a legally binding instrument under international law, but instead draws its authority from the degree to which it has been codified in, or influenced, national or regional legislation and regulations. Its role was described by a Brazilian forum in 2000 in these words: "Even though the Declaration of Helsinki is the responsibility of the World Medical Association, the document should be considered the property of all humanity." Principles The Declaration is morally binding on physicians, and that obligation overrides any national or local laws or regulations, if the Declaration provides for a higher standard of protection of humans than the latter. Investigators still have to abide by local legislation but will be held to the higher standard. Basic principles The fundamental principle is respect for the individual (Article 8), his or her right to self-determination and the right to make informed decisions (Articles 20, 21 and 22) regarding participation in research, both initially and during the course of the research. The investigator's duty is solely to the patient (Articles 2, 3 and 10) or volunteer (Articles 16, 18), and while there is always a need for research (Article 6), the participant's welfare must always take precedence over the interests of science and society (Article 5), and ethical considerations must always take precedence over laws and regulations (Article 9). The recognition of the increased vulnerability of individuals and groups calls for special vigilance (Article 8). It is recognized that when the research participant is incompetent, physically or mentally incapable of giving consent, or is a minor (Articles 23, 24), then allowance should be considered for surrogate consent by an individual acting in the participant's best interest, although his or her consent should still be obtained if at all possible (Article 25). Operational principles Research should be based on a thorough knowledge of the scientific background (Article 11), a careful assessment of risks and benefits (Articles 16, 17), have a reasonable likelihood of benefit to the population studied (Article 19) and be conducted by suitably trained investigators (Article 15) using approved protocols, subject to independent ethical review and oversight by a properly convened committee (Article 13). The protocol should address the ethical issues and indicate that it is in compliance with the Declaration (Article 14). Studies should be discontinued if the available information indicates that the original considerations are no longer satisfied (Article 17). Information regarding the study should be publicly available (Article 16). Ethical principles extend to publication of the results and consideration of any potential conflict of interest (Article 27). Experimental investigations should always be compared against the best methods, but under certain circumstances a placebo or no treatment group may be utilized (Article 29). The interests of the participant after the study is completed should be part of the overall ethical assessment, including assuring their access to the best proven care (Article 30).
Wherever possible unproven methods should be tested in the context of research where there is reasonable belief of possible benefit (Article 32). Additional guidelines or regulations Investigators often find themselves in the position of having to follow several different codes or guidelines, and are therefore required to understand the differences between them. One of these is Good Clinical Practice (GCP), an international guide, while each country may also have local regulations such as the Common Rule in the US, in addition to the requirements of the FDA and Office for Human Research Protections (OHRP) in that country. There are a number of available tools which compare these. Other countries have guides with similar roles, such as the Tri-Council Policy Statement in Canada. Additional international guidelines include those of the CIOMS, Nuffield Council and UNESCO. History The Declaration was originally adopted in June 1964 in Helsinki, Finland, and has since undergone eight revisions (the most recent at the General Assembly in October 2024) and two clarifications, growing considerably in length from 11 paragraphs in 1964 to 37 in the 2024 version. The Declaration is an important document in the history of research ethics as it is the first significant effort of the medical community to regulate research itself, and forms the basis of most subsequent documents. Prior to the 1947 Nuremberg Code there was no generally accepted code of conduct governing the ethical aspects of human research, although some countries, notably Germany and Russia, had national policies. The Declaration developed the ten principles first stated in the Nuremberg Code, and tied them to the Declaration of Geneva (1948), a statement of physicians' ethical duties. The Declaration more specifically addressed clinical research, reflecting changes in medical practice from the term 'Human Experimentation' used in the Nuremberg Code. A notable change from the Nuremberg Code was a relaxation of the conditions of consent, which was 'absolutely essential' under Nuremberg. Now doctors were asked to obtain consent 'if at all possible' and research was allowed without consent where a proxy consent, such as a legal guardian, was available (Article II.1). First revision (1975) The 1975 revision was almost twice the length of the original. It clearly stated that "concern for the interests of the subject must always prevail over the interests of science and society." It also introduced the concept of oversight by an 'independent committee' (Article I.2) which became a system of Institutional Review Boards (IRB) in the US, and research ethics committees or ethical review boards in other countries. In the United States regulations governing IRBs came into effect in 1981 and are now encapsulated in the Common Rule. Informed consent was developed further, made more prescriptive and partly moved from 'Medical Research Combined with Professional Care' into the first section (Basic Principles), with the burden of proof for not requiring consent being placed on the investigator to justify to the committee. 'Legal guardian' was replaced with 'responsible relative'. The duty to the individual was given primacy over that to society (Article I.5), and concepts of publication ethics were introduced (Article I.8). Any experimental manoeuvre was to be compared to the best available care as a comparator (Article II.2), and access to such care was assured (Article I.3). The document was also made gender neutral.
Second to fourth revisions (1975–2000) Subsequent revisions between 1975 and 2000 were relatively minor, so the 1975 version was effectively that which governed research over a quarter of a century of relative stability. Second and third revisions (1983, 1989) The second revision (1983) included seeking the consent of minors where possible. The third revision (1989) dealt further with the function and structure of the independent committee. However, from 1993 onwards, the Declaration was not alone as a universal guide since CIOMS and the World Health Organization (WHO) had also developed their International Ethical Guidelines for Biomedical Research Involving Human Subjects. Fourth revision (1996) Background The AIDS Clinical Trials Group (ACTG) Study 076 of Zidovudine in maternal-infant transmission of HIV had been published in 1994. This was a placebo-controlled trial which showed a reduction of nearly 70% in the risk of transmission, and Zidovudine became a de facto standard of care. The subsequent initiation of further placebo-controlled trials carried out in developing countries and funded by the United States Centers for Disease Control or National Institutes of Health raised considerable concern when it was learned that patients in trials in the US had essentially unrestricted access to the drug, while those in developing countries did not. Justification was provided by a 1994 WHO group in Geneva which concluded "Placebo-controlled trials offer the best option for a rapid and scientifically valid assessment of alternative antiretroviral drug regimens to prevent transmission of HIV". These trials appeared to be in direct conflict with recently published guidelines for international research by CIOMS, which stated "The ethical standards applied should be no less exacting than they would be in the case of research carried out in country", referring to the sponsoring or initiating country. In fact a schism between ethical universalism and ethical pluralism was already apparent before the 1993 revision of the CIOMS guidelines. Fourth revision In retrospect, this was one of the most significant revisions because it added the phrase "This does not exclude the use of inert placebo in studies where no proven diagnostic or therapeutic method exists" to Article II.3 ("In any medical study, every patient—including those of a control group, if any—should be assured of the best proven diagnostic and therapeutic method."). Critics claimed that the Zidovudine trials in developing countries were in breach of this because Zidovudine was now the best proven treatment and the placebo group should have been given it. This led to the US Food and Drug Administration (FDA) ignoring this and all subsequent revisions. Fifth revision (2000) Background Following the fourth revision in 1996, pressure began to build almost immediately for a more fundamental approach to revising the declaration. The later revision in 2000 would go on to require monitoring of scientific research on human subjects to assure ethical standards were being met. In 1997 Lurie and Wolfe published their seminal paper on HIV trials, raising awareness of a number of central issues. These included the claim that the continuing trials in developing countries were unethical, and pointed out a fundamental discrepancy in decisions to change the study design in Thailand but not Africa.
The issue of the use of placebo in turn raised questions about the standard of care in developing countries and whether, as Marcia Angell wrote, "Human subjects in any part of the world should be protected by an irreducible set of ethical standards" (1988). The American Medical Association put forward a proposed revision in November that year, and a proposed revision (17.C/Rev1/99) was circulated the following year, causing considerable debate and resulting in a number of symposia and conferences. Recommendations included limiting the document to basic guiding principles. Many editorials and commentaries were published reflecting a variety of views, including concerns that the Declaration was being weakened by a shift towards efficiency-based and utilitarian standards (Rothman, Michels and Baum 2000), and an entire issue of the Bulletin of Medical Ethics was devoted to the debate. Others saw it as an example of Angell's 'ethical imperialism', an imposition of US needs on the developing world, and resisted any but the most minor changes, or even a partitioned document with firm principles and commentaries, as used by CIOMS. The idea of ethical imperialism was brought into high attention with HIV testing, as it was strongly debated from 1996 to 2000 because of its centrality to the issue of regimens to prevent its vertical transmission. Brennan summarises this by stating "The principles exemplified by the current Declaration of Helsinki represent a delicate compromise that we should modify only after careful deliberation". Nevertheless, what had started as a controversy over a specific series of trials and their designs in Sub-Saharan Africa now had potential implications for all research. These implications further came into public view since the Helsinki declaration had stated, "In the treatment of the sick person, the physician must be free to use a new diagnostic and therapeutic measure, if in his or her judgement, it offers hope of saving life, reestablishing health or alleviating suffering." Fifth revision Even though most meetings about the proposed revisions failed to achieve consensus, and many argued that the declaration should remain unchanged or only minimally altered, after extensive consultation the Workgroup eventually came up with a text that was endorsed by WMA's Council and passed by the General Assembly on October 7, 2000, and which proved to be the most far-reaching and contentious revision to date. The justification for this was partly to take account of the expanded scope of biomedical research since 1975. This involved a restructuring of the document, including the renumbering and re-ordering of all the articles. The Introduction establishes the rights of subjects and describes the inherent tension between the need for research to improve the common good and the rights of the individual. The Basic Principles establish a guide for judging to what extent proposed research meets the expected ethical standards. The distinction between therapeutic and non-therapeutic research introduced in the original document, criticised by Levine, was removed to emphasize the more general application of ethical principles, but the application of the principles to healthy volunteers is spelt out in Articles 18–19, and they are referred to in Article 8 ('those who will not benefit personally from the research') as being especially vulnerable.
The scope of ethical review was increased to include human tissue and data (Article 1), the necessity to challenge accepted care was added (Article 6), and the primacy of ethical requirements over laws and regulations was established (Article 9). Amongst the many changes was an increased emphasis on the need to benefit the communities in which research is undertaken, and to draw attention to the ethical problems of experimenting on those who would not benefit from the research, such as developing countries in which innovative medications would not be available. Article 19 first introduces the concept of social justice, and extends the scope from individuals to the community as a whole by stating that 'research is only justified if there is a reasonable likelihood that the populations in which the research is carried out stand to benefit from the results of the research'. This new role for the Declaration has been both denounced and praised, and was even considered for a clarification footnote (Macklin R., Future challenges for the Declaration of Helsinki: Maintaining credibility in the face of ethical controversies, address to the Scientific Session, World Medical Association General Assembly, September 2003, Helsinki). Article 27 expanded the concept of publication ethics, adding the necessity to disclose conflicts of interest (echoed in Articles 13 and 22) and including publication bias amongst ethically problematic behaviors. Additional principles The most controversial revisions (Articles 29, 30) were placed in this new category. Predictably, these were those that, like the fourth revision, were related to the ongoing debate in international health research. The discussions indicate that a need was felt to send a strong signal that exploitation of poor populations as a means to an end, by research from which they would not benefit, was unacceptable. In this sense the Declaration endorsed ethical universalism. Article 29 restates the use of placebo where 'no proven' intervention exists. Surprisingly, although the wording was virtually unchanged, this created far more protest in this revision, the implication being that placebos are not permitted where proven interventions are available. The placebo question was already an active debate prior to the fourth revision but had intensified, while at the same time the placebo question was still causing controversy in the international setting. This revision implies that in choosing a study design, developed-world standards of care should apply to any research conducted on human subjects, including those in developing countries. The wording of the fourth and fifth revisions reflects the position taken by Rothman and Michels and by Freedman et al., known as 'active-control orthodoxy'. The opposing view, as expressed by Levine and by Temple and Ellenberg, is referred to as 'placebo orthodoxy', insisting that placebo controls are more scientifically efficient and are justifiable where the risk of harm is low. This viewpoint argues that where no standards of care exist, as for instance in developing countries, then placebo-controlled trials are appropriate. The utilitarian argument held that the disadvantage to a few (such as denial of potentially beneficial interventions) was justifiable for the advantage of many future patients. These arguments are intimately tied to the concept of distributive justice, the equitable distribution of the burdens of research. As with much of the Declaration, there is room for interpretation of words.
'Best current' has been variously held to refer to either global or local contexts. Article 30 introduced another new concept, that after the conclusion of the study patients 'should be assured of access to the best proven' intervention arising from the study, a justice issue. Arguments over this have dealt with whether subjects derive benefit from the trial and are no worse off at the end than they were before the trial, or than they would have been had they not participated, versus the harm of being denied access to that to which they have contributed. There are also operational issues that are unclear. Aftermath Given the lack of consensus on many issues prior to the fifth revision, it is no surprise that the debates continued unabated. The debate over these and related issues also revealed differences in perspectives between developed and developing countries. Zion and colleagues (Zion 2000) have attempted to frame the debate more carefully, exploring the broader social and ethical issues and the lived realities of potential subjects' lives, as well as acknowledging the limitations of absolute universality in a diverse world, particularly those framed in a context that might be considered elitist and structured by gender and geographic identity. As Macklin points out, both sides may be right, since justice "is not an unambiguous concept". Clarifications of Articles 29, 30 (2002–2004) Eventually Notes of Clarification (footnotes) to Articles 29 and 30 were added in 2002 and 2004 respectively, predominantly under pressure from the US (CMAJ 2003, Blackmer 2005). The 2002 clarification to Article 29 was in response to many concerns about WMA's apparent position on placebos. As WMA states in the note, there appeared to be 'diverse interpretations and possibly confusion'. It then outlined circumstances in which a placebo might be 'ethically acceptable', namely 'compelling... methodological reasons', or 'minor conditions' where the 'risk of serious or irreversible harm' was considered low. Effectively this shifted the WMA position to what has been considered a 'middle ground'. Given the previous lack of consensus, this merely shifted the ground of debate, which now extended to the use of the 'or' connector. For this reason the footnote indicates that the wording must be interpreted in the light of all the other principles of the Declaration. Article 30 was debated further at the 2003 meeting with another proposed clarification, but this did not result in any convergence of thought, so decisions were postponed for another year, though again a commitment was made to protecting the vulnerable. A new working group examined Article 30, and in January 2004 recommended not amending it. Later that year the American Medical Association proposed a further note of clarification that was incorporated. In this clarification the issue of post-trial care now became something to consider, not an absolute assurance. Despite these changes, as Macklin predicted, consensus was no closer and the Declaration was considered by some to be out of touch with contemporary thinking, and even the question of the future of the Declaration became a matter for conjecture. Considerable deliberation has taken place regarding the most effective approach to address the concerns related to paragraph 30.
Two distinct working groups have explored this matter and put forth various suggestions, which encompass potential revisions to the paragraph, the inclusion of a preamble, and the introduction of a clarifying note (similar to what was incorporated into paragraph 29). At a gathering of the WMA Council in France in May 2004, the American Medical Association presented the following clarifying statement: The WMA reaffirms its stance that it is imperative, within the study planning phase, to identify provisions for post-trial access by research participants to prophylactic, diagnostic, and therapeutic procedures deemed beneficial in the study or to access to other appropriate healthcare. The specifics of post-trial access arrangements or alternative care should be outlined in the study protocol, enabling the ethical review committee to evaluate these provisions during its assessment. Sixth revision (2008) The sixth revision cycle commenced in May 2007 with a call for submissions, completed in August 2007. The terms of reference included only a limited revision compared to 2000. In November 2007 a draft revision was issued for consultation until February 2008, and led to a workshop in Helsinki in March. Those comments were then incorporated into a second draft in May. Further workshops were held in Cairo and São Paulo and the comments collated in August 2008. A final text was then developed by the Working Group for consideration by the Ethics Committee and finally the General Assembly, which approved it on October 18. Public debate was relatively slight compared to previous cycles, and in general supportive. Input was received from a wide number of sources, some of which have been published, such as Feminist Approaches to Bioethics. Others include CIOMS and the US Government. Seventh revision (2013) The seventh revision of Helsinki (2013) reflected the controversy regarding the standard of care that arose from the vertical transmission trials. The revised declaration of 2013 also highlights the need to disseminate research results, including negative and inconclusive studies, and includes a requirement for treatment and compensation for injuries related to research. In addition, the updated version is felt to be more relevant to limited resource settings—specifically addressing the need to ensure access to an intervention if it is proven effective. Eighth revision (2024) The eighth revision of Helsinki (2024) newly highlights the roles of global inequities in medical research and includes a new statement that scientific integrity "is essential in the conduct of medical research involving human participants. Involved individuals, teams, and organizations must never engage in research misconduct". Future The controversies and national divisions over the text have continued. The US FDA rejected the 2000 and subsequent revisions, only recognizing the third (1989) revision, and in 2006 announced it would eliminate all reference to the Declaration. After consultation, which included expressions of concern, a final rule was issued on April 28, 2008, replacing the Declaration of Helsinki with Good Clinical Practice effective October 2008. This has raised a number of concerns regarding the apparent weakening of protections for research subjects outside the United States. The NIH training in human subject research participant protection no longer refers to the Declaration of Helsinki.
The European Union similarly only cites the 1996 version in the EU Clinical Trials Directive published in 2001. The European Commission, however, does refer to the 2000 revision. While the Declaration has been a central document guiding research practice, its future has been called into question. Challenges include the apparent conflict between guides, such as the CIOMS and Nuffield Council documents. Another is whether it should concentrate on basic principles, as opposed to being more prescriptive and hence controversial. It has continually grown and faced more frequent revisions. The recent controversies undermine the authority of the document, as does the apparent desertion by major bodies, and any rewording must embrace deeply and widely held values, since continual shifts in the text do not imply authority. The actual claim to authority, particularly on a global level, by the insertion of the word "international" in Article 10 has been challenged. Carlson raises the question as to whether the document's utility should be more formally evaluated, rather than just relying on tradition. The Declaration's long-standing pre-eminence There appears to be a noticeable trend toward more frequent changes in the Declaration of Helsinki (DoH). However, it is important to note that only two of the revisions, in 1975 and 2000, introduced significant alterations. This means that there was an 11-year gap between the original version and the first comprehensive revision (1964 to 1975), and a 25-year gap between the two comprehensive revisions (1975 to 2000). Consequently, the DoH, essentially in its 1975 version, had a quarter-century to establish itself within the medical research community, and this has significantly contributed to its current status. The World Medical Association (WMA) One potential explanation is that it derives its legitimacy from being an official declaration of the World Medical Association (WMA). This organization represents the largest global assembly of physicians, and consequently, it could be argued that the WMA is a credible and authoritative entity for issuing statements on behalf of the medical profession as a whole. However, a historical observation appears to challenge the notion that this explains the Declaration of Helsinki's authority. It can be argued that the Declaration was most widely accepted as an authoritative document during the period from the late 1970s (after the 1975 amendment had been widely promulgated) to the mid-to-late 1990s, when increasing demands for changes to the Declaration began to emerge. Notably, this period was marked by significant internal unrest within the WMA. In the 1980s, a group of countries known as the 'Toronto Group', which included the UK, withdrew from the WMA due to persistent objections related to the South African Medical Association's failure to denounce apartheid. Historical events eventually led to the reconciliation of this division, and all the countries that had previously withdrawn had rejoined the WMA by 1995. Timeline (WMA meetings) 1964: Original version. 18th Meeting, Helsinki 1975: First revision. 29th Meeting, Tokyo 1983: Second revision. 35th Meeting, Venice 1989: Third revision. 41st Meeting, Hong Kong 1996: Fourth revision. 48th Meeting, Somerset West (South Africa) 2000: Fifth revision.
52nd Meeting, Edinburgh 2002: First clarification, Washington 2004: Second clarification, Tokyo 2008: Sixth revision, 59th Meeting, Seoul 2013: Seventh revision, 64th Meeting, Fortaleza 2024: Eighth revision, 75th Meeting, Helsinki Other notable developments 2014: This was the 50th anniversary of the declaration. To mark this special occasion, the WMA published "The World Medical Association Declaration of Helsinki: 1964-2014 50 Years of Evolution of Medical Research Ethics." 2016: The Declaration of Taipei on Ethical Considerations regarding Health Databases and Biobanks finally complemented the Declaration of Helsinki. See also Informed consent Medical ethics Clinical trial Human experimentation in the United States Clinical Research References Training U.S. National Institutes of Health (NIH) - Protecting Human Subject Research Participants Bibliography Articles 1990-1999 2000-2008 Prior to fifth revision Following fifth revision Vastag B. Helsinki Discord? A Controversial Declaration. JAMA 2000 Dec 20 284:2983-2985 (password required) (References) Singer P, Benatar S. Beyond Helsinki: a vision for global health ethics. BMJ 2001 March 31 322:747-748 Frewer A, Schmidt U, eds. History and theory of human experimentation: the Declaration of Helsinki and modern medical ethics. Stuttgart: Franz Steiner Verlag, 2007. Following sixth revision WMA News: Revising the Declaration of Helsinki. World Medical Journal 2008; 54(4): 120-25 WMA International response to Helsinki VI (2000). WMA 2001 Other codes and regulations Nuremberg Code Declaration of Helsinki Belmont Report CIOMS Good clinical practice (GCP) International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use Code of Federal Regulations External links Nuremberg Code Declaration of Geneva 1948 Declaration of Helsinki: 1983 (Second revision) Declaration of Helsinki: 2000 (Fifth revision, with footnotes from 2002, 2004) Declaration of Helsinki: 2013 (Seventh revision - Current) International ethical guidelines for biomedical research involving human subjects. 2002 CIOMS WMA Medical Ethics Manual 2005 CIOMS UNESCO: Universal declaration on bioethics and human rights. 2005 CFR Title 45 Public Welfare CFR Title 45 Part 46 Protection of Human Subjects Tri-Council Policy Statement: Ethical conduct for research involving humans (Canada) Research ethics Clinical research ethics 1960s in Helsinki 1964 in Finland 1964 documents
Declaration of Helsinki
[ "Technology" ]
5,599
[ "Research ethics", "Ethics of science and technology" ]
2,939,751
https://en.wikipedia.org/wiki/Methylparaben
Methylparaben (methyl paraben), one of the parabens, is a preservative with the chemical formula CH3(C6H4(OH)COO). It is the methyl ester of p-hydroxybenzoic acid. Natural occurrences Methylparaben serves as a pheromone for a variety of insects and is a component of queen mandibular pheromone. In wolves, it is produced during estrus and is associated with the behavior of alpha males preventing other males from mounting females in heat. Uses Methylparaben is an anti-fungal agent often used in a variety of cosmetics and personal-care products. It is also used as a food preservative and has the E number E218. Methylparaben is commonly used as a fungicide in Drosophila food media at 0.1%. To Drosophila, methylparaben is toxic at higher concentrations, has an estrogenic effect (mimicking estrogen in rats and having anti-androgenic activity), and slows the growth rate in the larval and pupal stages at 0.2%. Safety There is controversy about whether methylparaben or propylparaben is harmful at the concentrations typically used in body care or cosmetics. Methylparaben and propylparaben are considered generally recognized as safe (GRAS) by the USFDA for food and cosmetic antibacterial preservation. Methylparaben is readily metabolized by common soil bacteria, making it completely biodegradable. Methylparaben is readily absorbed from the gastrointestinal tract or through the skin. It is hydrolyzed to p-hydroxybenzoic acid and rapidly excreted in urine without accumulating in the body. Acute toxicity studies have shown that methylparaben is practically non-toxic by both oral and parenteral administration in animals. In a population with normal skin, methylparaben is practically non-irritating and non-sensitizing; however, allergic reactions to ingested parabens have been reported. A 2008 study found no competitive binding of methylparaben to human estrogen and androgen receptors, but varying levels of competitive binding were seen with butyl- and isobutyl-paraben. Studies indicate that methylparaben applied on the skin may react with UVB, leading to increased skin aging and DNA damage. References External links Methylparaben at Hazardous Substances Data Bank Methylparaben at Household Products Database European Commission Scientific Committee on Consumer Products Extended Opinion on the Safety Evaluation of Parabens (2005) Methyl esters Parabens E-number additives Semiochemicals Insect pheromones ja:メチルパラベン
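A hedged aside (not from the article): since methylparaben is the methyl ester of p-hydroxybenzoic acid, the textbook Fischer esterification route can be written as

\[ \mathrm{HOC_6H_4COOH} + \mathrm{CH_3OH} \xrightarrow{\ \mathrm{H^+}\ } \mathrm{HOC_6H_4COOCH_3} + \mathrm{H_2O} \]

which also makes the molecular formula C8H8O3 easy to read off. The article does not state the production route; this is the standard laboratory preparation of simple esters.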
Methylparaben
[ "Chemistry" ]
561
[ "Insect pheromones", "Chemical ecology", "Semiochemicals" ]
2,939,955
https://en.wikipedia.org/wiki/Attention%20Profiling%20Mark-up%20Language
Attention Profiling Mark-up Language (APML) is an XML-based markup language for documenting a person's interests and dislikes. Overview APML allows people to share their own personal attention profile in much the same way that OPML allows the exchange of reading lists between news readers. The idea behind APML is to compress all forms of attention data into a portable file format containing a description of the user's rated interests. The APML Workgroup The APML Workgroup is tasked with maintaining and refining the APML specification. The APML Workgroup is made up of industry experts and leaders and was founded by Chris Saad and Ashley Angell. The workgroup allows public recommendations and input, and actively evangelises the public's "Attention Rights". The workgroup also adheres to the principles of Media 2.0 Best Practices. Services Services that have adopted APML Bloglines was an RSS reader. It was one of the major RSS readers on the web, with its main competitor being Google Reader. Bloglines announced it would support APML. OpenLink Data Spaces is a Distributed Collaborative Web Application Platform, Social Network and Content Management System. Specifications See also Digital traces References XML-based standards XML markup languages
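As a concrete illustration (not from the article), the sketch below uses Python's standard xml.etree.ElementTree to build a small APML-style profile of rated interests. The element and attribute names (APML, Head, Body, Profile, ImplicitData, Concepts, Concept with key/value) follow the APML 0.6 draft as commonly described, but should be treated as illustrative rather than normative.

import xml.etree.ElementTree as ET

def build_profile(title, concepts):
    """concepts: mapping of interest key -> score in [-1.0, 1.0]."""
    apml = ET.Element("APML", version="0.6")
    head = ET.SubElement(apml, "Head")
    ET.SubElement(head, "Title").text = title
    body = ET.SubElement(apml, "Body", defaultprofile="Everyday")
    profile = ET.SubElement(body, "Profile", name="Everyday")
    implicit = ET.SubElement(profile, "ImplicitData")
    concepts_el = ET.SubElement(implicit, "Concepts")
    for key, value in concepts.items():
        # Each Concept carries a key (the interest) and a value
        # (how strongly the user attends to it; negative = dislike).
        ET.SubElement(concepts_el, "Concept", key=key, value=f"{value:.2f}")
    return ET.tostring(apml, encoding="unicode")

print(build_profile("Example attention profile",
                    {"usability": 0.90, "phylogenetics": 0.35, "spam": -0.75}))

The point of the format, as the overview says, is portability: any reader or service that understands the schema could consume the same ranked-interest file.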
Attention Profiling Mark-up Language
[ "Technology" ]
256
[ "Computer standards", "XML-based standards" ]
2,940,678
https://en.wikipedia.org/wiki/Backhousia%20citriodora
Backhousia citriodora, commonly known as lemon myrtle, lemon scented myrtle or lemon scented ironwood, is a flowering plant in the family Myrtaceae. It is native to the subtropical rainforests of central and south-eastern Queensland, Australia, with a natural distribution from Mackay to Brisbane. Description and ecology The species can reach in height, but is often smaller. The leaves are evergreen, opposite, lanceolate, long and broad, glossy green, with an entire margin. The flowers are creamy-white, in diameter, produced in clusters at the ends of the branches from summer through to autumn. After petal fall, the calyx is persistent. A significant fungal pathogen, myrtle rust (Uredo rangelii), was detected in lemon myrtle plantations in January 2011. Myrtle rust severely damages new growth and threatens lemon myrtle production. Etymology Lemon myrtle was given the botanical name Backhousia citriodora by Ferdinand von Mueller in 1853 after his friend, the English botanist, James Backhouse. The common name reflects the strong lemon smell of the crushed leaves. 'Lemon scented myrtle' was the primary common name until the shortened trade name, 'lemon myrtle', was created by the native foods industry to market the leaf for culinary use. Lemon myrtle is now the more common name for the plant and its products. Lemon myrtle is sometimes confused with 'lemon ironbark', which is Eucalyptus staigeriana. Other common names are sweet verbena tree, lemon scented verbena (not to be confused with lemon verbena), and sweet verbena myrtle. Uses History Aboriginal Australians have long used lemon myrtle, both in cuisine and as a healing plant. The oil has the highest citral purity, typically higher than that of lemongrass. It is also considered to have a "cleaner and sweeter" aroma than comparable sources of citral, lemongrass and Litsea cubeba. In 1888, Bertram first isolated the essential oil from B. citriodora. In 1925, it was found to be significantly germicidal, and it was later shown to be antimicrobial. During World War II, in the 1940s, Tarax was the first company to use B. citriodora oil as a lemon flavouring. In 1989, B. citriodora was investigated as a potential leaf spice and commercial crop by Peter Hardwick, who commissioned the Wollongbar Agricultural Institute to analyse B. citriodora selections using gas chromatography. In 2001, a standard for oil of B. citriodora was established by the Essential Oils Unit, Wollongbar, and Standards Australia. Culinary Lemon myrtle is one of the well-known bushfood flavours and is sometimes referred to as the "Queen of the lemon herbs". The leaf is often used as dried flakes, or in the form of an encapsulated flavour essence for enhanced shelf-life. It has a range of uses, such as lemon myrtle flakes in shortbread; flavouring in pasta; whole leaf with baked fish; infused in macadamia or vegetable oils; and made into tea, including tea blends. It can also be used as a lemon flavour replacement in milk-based foods, such as cheesecake, lemon flavoured ice-cream and sorbet, without the curdling problem associated with lemon fruit acidity. Backhousia citriodora has two essential oil chemotypes. The citral chemotype is more prevalent and is cultivated in Australia for flavouring and essential oil. Citral as an isolate in steam-distilled lemon myrtle oil is typically 90–98%, and oil yield is 1–3% from fresh leaf. The citronellal chemotype is uncommon, and can be used as an insect repellent. The dried leaf has free radical scavenging ability.
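An illustrative back-of-envelope calculation from the figures above (my arithmetic, not the article's): at an oil yield of 1–3% from fresh leaf and a citral content of 90–98% in the oil,

\[ 100\ \text{kg fresh leaf} \times (0.01\text{--}0.03) = 1\text{--}3\ \text{kg oil}, \qquad 1\text{--}3\ \text{kg oil} \times (0.90\text{--}0.98) \approx 0.9\text{--}2.9\ \text{kg citral}. \]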
Antimicrobial Lemon myrtle essential oil possesses antimicrobial properties; however, the undiluted essential oil is toxic to human cells in vitro. When the oil is diluted to approximately 1%, absorption through the skin and subsequent damage are thought to be minimal. Lemon myrtle oil has a high Rideal–Walker coefficient, a measure of antimicrobial potency. Use of lemon myrtle oil as a treatment for skin lesions caused by molluscum contagiosum virus (MCV), a disease typically affecting children and immuno-compromised patients, has been investigated. Nine of sixteen patients who were treated with 10% strength lemon myrtle oil showed a significant improvement, compared to none in the control group. A study in 2003 which investigated the effectiveness of different preparations of lemon myrtle against bacteria and fungi concluded that the plant had potential as an antiseptic or surface disinfectant, or as an anti-microbial food additive. The oil is a popular ingredient in health care and cleaning products, especially soaps, lotions, skin-whitening preparations and shampoos. Cultivation Lemon myrtle is a cultivated ornamental plant. It can be grown from tropical to warm temperate climates, and may handle cooler districts provided it can be protected from frost when young. In cultivation it rarely exceeds about and usually has a dense canopy. The principal attraction to gardeners is the lemon smell, which perfumes both the leaves and flowers of the tree. Lemon myrtle is a hardy plant, which tolerates all but the most poorly drained soils. It can be slow growing but responds well to slow-release fertilisers. Seedling lemon myrtles go through a shrubby, slow juvenile growth stage before developing a dominant trunk. Lemon myrtle can also be propagated from cuttings, but is slow to strike. A study of adventitious root formation in cuttings of the species found that "actively growing axillary buds, wide stems and mature leaves" are good indicators that a cutting will take root successfully and survive. A further study on temperature recommended glasshouses for growing cuttings throughout the year. Growing cuttings from mature trees bypasses the shrubby juvenile stage. Cutting propagation is also used to provide a consistent product in commercial production. In plantation cultivation the tree is typically maintained as a shrub by regular harvesting from the top and sides. Mechanical harvesting is used in commercial plantations. It is important to retain some lower branches when pruning for plant health. The harvested leaves are dried for leaf spice, or distilled for the essential oil. The majority of commercial lemon myrtle is grown in Queensland and on the north coast of New South Wales, Australia. A 2009 study suggested that drying lemon myrtle leaves at higher temperatures improves the citral content of the dried leaves, but discolours the leaves more. See also Citral Lemon verbena References Further reading APNI Australian Plant Name Index External links Australian Bushfood and Native Medicine Forum Broad range of lemon myrtle products and recipes Lemon Myrtle from Vic Cherikoff citriodora Flora of Queensland Myrtales of Australia Trees of Australia Bushfood Crops originating from Australia Medicinal plants of Australia Essential oils Taxa named by Ferdinand von Mueller
Backhousia citriodora
[ "Chemistry" ]
1,423
[ "Essential oils", "Natural products" ]
2,940,689
https://en.wikipedia.org/wiki/Glycogen%20debranching%20enzyme
The glycogen debranching enzyme, in humans, is the protein encoded by the gene AGL. This enzyme is essential for the breakdown of glycogen, which serves as a store of glucose in the body. It has separate glucosyltransferase and glucosidase activities. Together with phosphorylases, the enzyme mobilizes glucose reserves from glycogen deposits in the muscles and liver. This constitutes a major source of energy reserves in most organisms. Glycogen breakdown is highly regulated in the body, especially in the liver, by various hormones including insulin and glucagon, to maintain a homeostatic balance of blood-glucose levels. When glycogen breakdown is compromised by mutations in the glycogen debranching enzyme, metabolic diseases such as glycogen storage disease type III can result. The two steps of glycogen breakdown, glucosyltransferase and glucosidase, are performed by a single enzyme in mammals, yeast, and some bacteria, but by two distinct enzymes in E. coli and other bacteria, complicating nomenclature. Proteins that catalyze both functions are referred to as glycogen debranching enzymes (GDEs). When glucosyltransferase and glucosidase are catalyzed by distinct enzymes, "glycogen debranching enzyme" usually refers to the glucosidase enzyme. In some literature, an enzyme capable only of glucosidase activity is referred to as a debranching enzyme. Function Together with phosphorylase, glycogen debranching enzymes function in glycogen breakdown and glucose mobilization. When phosphorylase has digested a glycogen branch down to four glucose residues, it will not remove further residues. Glycogen debranching enzymes assist phosphorylase, the primary enzyme involved in glycogen breakdown, in the mobilization of glycogen stores. Phosphorylase can only cleave the α-1,4-glycosidic bonds between adjacent glucose molecules in glycogen, but branches also exist as α-1,6 linkages. When phosphorylase reaches four residues from a branching point it stops cleaving; because 1 in 10 residues is branched, cleavage by phosphorylase alone would not be sufficient to mobilize glycogen stores. Before phosphorylase can resume catabolism, debranching enzymes perform two functions: 4-α-D-glucanotransferase, or glucosyltransferase, transfers three glucose residues from the four-residue glycogen branch to a nearby branch. This exposes a single glucose residue joined to the glucose chain through an α-1,6 glycosidic linkage. Amylo-α-1,6-glucosidase, or glucosidase, cleaves the remaining α-1,6 linkage, producing glucose and a linear chain of glycogen. The mechanism by which the glucosidase cleaves the α-1,6 linkage is not fully known because the amino acids in the active site have not yet been identified. It is thought to proceed through a two-step acid-base assistance mechanism, with an oxocarbenium ion intermediate and retention of configuration in glucose. This is a common method of cleaving such bonds, with an acid below the site of hydrolysis to lend a proton and a base above to deprotonate a water molecule, which can then act as a nucleophile. These acids and bases are amino acid side chains in the active site of the enzyme. Thus the debranching enzymes, transferase and α-1,6-glucosidase, convert the branched glycogen structure into a linear one, paving the way for further cleavage by phosphorylase. Structure and activity Two enzymes In E. coli and other bacteria, glucosyltransferase and glucosidase functions are performed by two distinct proteins. In E. 
coli, glucose transfer is performed by 4-alpha-glucanotransferase, a 78.5 kDa protein coded for by the gene malQ. A second protein, referred to as debranching enzyme, performs α-1,6-glucose cleavage. This enzyme has a molecular mass of 73.6 kDa, and is coded for by the gene glgX. Activity of the two enzymes is not always necessarily coupled. In E. coli, GlgX selectively catalyzes the cleavage of 4-subunit branches, without the action of glucanotransferase. The product of this cleavage, maltotetraose, is further degraded by maltodextrin phosphorylase. E. coli GlgX is structurally similar to the protein isoamylase. The monomeric protein contains a central domain in which eight parallel beta-strands are surrounded by eight parallel alpha-helices. Notable within this structure is a groove 26 angstroms long and 9 angstroms wide, containing aromatic residues that are thought to stabilize a four-glucose branch before cleavage. The glycogen-degrading enzyme of the archaeon Sulfolobus solfataricus, TreX, provides an interesting example of using a single active site for two activities: amylosidase and glucanotransferase activities. TreX is structurally similar to GlgX, and has a mass of 80 kDa and one active site. Unlike GlgX, however, TreX exists as a dimer and tetramer in solution. TreX's oligomeric form seems to play a significant role in altering both enzyme shape and function. Dimerization is thought to stabilize a "flexible loop" located close to the active site. This may be key to explaining why TreX (and not GlgX) shows glucosyltransferase activity. As a tetramer, the catalytic efficiency of TreX is increased fourfold over its dimeric form. One enzyme with two catalytic sites In mammals and yeast, a single enzyme performs both debranching functions. The human glycogen debranching enzyme (gene: AGL) is a monomer with a molecular weight of 175 kDa. It has been shown that the two catalytic actions of AGL can function independently of each other, demonstrating that multiple active sites are present. This idea has been reinforced with inhibitors of the active site, such as polyhydroxyamine, which were found to inhibit glucosidase activity while transferase activity was not measurably changed. Glycogen debranching enzyme is the only known eukaryotic enzyme that contains multiple catalytic sites and is active as a monomer. Some studies have shown that the C-terminal half of yeast GDE is associated with glucosidase activity, while the N-terminal half is associated with glucosyltransferase activity. In addition to these two active sites, AGL appears to contain a third active site that allows it to bind to a glycogen polymer. It is thought to bind six glucose molecules of the chain as well as the branched glucose, thus corresponding to seven subunits within the active site. The structure of the Candida glabrata GDE has been reported. The structure revealed that distinct domains in GDE encode the glucanotransferase and glucosidase activities. Their catalyses are similar to those of alpha-amylase and glucoamylase, respectively. Their active sites are selective towards their respective substrates, ensuring proper activation of GDE. Besides the active sites, GDE has additional binding sites for glycogen, which are important for its recruitment to glycogen. Mapping the disease-causing mutations onto the GDE structure provided insights into glycogen storage disease type III.
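To make the two-step debranching mechanism described above concrete, here is a toy Python sketch (mine, not from the article). Glucose residues are modeled as list items and all names are illustrative; real glycogen chemistry is of course not captured by list operations.

def debranch(main_chain, branch):
    """main_chain: list of glucose residues in an alpha-1,4 chain.
    branch: a 4-residue limit branch; branch[0] is the residue
    attached to the main chain by the alpha-1,6 bond.
    Returns (extended main chain, one free glucose)."""
    assert len(branch) == 4, "phosphorylase stops 4 residues from the branch point"
    # Step 1 -- 4-alpha-D-glucanotransferase: move the outer three
    # residues onto the non-reducing end of a nearby alpha-1,4 chain.
    main_chain = main_chain + branch[1:]
    # Step 2 -- amylo-alpha-1,6-glucosidase: hydrolyse the single
    # remaining alpha-1,6-linked residue, releasing free glucose.
    free_glucose = branch[0]
    return main_chain, free_glucose

chain, glucose = debranch(["G"] * 8, ["G", "G", "G", "G"])
print(len(chain), glucose)  # 11 residues, linear again and open to phosphorylase, plus 1 free glucose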
Genetic location The official name for the gene is "amylo-α-1,6-glucosidase, 4-α-glucanotransferase", with the official symbol AGL. AGL is an autosomal gene found on chromosome 1p21. The AGL gene provides instructions for making several different versions, known as isoforms, of the glycogen debranching enzyme. These isoforms vary by size and are expressed in different tissues, such as liver and muscle. This gene has been studied in great detail, because mutation at this gene is the cause of glycogen storage disease type III. The gene is 85 kb long, has 35 exons, and encodes a 7.0 kb mRNA. Translation of the gene begins at exon 3, which encodes the first 27 amino acids of the AGL gene, because the first two exons (68 kb) contain the 5' untranslated region. Exons 4–35 encode the remaining 1505 amino acids of the AGL gene. Studies produced by the department of pediatrics at Duke University suggest that the human AGL gene contains at minimum two promoter regions, sites where the transcription of the gene begins, that result in differential expression of isoform mRNAs (different forms of the same protein) in a manner that is specific for different tissues. Clinical significance When GDE activity is compromised and the body cannot effectively release stored glycogen, type III glycogen storage disease (debrancher deficiency), an autosomal recessive disorder, can result. In GSD III glycogen breakdown is incomplete and there is accumulation of abnormal glycogen with short outer branches. Most patients exhibit GDE deficiency in both liver and muscle (Type IIIa), although 15% of patients have retained GDE in muscle while having it absent from the liver (Type IIIb). Depending on mutation location, different mutations in the AGL gene can affect different isoforms of the gene expression. For example, mutations that occur on exon 3 affect the isoform that is primarily expressed in the liver; this would lead to GSD type III. These different manifestations produce varied symptoms, which can be nearly indistinguishable from Type I GSD, including hepatomegaly, hypoglycemia in children, short stature, myopathy, and cardiomyopathy. Type IIIa patients often exhibit symptoms related to liver disease and progressive muscle involvement, with variations caused by age of onset, rate of disease progression and severity. Patients with Type IIIb generally have symptoms related to liver disease. Type III patients can be distinguished by elevated liver enzymes, with normal uric acid and blood lactate levels, differing from other forms of GSD. In patients with muscle involvement, Type IIIa, the muscle weakness becomes predominant in adulthood and can lead to ventricular hypertrophy and distal muscle wasting. References External links GeneReviews/NCBI/NIH/UW entry on Glycogen Storage Disease Type III OMIM entries on Glycogen Storage Disease Type III Carbohydrate metabolism EC 2.4.1 EC 3.2.1
Glycogen debranching enzyme
[ "Chemistry" ]
2,373
[ "Carbohydrate metabolism", "Carbohydrate chemistry", "Metabolism" ]
2,940,855
https://en.wikipedia.org/wiki/Fiber%20Bragg%20grating
A fiber Bragg grating (FBG) is a type of distributed Bragg reflector constructed in a short segment of optical fiber that reflects particular wavelengths of light and transmits all others. This is achieved by creating a periodic variation in the refractive index of the fiber core, which generates a wavelength-specific dielectric mirror. Hence a fiber Bragg grating can be used as an inline optical filter to block certain wavelengths, for sensing applications, or as a wavelength-specific reflector. History The first in-fiber Bragg grating was demonstrated by Ken Hill in 1978. Initially, the gratings were fabricated using a visible laser propagating along the fiber core. In 1989, Gerald Meltz and colleagues demonstrated the much more flexible transverse holographic inscription technique, where the laser illumination came from the side of the fiber. This technique uses the interference pattern of ultraviolet laser light to create the periodic structure of the fiber Bragg grating. Theory The fundamental principle behind the operation of an FBG is Fresnel reflection, where light traveling between media of different refractive indices may both reflect and refract at the interface. The refractive index will typically alternate over a defined length. The reflected wavelength (λB), called the Bragg wavelength, is defined by the relationship λB = 2 ne Λ, where ne is the effective refractive index of the fiber core and Λ is the grating period. The effective refractive index quantifies the velocity of propagating light as compared to its velocity in vacuum. It depends not only on the wavelength but also (for multimode waveguides) on the mode in which the light propagates. For this reason, it is also called the modal index. The wavelength spacing between the first minima (nulls), or the bandwidth (Δλ), is, in the strong grating limit, given by Δλ = λB η δn0 / ne, where δn0 is the variation in the refractive index and η is the fraction of power in the core. Note that this approximation does not apply to weak gratings, where the grating length L is not large compared to λB/δn0. The peak reflection (PB(λB)) is approximately given by PB(λB) ≈ tanh²(π N η δn0 / (2 ne)), where N is the number of periodic variations. The full equation for the reflected power (PB(λ)) is given by PB(λ) = sinh²(γL) / (cosh²(γL) − σ²/κ²), where κ = π η δn0/λ is the coupling coefficient, σ = 2π ne (1/λ − 1/λB) is the detuning from the Bragg wavelength, and γ = √(κ² − σ²). Types of gratings The term type in this context refers to the underlying photosensitivity mechanism by which grating fringes are produced in the fiber. The different methods of creating these fringes have a significant effect on physical attributes of the produced grating, particularly the temperature response and ability to withstand elevated temperatures. Thus far, five (or six) types of FBG have been reported with different underlying photosensitivity mechanisms. These are summarized below: Standard, or type I, gratings Type I gratings are usually known as standard gratings and are manufactured in fibers of all types under all hydrogenation conditions. Typically, the reflection spectrum of a type I grating is equal to 1 − T, where T is the transmission spectrum. This means that the reflection and transmission spectra are complementary and there is negligible loss of light by reflection into the cladding or by absorption. Type I gratings are the most commonly used of all grating types, and the only type of grating available off-the-shelf at the time of writing. 
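To make the relations above concrete, here is a minimal numerical sketch in Python of the Bragg condition and the approximate peak reflectivity of a uniform grating. The coupled-mode expression κ = π η δn0 / λB is the standard textbook form rather than anything specific to this article, and all numerical values are illustrative assumptions.

```python
import math

def bragg_wavelength(n_eff: float, period: float) -> float:
    """Bragg condition: lambda_B = 2 * n_eff * period."""
    return 2.0 * n_eff * period

def peak_reflectivity(delta_n: float, eta: float, length: float, lam_b: float) -> float:
    """Peak reflectivity of a uniform grating, R = tanh^2(kappa * L),
    with kappa = pi * eta * delta_n / lambda_B (coupled-mode theory)."""
    kappa = math.pi * eta * delta_n / lam_b
    return math.tanh(kappa * length) ** 2

# Illustrative values (assumptions): germanosilicate fiber near 1550 nm.
n_eff = 1.447          # effective refractive index
period = 535.6e-9      # grating period (m)
delta_n = 1e-4         # induced index modulation
eta = 0.8              # fraction of optical power in the core
length = 10e-3         # grating length: 10 mm

lam_b = bragg_wavelength(n_eff, period)
print(f"Bragg wavelength : {lam_b * 1e9:.1f} nm")
print(f"Peak reflectivity: {peak_reflectivity(delta_n, eta, length, lam_b):.2f}")
```

With these assumed values the grating reflects near 1550 nm with a peak reflectivity of roughly 85%; lengthening the grating or increasing δn0 drives the reflectivity toward unity, as the tanh² form suggests.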
Type IA gratings Regenerated gratings written after erasure of a type I grating in hydrogenated germanosilicate fiber of all types. Type IA gratings were first observed in 2001 during experiments designed to determine the effects of hydrogen loading on the formation of IIA gratings in germanosilicate fiber. In contrast to the anticipated decrease (or 'blue shift') of the gratings' Bragg wavelength, a large increase (or 'red shift') was observed. Later work showed that the increase in Bragg wavelength began once an initial type I grating had reached peak reflectivity and begun to weaken. For this reason, it was labeled as a regenerated grating. Determination of the type IA gratings' temperature coefficient showed that it was lower than that of a standard grating written under similar conditions. The key difference between the inscription of type IA and IIA gratings is that IA gratings are written in hydrogenated fibers, whereas type IIA gratings are written in non-hydrogenated fibers. Type IIA, or type In, gratings These are gratings that form as the negative part of the induced index change overtakes the positive part. It is usually associated with gradual relaxation of induced stress along the axis and/or at the interface. It has been proposed that these gratings could be relabeled type In (for type I gratings with a negative index change; the type II label could be reserved for those that are distinctly made above the damage threshold of the glass). Later research by Xie et al. showed the existence of another type of grating with similar thermal stability properties to the type II grating. This grating exhibited a negative change in the mean index of the fiber and was termed type IIA. The gratings were formed in germanosilicate fibers with pulses from a frequency-doubled, XeCl-pumped dye laser. It was shown that initial exposure formed a standard (type I) grating within the fiber, which underwent a small red shift before being erased. Further exposure showed that a grating reformed, which underwent a steady blue shift whilst growing in strength. Regenerated gratings These are gratings that are reborn at higher temperatures after erasure of gratings, usually type I gratings and usually, though not always, in the presence of hydrogen. They have been interpreted in different ways, including dopant diffusion (oxygen being the most popular current interpretation) and glass structural change. Recent work has shown that there exists a regeneration regime beyond diffusion where gratings can be made to operate at temperatures in excess of 1,295 °C, outperforming even type II femtosecond gratings. These are extremely attractive for ultra-high-temperature applications. Type II gratings Damage-written gratings inscribed by multiphoton excitation with higher intensity lasers that exceed the damage threshold of the glass. Lasers employed are usually pulsed in order to reach these intensities. They include recent developments in multiphoton excitation using femtosecond pulses, where the short timescales (commensurate with local relaxation times) offer unprecedented spatial localization of the induced change. The amorphous network of the glass is usually transformed via a different ionization and melting pathway to give either higher index changes or to create, through micro-explosions, voids surrounded by more dense glass. Archambault et al. showed that it was possible to inscribe gratings of ~100% (>99.8%) reflectance with a single UV pulse in fibers on the draw tower. 
The resulting gratings were shown to be stable at temperatures as high as 800 °C (up to 1,000 °C in some cases, and higher with femtosecond laser inscription). The gratings were inscribed using a single 40 mJ pulse from an excimer laser at 248 nm. It was further shown that a sharp threshold was evident at ~30 mJ; above this level the index modulation increased by more than two orders of magnitude, whereas below 30 mJ the index modulation grew linearly with pulse energy. For ease of identification, and in recognition of the distinct differences in thermal stability, they labeled gratings fabricated below the threshold as type I gratings and above the threshold as type II gratings. Microscopic examination of these gratings showed a periodic damage track at the grating's site within the fiber [10]; hence type II gratings are also known as damage gratings. However, these cracks can be very localized so as to not play a major role in scattering loss if properly prepared. Grating structure The structure of the FBG can vary via the refractive index or the grating period. The grating period can be uniform or graded, and either localised or distributed in a superstructure. The refractive index has two primary characteristics: the refractive index profile and the offset. Typically, the refractive index profile can be uniform or apodized, and the refractive index offset is positive or zero. There are six common structures for FBGs: uniform positive-only index change, Gaussian apodized, raised-cosine apodized, chirped, discrete phase shift, and superstructure. The first complex grating was made by J. Canning in 1994. This supported the development of the first distributed feedback (DFB) fiber lasers, and also laid the groundwork for most complex gratings that followed, including the sampled gratings first made by Peter Hill and colleagues in Australia. Apodized gratings There are basically two quantities that control the properties of the FBG. These are the grating length L, given as L = NΛ, and the grating strength, δn0 η. There are, however, three properties that need to be controlled in a FBG: the reflectivity, the bandwidth, and the side-lobe strength. As shown above, in the strong grating limit (i.e., for large δn0 η) the bandwidth depends on the grating strength, and not the grating length. This means the grating strength can be used to set the bandwidth. The grating length, effectively N, can then be used to set the peak reflectivity, which depends on both the grating strength and the grating length. The result of this is that the side-lobe strength cannot be controlled, and this simple optimisation results in significant side-lobes. A third quantity can be varied to help with side-lobe suppression. This is apodization of the refractive index change. The term apodization refers to the grading of the refractive index to approach zero at the end of the grating. Apodized gratings offer significant improvement in side-lobe suppression while maintaining reflectivity and a narrow bandwidth. The two functions typically used to apodize a FBG are Gaussian and raised-cosine. Chirped fiber Bragg gratings The refractive index profile of the grating may be modified to add other features, such as a linear variation in the grating period, called a chirp. The reflected wavelength changes with the grating period, broadening the reflected spectrum. A grating possessing a chirp has the property of adding dispersion—namely, different wavelengths reflected from the grating will be subject to different delays. 
This property has been used in the development of phased-array antenna systems and polarization mode dispersion compensation, as well. Tilted fiber Bragg gratings In standard FBGs, the grading or variation of the refractive index is along the length of the fiber (the optical axis), and is typically uniform across the width of the fiber. In a tilted FBG (TFBG), the variation of the refractive index is at an angle to the optical axis. The angle of tilt in a TFBG affects the reflected wavelength and the bandwidth. Long-period gratings As shown above, the grating period is typically half the wavelength of the light within the fiber: for a grating that reflects at 1,500 nm, the grating period is 500 nm, using a refractive index of 1.5. Longer periods can be used to achieve much broader responses than are possible with a standard FBG. These gratings are called long-period fiber gratings. They typically have grating periods on the order of 100 micrometers to a millimeter, and are therefore much easier to manufacture. Phase-shifted fiber Bragg gratings Phase-shifted fiber Bragg gratings (PS-FBGs) are an important class of grating structures with interesting applications in optical communications and sensing due to their special filtering characteristics. These types of gratings can be made reconfigurable through special packaging and system design. Different coatings are applied to the diffractive structure of fiber Bragg gratings in order to reduce the mechanical impact on the Bragg wavelength shift by a factor of 1.1–15 compared to an uncoated waveguide. Addressed fiber Bragg structures Addressed fiber Bragg structures (AFBS) are an emerging class of FBGs developed in order to simplify interrogation and enhance the performance of FBG-based sensors. The optical frequency response of an AFBS has two narrowband notches, with the frequency spacing between them lying in the radio frequency (RF) range. The frequency spacing is called the address frequency of the AFBS and is unique for each AFBS in a system. The central wavelength of an AFBS can be determined without scanning its spectral response, unlike conventional FBGs, which are probed by optoelectronic interrogators. An interrogation circuit for an AFBS is significantly simplified in comparison with conventional interrogators and consists of a broadband optical source, an optical filter with a predefined linear inclined frequency response, and a photodetector. Manufacture Fiber Bragg gratings are created by "inscribing" or "writing" a systematic (periodic or aperiodic) variation of refractive index into the core of a special type of optical fiber using an intense ultraviolet (UV) source such as a UV laser. Two main processes are used: interference and masking. The method that is preferable depends on the type of grating to be manufactured. Although polymer optical fibers started gaining research interest in the 2000s, germanium-doped silica fiber is most commonly used. The germanium-doped fiber is photosensitive, which means that the refractive index of the core changes with exposure to UV light. The amount of the change depends on the intensity and duration of the exposure as well as the photosensitivity of the fiber. To write a high-reflectivity fiber Bragg grating directly in the fiber, the level of doping with germanium needs to be high. However, standard fibers can be used if the photosensitivity is enhanced by pre-soaking the fiber in hydrogen. Interference This was the first method used widely for the fabrication of fiber Bragg gratings and uses two-beam interference. 
Here the UV laser is split into two beams which interfere with each other, creating a periodic intensity distribution along the interference pattern. The refractive index of the photosensitive fiber changes according to the intensity of light that it is exposed to. This method allows for quick and easy changes to the Bragg wavelength, which is directly related to the interference period and a function of the incident angle of the laser light. Sequential writing Complex grating profiles can be manufactured by exposing a large number of small, partially overlapping gratings in sequence. Advanced properties such as phase shifts and varying modulation depth can be introduced by adjusting the corresponding properties of the subgratings. In the first version of the method, subgratings were formed by exposure with UV pulses, but this approach had several drawbacks, such as large energy fluctuations in the pulses and low average power. A sequential writing method with continuous UV radiation that overcomes these problems has been demonstrated and is now used commercially. The photosensitive fiber is translated by an interferometrically controlled air-bearing carriage. The interfering UV beams are focused onto the fiber, and as the fiber moves, the fringes move along the fiber by translating mirrors in an interferometer. As the mirrors have a limited range, they must be reset every period, and the fringes move in a sawtooth pattern. All grating parameters are accessible in the control software, and it is therefore possible to manufacture arbitrary grating structures without any changes in the hardware. Photomask A photomask having the intended grating features may also be used in the manufacture of fiber Bragg gratings. The photomask is placed between the UV light source and the photosensitive fiber. The shadow of the photomask then determines the grating structure based on the transmitted intensity of light striking the fiber. Photomasks are specifically used in the manufacture of chirped fiber Bragg gratings, which cannot be manufactured using an interference pattern. Point-by-point A single UV laser beam may also be used to 'write' the grating into the fiber point-by-point. Here, the laser has a narrow beam that is equal to the grating period. The main difference of this method lies in the interaction mechanisms between infrared laser radiation and the dielectric material: multiphoton absorption and tunnel ionization. This method is specifically applicable to the fabrication of long-period fiber gratings. Point-by-point writing is also used in the fabrication of tilted gratings. Production Originally, the manufacture of the photosensitive optical fiber and the 'writing' of the fiber Bragg grating were done separately. Today, production lines typically draw the fiber from the preform and 'write' the grating, all in a single stage. As well as reducing associated costs and time, this also enables the mass production of fiber Bragg gratings. Mass production in particular facilitates applications in smart structures utilizing large numbers (3,000) of embedded fiber Bragg gratings along a single length of fiber. Applications Communications The primary application of fiber Bragg gratings is in optical communications systems. They are specifically used as notch filters. They are also used in optical multiplexers and demultiplexers with an optical circulator, or in an optical add-drop multiplexer (OADM). Consider, for example, four channels impinging onto an FBG via an optical circulator. 
The FBG is set to reflect one of the channels, here channel 4. The signal is reflected back to the circulator, where it is directed down and dropped out of the system. Since the channel has been dropped, another signal on that channel can be added at the same point in the network. A demultiplexer can be achieved by cascading multiple drop sections of the OADM, where each drop element uses an FBG set to the wavelength to be demultiplexed. Conversely, a multiplexer can be achieved by cascading multiple add sections of the OADM. FBG demultiplexers and OADMs can also be tunable. In a tunable demultiplexer or OADM, the Bragg wavelength of the FBG can be tuned by strain applied by a piezoelectric transducer. The sensitivity of a FBG to strain is discussed below in fiber Bragg grating sensors. Fiber Bragg grating sensors As well as being sensitive to strain, the Bragg wavelength is also sensitive to temperature. This means that fiber Bragg gratings can be used as sensing elements in optical fiber sensors. In a FBG sensor, the measurand causes a shift in the Bragg wavelength, ΔλB. The relative shift in the Bragg wavelength, ΔλB/λB, due to an applied strain (ε) and a change in temperature (ΔT) is approximately given by ΔλB/λB = CS ε + CT ΔT, or ΔλB/λB = (1 − pe) ε + (αΛ + αn) ΔT. Here, CS is the coefficient of strain, which is related to the strain optic coefficient pe. Also, CT is the coefficient of temperature, which is made up of the thermal expansion coefficient of the optical fiber, αΛ, and the thermo-optic coefficient, αn. Fiber Bragg gratings can then be used as direct sensing elements for strain and temperature. They can also be used as transduction elements, converting the output of another sensor which generates a strain or temperature change from the measurand; for example, fiber Bragg grating gas sensors use an absorbent coating that expands in the presence of a gas, generating a strain which is measurable by the grating. Technically, the absorbent material is the sensing element, converting the amount of gas to a strain. The Bragg grating then transduces the strain to a change in wavelength. Specifically, fiber Bragg gratings are finding uses in instrumentation applications such as seismology, pressure sensors for extremely harsh environments, and as downhole sensors in oil and gas wells for measurement of the effects of external pressure, temperature, seismic vibrations and inline flow. As such, they offer a significant advantage over traditional electronic gauges used for these applications in that they are less sensitive to vibration or heat and consequently are far more reliable. In the 1990s, investigations were conducted for measuring strain and temperature in composite materials for aircraft and helicopter structures. Fiber Bragg gratings used in fiber lasers Recently, the development of high-power fiber lasers has generated a new set of applications for fiber Bragg gratings (FBGs), operating at power levels that were previously thought impossible. In the case of a simple fiber laser, the FBGs can be used as the high reflector (HR) and output coupler (OC) to form the laser cavity. The gain for the laser is provided by a length of rare-earth-doped optical fiber, the most common form using Yb3+ ions as the active lasing ion in the silica fiber. These Yb-doped fiber lasers first operated at the 1 kW CW power level in 2004 based on free-space cavities, but were not shown to operate with fiber Bragg grating cavities until much later. Such monolithic, all-fiber devices are produced by many companies worldwide and at power levels exceeding 1 kW. 
The major advantage of these all-fiber systems, where the free-space mirrors are replaced with a pair of fiber Bragg gratings (FBGs), is the elimination of realignment during the life of the system, since the FBG is spliced directly to the doped fiber and never needs adjusting. The challenge is to operate these monolithic cavities at the kW CW power level in large mode area (LMA) fibers such as 20/400 (20 μm diameter core and 400 μm diameter inner cladding) without premature failures at the intra-cavity splice points and the gratings. Once optimized, these monolithic cavities do not need realignment during the life of the device, removing any cleaning and degradation of the fiber surface from the maintenance schedule of the laser. However, the packaging and optimization of the splices and FBGs themselves are non-trivial at these power levels, as is the matching of the various fibers, since the composition of the Yb-doped fiber and the various passive and photosensitive fibers needs to be carefully matched across the entire fiber laser chain. Although the power handling capability of the fiber itself far exceeds this level, and is possibly as high as >30 kW CW, the practical limit is much lower due to component reliability and splice losses. Process of matching active and passive fibers In a double-clad fiber there are two waveguides – the Yb-doped core that forms the signal waveguide and the inner cladding waveguide for the pump light. The inner cladding of the active fiber is often shaped to scramble the cladding modes and increase pump overlap with the doped core. The matching of active and passive fibers for improved signal integrity requires optimization of the core/clad concentricity and of the mode-field diameter (MFD) through the core diameter and numerical aperture (NA), which reduces splice loss. This is principally achieved by tightening all of the pertinent fiber specifications. Matching fibers for improved pump coupling requires optimization of the clad diameter for both the passive and the active fiber. To maximize the amount of pump power coupled into the active fiber, the active fiber is designed with a slightly larger clad diameter than the passive fibers delivering the pump power. As an example, passive fibers with a clad diameter of 395 μm spliced to an active octagon-shaped fiber with a clad diameter of 400 μm improve the coupling of the pump power into the active fiber. The matching of active and passive fibers can be optimized in several ways. The easiest method for matching the signal-carrying light is to have identical NA and core diameters for each fiber. This, however, does not account for all the refractive index profile features. Matching of the MFD is also a method used to create matched signal-carrying fibers. It has been shown that matching all of these components provides the best set of fibers to build high-power amplifiers and lasers. Essentially, the MFD is modeled and the resulting target NA and core diameter are developed. The core rod is made, and before being drawn into fiber, its core diameter and NA are checked. Based on the refractive index measurements, the final core/clad ratio is determined and adjusted to the target MFD. This approach accounts for details of the refractive index profile, which can be measured easily and with high accuracy on the preform, before it is drawn into fiber. 
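As a numerical illustration of the sensing relation ΔλB/λB = CS ε + CT ΔT given in the sensors section above, here is a rough sketch using typical literature values for silica fiber near 1550 nm (about 1.2 pm per microstrain and about 10 pm per kelvin). These coefficients are assumptions for the demonstration, not values taken from this article.

```python
# Linear FBG sensor model near 1550 nm. The sensitivities below are
# typical textbook values for silica fiber (assumed, not from the text).
STRAIN_SENS_PM_PER_MICROSTRAIN = 1.2   # pm shift per microstrain
TEMP_SENS_PM_PER_KELVIN = 10.0         # pm shift per kelvin

def bragg_shift_pm(microstrain: float, delta_t: float) -> float:
    """Total Bragg wavelength shift in picometres."""
    return (STRAIN_SENS_PM_PER_MICROSTRAIN * microstrain
            + TEMP_SENS_PM_PER_KELVIN * delta_t)

# Example: 100 microstrain of tension plus a 5 K temperature rise.
print(f"{bragg_shift_pm(100.0, 5.0):.0f} pm")  # 170 pm total shift
```

The two terms are additive in this linear model, which is why practical FBG strain sensors pair the measurement grating with an unstrained reference grating to separate the temperature contribution.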
See also Bragg's law Dielectric mirror Diffraction Diffraction grating Distributed temperature sensing by fiber optics Hydrogen sensor Long-period fiber grating PHOSFOS project – embedding FBGs in flexible skins Photonic crystal fiber References External links FOSNE - Fibre Optic Sensing Network Europe Bragg gratings in Subsea infrastructure monitoring Fiber optics Diffraction
Fiber Bragg grating
[ "Physics", "Chemistry", "Materials_science" ]
5,232
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
2,940,858
https://en.wikipedia.org/wiki/Rejuvenation
Rejuvenation is a medical discipline focused on the practical reversal of the aging process. Rejuvenation is distinct from life extension. Life extension strategies often study the causes of aging and try to oppose those causes in order to slow aging. Rejuvenation is the reversal of aging and thus requires a different strategy, namely repair of the damage that is associated with aging or replacement of damaged tissue with new tissue. Rejuvenation can be a means of life extension, but most life extension strategies do not involve rejuvenation. Historical and cultural background Various myths tell stories about the quest for rejuvenation. It was believed that magic or the intervention of a supernatural power could bring back youth, and many mythical adventurers set out on a journey to do that, for themselves, their relatives or some authority that sent them anonymously. An ancient Chinese emperor actually sent out ships of young men and women to find a pearl that would rejuvenate him. This led to a myth among modern Chinese that Japan was founded by these people. In some religions, people were to be rejuvenated after death prior to being placed in heaven. The stories continued well into the 16th century. The Spanish explorer Juan Ponce de León led an expedition around the Caribbean islands and into Florida to find the Fountain of Youth. Guided by the rumors, the expedition continued the search, and many perished. The Fountain was nowhere to be found, as locals were unaware of its exact location. Since the emergence of philosophy, sages and self-proclaimed wizards have made enormous efforts to find the secret of youth, both for themselves and for their noble patrons and sponsors. It was widely believed that some potions might restore youth. Another commonly cited approach was attempting to transfer the essence of youth from young people to old. Some examples of this approach were sleeping with virgins or children (sometimes literally sleeping, not necessarily having sex), and bathing in or drinking their blood. The quest for rejuvenation reached its height with alchemy. All around Europe, and also beyond, alchemists were looking for the Philosopher's Stone, the mythical substance that, it was believed, could not only turn lead into gold but also prolong life and restore youth. Although the set goal was not achieved, alchemy paved the way to the scientific method and so to the medical advances of today. Serge Abrahamovitch Voronoff was a French surgeon born in Russia who gained fame for his technique of grafting monkey testicle tissue on to the testicles of men while working in France in the 1920s and 1930s. This was one of the first medically accepted rejuvenation therapies, before it was shown to be ineffective around 1930–1940. The technique brought him a great deal of money, although he was already independently wealthy. As his work fell out of favor, he went from being a highly respected surgeon to a subject of ridicule. By the early 1930s, over 500 men had been treated in France by his rejuvenation technique, and thousands more around the world, such as in a special clinic set up in Algiers. Noteworthy people who had the surgery included Harold McCormick, chairman of the board of International Harvester Company, and the aging premier of Turkey. Rejuvenation technology and its effects on individuals and society have long been a subject of science fiction. The Misspent Youth and Commonwealth Saga by Peter F. 
Hamilton are among the most well-known examples of this, dealing with the short- and long-term effects of a near-perfect 80-year-old to 20-year-old body change with the mind intact. The less perfect rejuvenation featured in the Mars trilogy by Kim Stanley Robinson results in long-term memory loss and the sheer boredom that comes with extreme age. The post-mortal characters in the Revelation Space series have long-term or essentially infinite lifespans, and sheer boredom induces them to undertake activities of extreme risk. Modern developments Aging is the accumulation of damage to macromolecules, cells, tissues and organs in and on the body which, when it can no longer be tolerated by an organism, ultimately leads to its death. If any of that damage can be repaired, the result is rejuvenation. Many experiments have been shown to increase the maximum life span of laboratory animals, thereby achieving life extension. A few experimental methods, such as replacing hormones to youthful levels, have had considerable success in partially rejuvenating laboratory animals and humans. A 2011 experiment involved breeding genetically manipulated mice that lacked an enzyme called telomerase, causing the mice to age prematurely and suffer ailments. When the mice were given injections to reactivate the enzyme, it repaired the damaged tissues and reversed the signs of aging. There are at least eight important hormones that decline with age: 1. human growth hormone (HGH); 2. the sexual hormones: testosterone or oestrogen/progesterone; 3. erythropoietin (EPO); 4. insulin; 5. DHEA; 6. melatonin; 7. thyroid; 8. pregnenolone. In theory, if all or some of these hormones are replaced, the body will respond to them as it did when it was younger, thus repairing and restoring many body functions. In line with this, recent experiments show that heterochronic parabiosis, i.e. connecting the circulatory systems of a young and an old animal, leads to the rejuvenation of the old animal, including restoration of proper stem cell function. Similar experiments show that grafting old muscles into young hosts leads to their complete restoration, whereas grafting young muscles into old hosts does not. These experiments show that aging is mediated by the systemic environment, rather than being an intrinsic cell property. Clinical trials based on transfusion of young blood were scheduled to begin in 2014. Another intervention that is gaining popularity is epigenetic reprogramming. Through the use of Yamanaka factors, aged cells can revert to a younger state. It has been demonstrated that reprogramming induces a youthful epigenetic state and can restore vision after injury. Only through reprogramming were stochastic epigenetic variations, which accumulate with age, successfully reversed, as demonstrated by a stochastic data-based clock. Most attempts at genetic repair have traditionally involved the use of a retrovirus to insert a new gene into a random position on a chromosome. But by attaching zinc fingers (which determine where transcription factors bind) to endonucleases (which break DNA strands), homologous recombination can be induced to correct and replace defective (or undesired) DNA sequences. The first applications of this technology are to isolate stem cells from the bone marrow of patients with blood disease mutations, to correct those mutations in laboratory dishes using zinc finger endonucleases, and to transplant the stem cells back into the patients. 
More recent efforts leverage CRISPR-Cas systems or adeno-associated viruses (AAVs). Enhanced DNA repair has been proposed as a potential rejuvenation strategy; see the DNA damage theory of aging. Stem cell regenerative medicine uses three different strategies: Implantation of stem cells from culture into an existing tissue structure Implantation of stem cells into a tissue scaffold that guides restoration Induction of residual cells of a tissue structure to regenerate the necessary body part A salamander can regenerate not only a limb, but also the lens or retina of an eye, and even an intestine. For regeneration, salamander tissues form a blastema by de-differentiation of mesenchymal cells, and the blastema functions as a self-organizing system to regenerate the limb. Yet another option involves cosmetic changes to the individual to create the appearance of youth. These are generally superficial and do little to make the person healthier or live longer, but the real improvement in a person's appearance may elevate their mood and have the positive side effects normally correlated with happiness. Cosmetic surgery is a large industry offering treatments such as removal of wrinkles ("face lift"), removal of extra fat (liposuction) and reshaping or augmentation of various body parts (abdomen, breasts, face). There are also, as commonly found throughout history, many fake rejuvenation products that have been shown to be ineffective. Chief among these are powders, sprays, gels, and homeopathic substances that claim to contain growth hormones. Authentic growth hormones are only effective when injected, mainly because the 191-amino acid protein is too large to be absorbed through the mucous membranes and would be broken up in the stomach if swallowed. The Mprize scientific competition is under way to deliver on the mission of extending healthy human life. It directly accelerates the development of revolutionary new life extension therapies by awarding two cash prizes: one to the research team that breaks the world record for the oldest-ever mouse, and one to the team that develops the most successful late-onset rejuvenation strategy. The current Mprize winner for rejuvenation is Stephen Spindler. Caloric restriction (CR), the consumption of fewer calories while avoiding malnutrition, has been applied as a robust method of decelerating aging and the development of age-related diseases. Strategies for engineered negligible senescence The biomedical gerontologist Aubrey de Grey has initiated a project, strategies for engineered negligible senescence (SENS), to study how to reverse the damage caused by aging. He has proposed seven strategies for what he calls the seven deadly sins of aging: Cell loss can be repaired (reversed) just by suitable exercise in the case of muscle. For other tissues it needs various growth factors to stimulate cell division, or in some cases it needs stem cells. Senescent cells can be removed by activating the immune system against them. Or they can be destroyed by gene therapy to introduce "suicide genes" that only kill senescent cells. Protein cross-linking can largely be reversed by drugs that break the links. But to break some of the cross-links we may need to develop enzymatic methods. Extracellular garbage (like amyloid) can be eliminated by vaccination that gets immune cells to "eat" the garbage. 
For intracellular junk we need to introduce new enzymes, possibly enzymes from soil bacteria, that can degrade the junk (lipofuscin) that our own natural enzymes cannot degrade. For mitochondrial mutations the plan is not to repair them but to prevent harm from the mutations by putting suitably modified copies of the mitochondrial genes into the cell nucleus by gene therapy. The mitochondrial DNA experiences a high degree of mutagenic damage because most free radicals are generated in the mitochondria. A copy of the mitochondrial DNA located in the nucleus will be better protected from free radicals, and there will be better DNA repair when damage occurs. All mitochondrial proteins would then be imported into the mitochondria. For cancer (the most lethal consequence of mutations) the strategy is to use gene therapy to delete the genes for telomerase and to eliminate telomerase-independent mechanisms of turning normal cells into "immortal" cancer cells. To compensate for the loss of telomerase in stem cells, we would introduce new stem cells every decade or so. In 2009, Aubrey de Grey co-founded the SENS Foundation to expedite progress in the above-listed areas. Scientific journal Rejuvenation Research. Editor: Aubrey de Grey. Publisher: Mary Ann Liebert, Inc. ISSN 1549-1684. Published bimonthly. See also Aging brain American Academy of Anti-Aging Medicine Anti-aging movement Biogerontology Biological immortality DNA repair DNA damage theory of aging Eternal youth Facial rejuvenation Fountain of Youth Hayflick limit Immortality Indefinite lifespan Kayakalpa Life extension Maximum life span Nanomedicine Photorejuvenation SAGE KE Senescence Shunamitism Telomere Telomerase Tissue engineering Therapeutic cloning References External links Life extension Transhumanism Concepts in alternative medicine Senescence
Rejuvenation
[ "Chemistry", "Technology", "Engineering", "Biology" ]
2,490
[ "Genetic engineering", "Transhumanism", "Senescence", "Cellular processes", "Ethics of science and technology", "Metabolism" ]
2,940,886
https://en.wikipedia.org/wiki/Hydroperoxyl
The hydroperoxyl radical, also known as hydrogen superoxide, is the protonated form of superoxide, with the chemical formula HO2, also written HOO•. This species plays an important role in the atmosphere and as a reactive oxygen species in cell biology. Structure and reactions The molecule has a bent structure. The superoxide anion, O2•−, and the hydroperoxyl radical exist in equilibrium in aqueous solution: HO2• ⇌ O2•− + H+ The pKa of HO2 is 4.88. Therefore, about 0.3% of any superoxide present in the cytosol of a typical cell is in the protonated form. It oxidizes nitric oxide to nitrogen dioxide: HO2• + NO → NO2 + •OH Reactive oxygen species in biology Together with its conjugate base superoxide, hydroperoxyl is an important reactive oxygen species. Unlike O2•−, which has reducing properties, HO2• can act as an oxidant in a number of biologically important reactions, such as the abstraction of hydrogen atoms from tocopherol and polyunsaturated fatty acids in the lipid bilayer. As such, it may be an important initiator of lipid peroxidation. Importance for atmospheric chemistry Gaseous hydroperoxyl is involved in reaction cycles that destroy stratospheric ozone. It is also present in the troposphere, where it is essentially a byproduct of the oxidation of carbon monoxide and of hydrocarbons by the hydroxyl radical. Because the dielectric constant has a strong effect on pKa, and the dielectric constant of air is quite low, superoxide produced (photochemically) in the atmosphere is almost exclusively present as HO2•. As hydroperoxyl is quite reactive, it acts as a "cleanser" of the atmosphere by degrading certain organic pollutants. As such, the chemistry of HO2• is of considerable geochemical importance. References Free radicals Oxoacids Reactive oxygen species Superoxides
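The figure of about 0.3% quoted above follows directly from the pKa via the Henderson–Hasselbalch relation; a minimal check in Python, assuming a typical cytosolic pH of about 7.4 (the pH value is an assumption, the pKa is from the text):

```python
# Fraction of total superoxide present as HO2 at a given pH, from the
# Henderson-Hasselbalch relation with the pKa of 4.88 quoted above.
PKA = 4.88

def fraction_protonated(ph: float) -> float:
    ratio = 10 ** (PKA - ph)      # [HO2] / [superoxide anion]
    return ratio / (1.0 + ratio)

# An assumed cytosolic pH of ~7.4 reproduces the ~0.3% figure in the text.
print(f"{fraction_protonated(7.4):.2%}")
```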
Hydroperoxyl
[ "Chemistry", "Biology" ]
400
[ "Senescence", "Free radicals", "Biomolecules" ]
2,940,900
https://en.wikipedia.org/wiki/Carbonitriding
Carbonitriding is a metallurgical surface modification technique that is used to increase the surface hardness of a metal, thereby reducing wear. During the process, atoms of carbon and nitrogen diffuse interstitially into the metal, creating barriers to slip and increasing the hardness and modulus near the surface. Carbonitriding is often applied to inexpensive, easily machined low-carbon steel to impart the surface properties of more expensive and difficult-to-work grades of steel. Surface hardness of carbonitrided parts ranges from 55 to 62 HRC. Certain pre-industrial case hardening processes include not only carbon-rich materials such as charcoal but also nitrogen-rich materials such as urea, which implies that traditional surface hardening techniques were a form of carbonitriding. Process Carbonitriding is similar to gas carburization, with the addition of ammonia to the carburizing atmosphere, which provides a source of nitrogen. Nitrogen is absorbed at the surface and diffuses into the workpiece along with carbon. Carbonitriding (around 850 °C / 1550 °F) is carried out at temperatures substantially higher than plain nitriding (around 530 °C / 990 °F) but slightly lower than those used for carburizing (around 950 °C / 1700 °F), and for shorter times. Carbonitriding tends to be more economical than carburizing, and also reduces distortion during quenching. The lower temperature allows oil quenching, or even gas quenching with a protective atmosphere. Characteristics of carbonitrided parts Carbonitriding forms a hard, wear-resistant case, typically 0.07 mm to 0.5 mm thick, which generally has higher hardness than a carburized case. Case depth is tailored to the application; a thicker case increases the wear life of the part. Carbonitriding alters only the top layers of the workpiece and does not deposit an additional layer, so the process does not significantly alter the dimensions of the part. Maximum case depth is typically restricted to 0.75 mm; case depths greater than this take too long to diffuse to be economical. Shorter processing times are preferred to restrict the concentration of nitrogen in the case, as nitrogen addition is more difficult to control than carbon. An excess of nitrogen in the workpiece can cause high levels of retained austenite and porosity, which are undesirable in producing a part of high hardness. Advantages Carbonitriding has other advantages over carburizing: it has greater resistance to softening during tempering, and increased fatigue and impact strength. It is possible to use carbonitriding and carburizing together to obtain deeper case depths and therefore better performance of the part in industry. This combined method is applied particularly to steels with low case hardenability, such as valve seats. The process applied is initially carburizing to the required case depth (up to 2.5 mm) at around 900–955 °C, and then carbonitriding to achieve the required carbonitrided case depth. The parts are then oil quenched, and the resulting part has a harder case than could be achieved by carburization alone, and the addition of the carbonitrided layer increases the residual compressive stresses in the case such that the contact fatigue resistance and strength gradient are both increased. Studies have also shown that carbonitriding improves corrosion resistance. 
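Because the case grows by diffusion, its depth scales roughly with the square root of the treatment time, which is one reason case depths beyond about 0.75 mm quickly become uneconomical. The sketch below only illustrates that parabolic scaling; the rate constant is a hypothetical value chosen for the example, since real rates depend on temperature, atmosphere and alloy and are determined empirically.

```python
import math

# Parabolic (diffusion-limited) case growth: depth ~ k * sqrt(time).
# k is a hypothetical rate constant chosen so that ~0.25 mm of case
# forms in 2 hours; real values depend on temperature, atmosphere and
# alloy, and are determined empirically.
K_MM_PER_SQRT_HOUR = 0.25 / math.sqrt(2.0)

def case_depth_mm(hours: float) -> float:
    return K_MM_PER_SQRT_HOUR * math.sqrt(hours)

for t in (0.5, 1.0, 2.0, 4.0, 8.0):
    # Doubling the depth requires roughly four times the treatment time.
    print(f"{t:4.1f} h -> {case_depth_mm(t):.2f} mm")
```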
Applications Typical applications for case hardening are gear teeth, cams, shafts, bearings, fasteners, pins, hydraulic piston rods, automotive clutch plates, tools, dies and tillage tools. See also Differential hardening Nitridization Quench polish quench Surface engineering References Metal heat treatments
Carbonitriding
[ "Chemistry" ]
772
[ "Metallurgical processes", "Metal heat treatments" ]
2,940,943
https://en.wikipedia.org/wiki/Ball%20detent
A ball detent is a simple mechanical arrangement used to hold a moving part in a temporarily fixed position relative to another part. Usually the moving parts slide with respect to each other, or one part rotates within the other. The ball is a single, usually metal, sphere that slides within a bored cylinder against the pressure of a spring. The spring pushes the ball against the other part of the mechanism, which carries the detent: this can be as simple as a hole of smaller diameter than the ball. When the hole is in line with the cylinder, the ball is partially pushed into the hole under spring pressure, holding the parts at that position. Additional force applied to the moving parts will compress the spring, causing the ball to be depressed back into its cylinder and allowing the parts to move to another position. Applications Ball detents are commonly found in the selector mechanism of a gearbox, holding the selector rods in the correct position to engage the desired gear. Other applications include clutches that slip at a preset torque; calibrated ball detent mechanisms are also typically found in a torque wrench. Ball detents are one of the mechanisms often used in folding knives to prevent unwanted opening of the blade when carrying. Ball detents were used in the Curta mechanical calculator to enforce discrete values. Use in paintball markers The term "ball detent" is also used when referring to a mechanism in paintball markers designed to prevent the paintball from rolling out of the firing chamber before being fired. Some designs are similar to those outlined above, with a cartridge utilizing a ball bearing in a bore with spring pressure. The cartridge is installed perpendicular to the barrel bore axis, just ahead of where the ball rests before being fired. Other designs use elastic rubber protrusions that block the ball until it is pushed over them by the bolt. Some designs use precisely calibrated rings or "barrel sizers" that are selected to have a slightly smaller inner diameter than the outer diameter of the paintballs being used. They rely on simple constriction of the bore to prevent paintballs from rolling through them under the force of gravity. When the marker is fired, the air pressure pushes the ball through the bore, causing it to compress enough to pass through. Paintballs have varying diameters depending on a number of factors; this type of ball detent must be sized correctly to avoid compressing the paintball too much and causing it to burst. If too large a sizer is selected, balls may roll through it. The cartridge and elastic rubber protrusion-type detents are primarily used for open-bolt markers, or on closed-bolt markers to prevent double feeding (feeding more than one ball when the bolt is open for loading). Closed-bolt markers generally use the constriction method to prevent "roll outs", a malfunction where the ball completely rolls out of the barrel, causing no paintball to be fired when the trigger is pulled. A partial roll out is when the ball rolls partially through the barrel, causing reduced velocity. See also Spring plunger References Mechanical engineering
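The holding behavior described above can be approximated with a simple force balance: the spring preload is redirected through the contact angle at the detent edge, so the lateral force needed to unseat the ball grows with the spring force and the contact angle. The sketch below ignores friction and uses made-up example numbers.

```python
import math

def release_force(spring_force: float, contact_angle_deg: float) -> float:
    """Lateral force needed to start depressing the ball, ignoring friction.

    The spring pushes the ball along its bore; at the detent edge the
    contact normal is tilted by the contact angle from the bore axis,
    so the force balance gives F_lateral = F_spring * tan(angle).
    """
    return spring_force * math.tan(math.radians(contact_angle_deg))

# Hypothetical example: 20 N spring preload, 30 degree contact angle.
print(f"{release_force(20.0, 30.0):.1f} N")  # ~11.5 N to unseat the ball
```

The tangent term is why a shallow chamfer gives a light, easily overridden detent while a steep-sided hole approaches a positive lock; calibrated devices such as torque-limiting clutches tune exactly this geometry together with the spring preload.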
Ball detent
[ "Physics", "Engineering" ]
631
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
2,940,971
https://en.wikipedia.org/wiki/Scatchard%20equation
The Scatchard equation is an equation used in molecular biology to calculate the affinity and number of binding sites of a receptor for a ligand. It is named after the American chemist George Scatchard. Equation Throughout this article, [RL] denotes the concentration of a receptor-ligand complex, [R] the concentration of free receptor, and [L] the concentration of free ligand (so that the total concentrations of the receptor and ligand are [R] + [RL] and [L] + [RL], respectively). Let n be the number of binding sites for ligand on each receptor molecule, and let r̄ represent the average number of ligands bound to a receptor. Let Kd denote the dissociation constant between the ligand and receptor. The Scatchard equation is given by r̄/[L] = (n − r̄)/Kd By plotting r̄/[L] versus r̄, the Scatchard plot shows that the slope equals −1/Kd while the x-intercept equals the number of ligand binding sites n. Derivation n=1 Ligand When each receptor has a single ligand binding site, the system is described by the binding reaction R + L ⇌ RL, with an on-rate (kon) and off-rate (koff) related to the dissociation constant through Kd = koff/kon. When the system equilibrates, kon[R][L] = koff[RL], so that the average number of ligands bound to each receptor is given by r̄ = [RL]/([R] + [RL]) = [L]/(Kd + [L]), which is the Scatchard equation for n=1. n=2 Ligands When each receptor has two ligand binding sites, the system is governed by the reactions R + L ⇌ RL and RL + L ⇌ RL2. At equilibrium, the average number of ligands bound to each receptor is given by r̄ = ([RL] + 2[RL2])/([R] + [RL] + [RL2]) = 2[L]/(Kd + [L]), which is equivalent to the Scatchard equation. General Case of n Ligands For a receptor with n binding sites that independently bind to the ligand, each binding site will have an average occupancy of [L]/(Kd + [L]). Hence, by considering all n binding sites, there will be n[L]/(Kd + [L]) ligands bound to each receptor on average, from which the Scatchard equation follows. Problems with the method The Scatchard method is less used nowadays because of the availability of computer programs that directly fit parameters to binding data. Mathematically, the Scatchard equation is related to the Eadie–Hofstee method, which is used to infer kinetic properties from enzyme reaction data. Many modern methods for measuring binding, such as surface plasmon resonance and isothermal titration calorimetry, provide additional binding parameters that are globally fit by computer-based iterative methods. References Further reading lecture with derivation (Archived version at web.archive.org) Biochemistry methods Proteins
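As a worked illustration of the analysis above, the sketch below generates ideal binding data for a receptor with assumed parameters n = 2 and Kd = 10 nM, forms the Scatchard coordinates r̄/[L] versus r̄, and recovers −1/Kd as the slope and n as the x-intercept by a linear least-squares fit. The parameter values and concentration range are invented for the demonstration.

```python
import numpy as np

# Assumed "true" parameters for the demonstration.
n_sites = 2.0    # binding sites per receptor
kd = 10e-9       # dissociation constant (M)

L = np.logspace(-10, -7, 25)       # free ligand concentrations (M)
r_bar = n_sites * L / (kd + L)     # average occupancy per receptor

# Scatchard coordinates: y = r/[L] against x = r; expect y = (n - r)/Kd.
slope, intercept = np.polyfit(r_bar, r_bar / L, 1)
print(f"Kd = {-1.0 / slope:.2e} M")          # recovers 1.0e-08
print(f"n  = {-intercept / slope:.2f}")      # x-intercept: recovers 2.00
```

On real, noisy data this linear transform distorts the error structure, which is exactly the weakness noted above and the reason direct nonlinear fitting is now preferred.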
Scatchard equation
[ "Chemistry", "Biology" ]
536
[ "Biochemistry methods", "Biomolecules by chemical classification", "Molecular biology", "Biochemistry", "Proteins" ]
2,941,110
https://en.wikipedia.org/wiki/Napoleon%27s%20problem
Napoleon's problem is a compass construction problem. In it, a circle and its center are given. The challenge is to divide the circle into four equal arcs using only a compass. Napoleon was known to be an amateur mathematician, but it is not known if he either created or solved the problem. Napoleon's friend the Italian mathematician Lorenzo Mascheroni introduced the limitation of using only a compass (no straight edge) into geometric constructions. But actually, the challenge above is easier than the real Napoleon's problem, which consists of finding the center of a given circle with compass alone. The following sections describe solutions to three problems and proofs that they work. Georg Mohr's 1672 book "Euclides Danicus" anticipated Mascheroni's idea, though the book was only rediscovered in 1928. Dividing a given circle into four equal arcs given its centre Centred on any point X on circle C, draw an arc through O (the centre of C) which intersects C at points V and Y. Do the same centred on Y through O, intersecting C at X and Z. Note that the line segments OV, OX, OY, OZ, VX, XY, YZ have the same length, all distances being equal to the radius of the circle C. Now draw an arc centred on V which goes through Y and an arc centred on Z which goes through X; call where these two arcs intersect T. Note that the distances VY and XZ are √3 times the radius of the circle C. Put the compass radius equal to the distance OT (√2 times the radius of the circle C) and draw an arc centred on Z which intersects the circle C at U and W. UVWZ is a square, and the arcs of C UV, VW, WZ, and ZU are each equal to a quarter of the circumference of C. Finding the centre of a given circle Let (C) be the circle whose centre is to be found. Let A be a point on (C). A circle (C1) centered at A meets (C) at B and B'. Two circles (C2) centered at B and B', with radius AB, cross again at point C. A circle (C3) centered at C with radius AC meets (C1) at D and D'. Two circles (C4) centered at D and D' with radius AD meet at A, and at O, the sought center of (C). Note: for this to work the radius of circle (C1) must be neither too small nor too large. More precisely, this radius must be between half and double of the radius of (C): if the radius is greater than the diameter of (C), (C1) will not intersect (C); if the radius is shorter than half the radius of (C), point C will be between A and O and (C3) will not intersect (C1). Proof The idea behind the proof is to construct, with compass alone, the length b²/a when lengths a and b are known, and a/2 ≤ b ≤ 2a. Consider a circle of radius a centred at O; on it a point A is chosen, from which points B and B' can be determined such that AB and AB' have a length of b. Point A' lies opposite A, but does not need to be constructed (it would require a straightedge); similarly point H is the (virtual) intersection of AA' and BB'. Point C can be determined from B and B', using circles of radius b. Triangle ABA' has a right angle at B, and BH is perpendicular to AA', so AH/AB = AB/AA', hence AH = b²/(2a). Since C is the reflection of A in the line BB', AC = 2·AH = b²/a. In the above construction of the center, such a configuration appears twice: points A, B and B' are on the circle (C), radius a = r; AB, AB', BC, and B'C are equal to b = R, so AC = R²/r; points A, D and D' are on the circle of centre C, radius AC = R²/r; DA, D'A, DO, and D'O are equal to b = R, so AO = R²/(R²/r) = r. Therefore, O is the centre of circle (C). Finding the middle of a given distance or of a line segment Let |AD| be the distance whose centre is to be found. 
Two circles (C1) centered at A and (C2) centered at D with radius |AD| meet at B and B'. A circle (C3) centered at B' with radius |B'B| meets the circle (C2) at A'. A circle (C4) centered at A' with radius |A'A| meets the circle (C1) at E and E'. Two circles (C5) centered at E and (C6) centered at E' with radius |EA| meet at A and O. O is the sought center of |AD|. The design principle can also be applied to a line segment AD. The proof described above is also applicable for this design. Note: point A in the design is equivalent to A in the proof. Therefore the radius (C2) ≙ (C), and the points O ≙ H, B ≙ B, D ≙ O and A' ≙ A'. See also Mohr–Mascheroni theorem Napoleon's theorem Napoleon points References Euclidean plane geometry Mathematical problems Articles containing proofs
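Both constructions above are easy to verify numerically. The sketch below follows the quartering construction step by step on a unit circle, using a small circle–circle intersection helper, and confirms that OT = √2·r and that U, V, W and Z sit a quarter-circle apart.

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Both intersection points of two circles (assumed to intersect)."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to chord midpoint
    h = math.sqrt(r1**2 - a**2)            # half the chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ux, uy = (y2 - y1) / d, -(x2 - x1) / d # unit normal to the center line
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

O, r = (0.0, 0.0), 1.0
X = (1.0, 0.0)
V, Y = circle_intersections(X, r, O, r)            # arc centred X through O
Z = next(p for p in circle_intersections(Y, r, O, r)
         if math.hypot(p[0] - X[0], p[1] - X[1]) > 1e-9)  # intersection != X
vy = math.hypot(V[0] - Y[0], V[1] - Y[1])          # equals sqrt(3) * r
T = circle_intersections(V, vy, Z, vy)[0]
print(f"OT = {math.hypot(*T):.6f}, sqrt(2) = {math.sqrt(2):.6f}")
U, W = circle_intersections(Z, math.hypot(*T), O, r)
for name, p in (("U", U), ("V", V), ("W", W), ("Z", Z)):
    # The four points come out 90 degrees apart on the circle.
    print(name, round(math.degrees(math.atan2(p[1], p[0])), 1))
```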
Napoleon's problem
[ "Mathematics" ]
1,122
[ "Mathematical problems", "Articles containing proofs", "Planes (geometry)", "Euclidean plane geometry" ]
2,941,182
https://en.wikipedia.org/wiki/Japanese%20input%20method
Japanese input methods are used to input Japanese characters on a computer. There are two main methods of inputting Japanese on computers. One is via a romanized version of Japanese called rōmaji (literally "Roman character"), and the other is via keyboard keys corresponding to the Japanese kana. Some systems may also work via a graphical user interface, or GUI, where the characters are chosen by clicking on buttons or image maps. Japanese keyboards Japanese keyboards have both hiragana and Roman letters indicated. The JIS, or Japanese Industrial Standard, keyboard layout keeps the Roman letters in the English QWERTY layout, with numbers above them. Many of the non-alphanumeric symbols are the same as on English-language keyboards, but some symbols are located in other places. The hiragana symbols are also ordered in a consistent way across different keyboards. For example, the Q, W, E, R, T, Y keys correspond to た, て, い, す, か, ん (ta, te, i, su, ka, and n) respectively when the computer is used for direct hiragana input. Input keys Since Japanese input requires switching between Roman and hiragana entry modes, and also conversion between hiragana and kanji (as discussed below), there are usually several special keys on the keyboard. This varies from computer to computer, and some OS vendors have striven to provide a consistent user interface regardless of the type of keyboard being used. On non-Japanese keyboards, option- or control-key sequences can perform all of the tasks mentioned below. On most Japanese keyboards, one key switches between Roman characters and Japanese characters. Sometimes, each mode (Roman and Japanese) may even have its own key, in order to prevent ambiguity when the user is typing quickly. There may also be a key to instruct the computer to convert the latest hiragana characters into kanji, although usually the space key serves the same purpose since Japanese writing doesn't use spaces. Keyboards with multiple forms of writing may have a mode key to switch between them. Hiragana, katakana, halfwidth katakana, halfwidth Roman letters, and fullwidth Roman letters are some of the options. A typical Japanese character is square, while Roman characters are typically variable in width. Since all Japanese characters occupy the space of a square box, it is sometimes desirable to input Roman characters in the same square form in order to preserve the grid layout of the text. These Roman characters that have been fitted to a square character cell are called fullwidth, while the normal ones are called halfwidth. In some fonts these are fitted to half-squares, like some monospaced fonts, while in others they are not. Often, fonts are available in two variants, one with the halfwidth characters monospaced, and another one with proportional halfwidth characters. The name of the typeface with proportional halfwidth characters is often prefixed with "P" for "proportional". Finally, a keyboard may have a special key to tell the OS that the last kana entered should not be converted to kanji. Sometimes this is just the Return/Enter key. On Microsoft Windows platforms, changing a physical keyboard from a US English keyboard (101 keys) to a Japanese keyboard (106 keys), or vice versa, may require modifying the Registry to ensure that symbols such as @ can be input correctly. Thumb-shift keyboards A thumb-shift keyboard is an alternative design, popular among professional Japanese typists. 
Thumb-shift keyboards A thumb-shift keyboard is an alternative design, popular among professional Japanese typists. Like a standard Japanese keyboard, it has hiragana characters marked in addition to Latin letters, but the layout is completely different. Most letter keys have two kana characters associated with them, which allows all the characters to fit in three rows, as in Western layouts. In the place of the space bar key on a conventional keyboard, there are two additional modifier keys operated with the thumbs: one of them is used to enter the alternate character marked, and the other is used for voiced sounds. The semi-voiced sounds are entered using either the conventional shift key operated by the little finger, or the voiced-sound thumb key in the case of characters that have no voiced variant. The kana-to-kanji conversion is done in the same way as when using any other type of keyboard. There are dedicated conversion keys on some designs, while on others the thumb-shift keys double as such. Rōmaji input As an alternative to direct input of kana, a number of Japanese input method editors allow Japanese text to be entered using rōmaji, which can then be converted to kana or kanji. This method does not require the use of a Japanese keyboard with kana markings. Mobile phones Keitai input The primary system used to input Japanese on earlier generations of mobile phones is based on the numerical keypad. Each number is associated with a particular sequence of kana, such as ka, ki, ku, ke, ko for '2', and the button is pressed repeatedly to get the correct kana: each key corresponds to a column in the gojūon (the 5 row × 10 column grid of kana), while the number of presses determines the row. Dakuten and handakuten marks, punctuation, and other symbols can be added by other buttons in the same way. Kana to kanji conversion is done via the arrow and other keys. Flick input Flick input is a Japanese input method used on smartphones. The key layout is the same as in keitai input, but rather than pressing a key repeatedly, the user can swipe from the key in a certain direction to produce the desired character. Japanese smartphone IMEs such as Google Japanese Input, POBox and S-Shoin all support flick input. Godan layout In addition to the industry-standard QWERTY and 12-key layouts, Google Japanese Input offers a 15-key Godan keyboard layout, which is an alphabet layout optimized for rōmaji input. The letters fit in a grid of five rows by three columns. The left column consists of the five vowels, in the same order as the columns in the gojūon table (a, i, u, e, o), while the central and right columns consist of letters for the nine main voiceless consonants of kana, in the same order as the rows in the gojūon table (k, s, t, n, [special]; h, m, y, r, w). Other characters are typed by flick gesture: The other twelve Latin consonants not needed for composing kana (b, c, d, f, g, j, l, p, q, v, x, z) are composed on the voiceless consonants by swiping up, right or left (swiping k for q or g; swiping s for j or z; swiping t for c or d; swiping h for f, b or p; swiping m for l; swiping y for x; swiping w for v). The main voiced kana are composed as in rōmaji, by typing (without swiping) the voiceless consonant in the two last columns, then swiping the vowel in the first column. The other voiced kana (with handakuon or small forms) are composed by typing the voiceless consonant, then swiping the vowel, then swiping the [special] key (in the middle of the last row) to select the handakuon (swipe to the left or right) or small kana forms (swipe up).
Small kana can be written by swiping to l or x and then writing the wanted letter; e.g., the input fa and the input hu/fu followed by la/xa both give out ふぁ (fa), as in ファミコン (Famikon). Decimal digits are composed by swiping down on the keys located in the first three rows (digits 1 to 9) or on the middle key of the fourth row (digit 0). The four main punctuation signs are composed by swiping r at the end of the fourth row (swipe down for the comma, left for the full stop, up for the question mark, right for the exclamation mark). Other signs or input controls may be composed by typing or swiping the other unused positions of other keys. The touch-screen version of the layout also adds keys in two additional columns for typing space, Enter, Backspace, moving the input cursor to the left or right, converting the previous character between hiragana and katakana, and selecting other input modes. Writing just c gives out か, く and こ when written with a, u and o respectively, and し and せ when written with i and e, respectively. To write a sokuon before ち (chi), the inputs with this character are lt(s)u/xt(s)u, then ti/chi; the input tchi doesn't work. [Special] consists of ゛, ゜ and the small-form modifier (dakuten, handakuten, small). Unlike the 12-key input, repeating a key in Godan is not interpreted as a gesture to cycle through kana with different vowels; rather, it is interpreted as a repeated rōmaji letter behaving the same as in the QWERTY layout mode. Other Other consumer devices in Japan which allow for text entry via on-screen programming, such as digital video recorders and video game consoles, allow the user to toggle between the numerical keypad and a full keyboard (QWERTY, or ABC order) input system. Kana to kanji conversion After the kana have been input, they are either left as they are, or converted into kanji (Chinese characters). The Japanese language has many homophones, and conversion of a kana spelling (representing the pronunciation) into a kanji (representing the standard written form of the word) is often a one-to-many process. The kana to kanji converter offers a list of candidate kanji writings for the input kana, and the user may use the space bar or arrow keys to scroll through the list of candidates until they reach the correct writing. On reaching the correct written form, pressing the Enter key, or sometimes the "henkan" key, ends the conversion process. This selection can also be controlled through the GUI with a mouse or other pointing device. If hiragana is required, pressing the Enter key immediately after the characters are entered will end the conversion process and result in the hiragana as typed. If katakana is required, it is usually presented as an option along with the kanji choices. Alternatively, on some keyboards, pressing the katakana/hiragana key switches between katakana and hiragana. Sophisticated kana to kanji converters (known collectively as input method editors, or IMEs) allow conversion of multiple kana words into kanji at once, freeing the user from having to do a conversion at each stage. The user can convert at any stage of input by pressing the space bar or henkan button, and the converter attempts to guess the correct division of words. Some IME programs display a brief definition of each word in order to help the user choose the correct kanji.
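The one-to-many candidate mechanism just described can be sketched in a few lines of Python. The homophone table below is a toy stand-in for a real IME's dictionary and statistical ranking, and the move-to-front step is a deliberately minimal version of the learning behaviour discussed in the next section; all entries are illustrative.

# Toy kana-to-kanji candidate list with a minimal "learning" step.
CANDIDATES = {
    "こうえん": ["公園", "講演", "公演", "後援"],  # kōen: park, lecture, performance, backing
    "きかん": ["期間", "機関", "帰還", "気管"],    # kikan: period, organization, return, trachea
}

def candidates(reading):
    # Return the ranked kanji candidates; fall back to the kana as typed.
    return CANDIDATES.get(reading, [reading])

def select(reading, choice):
    # Record a selection by moving it to the front of the candidate list.
    lst = CANDIDATES.setdefault(reading, [])
    if choice in lst:
        lst.remove(choice)
    lst.insert(0, choice)

print(candidates("こうえん"))  # ['公園', '講演', '公演', '後援']
select("こうえん", "講演")     # the user picked "lecture" this time
print(candidates("こうえん"))  # ['講演', '公園', '公演', '後援']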
Sometimes the kana to kanji converter may guess the correct kanji for all the words, but if it does not, the cursor (arrow) keys may be used to move backwards and forwards between candidate words, or digit keys can be used to select one of them directly (without pressing cursor keys multiple times and pressing Enter to confirm the choice). If the selected word boundaries are incorrect, the word boundaries can be moved using the control key (or shift key, e.g. on iBus-Anthy) plus the arrow keys. Learning systems Modern systems learn the user's preferences for conversion and put the most recently selected candidates at the top of the conversion list, and also remember which words the user is likely to use when considering word boundaries. Predictive systems The systems used on mobile phones go even further, and try to guess entire phrases or sentences. After a few kana have been entered, the phone automatically offers entire phrases or sentences as possible completion candidates, jumping beyond what has been input. This is usually based on words sent in previous messages. See also Japanese language and computers Japanese typewriter List of input methods for Unix platforms Wāpuro rōmaji Wabun Code – Japanese Morse code List of CJK fonts Chinese input methods for computers Wnn Kotoeri ATOK Google Japanese Input Arabic keyboard References External links Microsoft (Office) IME CJK for Windows XP thru' Windows 7, see Windows#Multilingual support Online Japanese Virtual Keyboard Ajax IME: Web-based Japanese Input Method LiteType: Japanese Interactive Virtual Keyboard How to change between Japanese input methods (direct kana to rōmaji input) on Windows operating systems: ローマ字入力・ひらがな入力切替方法 Input methods CJK input methods Keyboard layouts Input methods
Japanese input method
[ "Technology" ]
2,690
[ "Input methods", "Natural language and computing" ]
2,941,264
https://en.wikipedia.org/wiki/Hypervitaminosis%20A
Hypervitaminosis A refers to the toxic effects of ingesting too much preformed vitamin A (retinyl esters, retinol, and retinal). Symptoms arise as a result of altered bone metabolism and altered metabolism of other fat-soluble vitamins. Hypervitaminosis A is believed to have occurred in early humans, and the problem has persisted throughout human history. Toxicity results from ingesting too much preformed vitamin A from foods (such as liver), supplements, or prescription medications and can be prevented by ingesting no more than the recommended daily amount. Diagnosis can be difficult, as serum retinol is not sensitive to toxic levels of vitamin A, but there are effective tests available. Hypervitaminosis A is usually treated by stopping intake of the offending food(s), supplement(s), or medication. Most people make a full recovery. High intake of provitamin carotenoids (such as beta-carotene) from vegetables and fruits does not cause hypervitaminosis A. Signs and symptoms Symptoms may include: Changes in consciousness Decreased appetite Dizziness Vision changes, double vision (in young children) Drowsiness Headache Irritability Nausea Vomiting Signs Poor weight gain (in infants and children) Skin and hair changes Cracking at corners of the mouth Hair loss (alopecia) Higher sensitivity to sunlight Swelling of lips (cheilitis) Dryness of lips, mouth, eyes, and inside the nose Skin peeling, itching Yellow discoloration of the skin (aurantiasis cutis) Abnormal softening of the skull bone (craniotabes in infants and children) Blurred vision Bone pain or swelling Bulging fontanelle (in infants) Gastric mucosal calcinosis Heart valve calcification Hypercalcemia Increased intracranial pressure manifesting as cerebral edema, papilledema, and headache (may be referred to as idiopathic intracranial hypertension) Liver damage Premature epiphyseal closure Spontaneous fracture Uremic pruritus Causes Hypervitaminosis A results from excessive intake of preformed vitamin A. Genetic variations in tolerance to vitamin A intake may occur, so the toxic dose will not be the same for everyone. Children are particularly sensitive to vitamin A, with daily intakes of 1500 IU/kg body weight reportedly leading to toxicity. Types of vitamin A It is "largely impossible" for provitamin carotenoids, such as beta-carotene, to cause toxicity, as their conversion to retinol is highly regulated. No vitamin A toxicity has ever been reported from ingestion of excessive amounts. Overconsumption of beta-carotene can only cause carotenosis, a harmless and reversible cosmetic condition in which the skin turns orange. Preformed vitamin A, by contrast, is absorbed and stored very efficiently, as discussed under Mechanism below. Sources of toxicity Diet – Liver is high in vitamin A. The livers of certain animals, including the polar bear, bearded seal, fish and walrus, are particularly toxic. It has been estimated that eating polar bear liver can deliver a toxic dose of vitamin A to a human. Supplements – Dietary supplements can be toxic when taken above recommended dosages. Types of toxicity Acute toxicity occurs over a period of hours or a few days, and is less of a problem than chronic toxicity. Chronic toxicity results from daily intakes greater than 25,000 IU for 6 years or longer and more than 100,000 IU for 6 months or longer. Mechanism Retinol is absorbed and stored in the liver very efficiently until a pathologic condition develops.
Delivery to tissues Absorption When ingested, 70–90% of preformed vitamin A is absorbed and used. According to a 2003 review, water-miscible, emulsified, and solid forms of vitamin A supplements are more toxic than oil-based supplements and liver sources. Storage Eighty to ninety percent of the total body reserves of preformed vitamin A are in the liver (with 80–90% of this amount being stored in hepatic stellate cells and the remaining 10–20% being stored in hepatocytes). Fat is another significant storage site, while the lungs and kidneys may also be capable of storage. Transport Until recently, it was thought that the sole important retinoid delivery pathway to tissues involved retinol bound to retinol-binding protein (RBP4). More recent findings, however, indicate that retinoids can be delivered to tissues through multiple overlapping delivery pathways, involving chylomicrons, very low-density lipoprotein (VLDL) and low-density lipoprotein (LDL), retinoic acid bound to albumin, water-soluble β-glucuronides of retinol and retinoic acid, and provitamin A carotenoids. Effects Effects include increased bone turnover and altered metabolism of fat-soluble vitamins. More research is needed to fully elucidate the effects. Increased bone turnover Retinoic acid suppresses osteoblast activity and stimulates osteoclast formation in vitro, resulting in increased bone resorption and decreased bone formation. It is likely to exert this effect by binding to specific nuclear receptors (members of the retinoic acid receptor or retinoid X receptor nuclear transcription family) which are found in every cell (including osteoblasts and osteoclasts). This change in bone turnover is likely to be the reason for numerous effects seen in hypervitaminosis A, such as hypercalcemia and numerous bone changes such as bone loss that potentially leads to osteoporosis, spontaneous bone fractures, altered skeletal development in children, skeletal pain, radiographic changes, and bone lesions. Altered fat-soluble vitamin metabolism Preformed vitamin A is fat-soluble and high levels have been reported to affect metabolism of the other fat-soluble vitamins D, E, and K. The toxic effects of preformed vitamin A might be related to altered vitamin D metabolism, concurrent ingestion of substantial amounts of vitamin D, or binding of vitamin A to receptor heterodimers. Antagonistic and synergistic interactions between these two vitamins have been reported, as they relate to skeletal health. Stimulation of bone resorption by vitamin A has been reported to be independent of its effects on vitamin D. Mitochondrial toxicity Preformed vitamin A and retinoids exert several toxic effects on the redox environment and on mitochondrial function. Diagnosis Retinol concentrations are nonsensitive indicators Assessing vitamin A status in persons with subtoxicity or toxicity is complicated because serum retinol concentrations are not sensitive indicators in this range of liver vitamin A reserves.
The range of serum retinol concentrations under normal conditions is 1–3 μmol/L and, because of homeostatic regulation, that range varies little with widely disparate vitamin A intakes. Retinol esters have been used as markers Retinyl esters can be distinguished from retinol in serum and other tissues and quantified with the use of methods such as high-performance liquid chromatography. Elevated amounts of retinyl ester (i.e., >10% of total circulating vitamin A) in the fasting state have been used as markers for chronic hypervitaminosis A in humans and monkeys. This increased retinyl ester may be due to decreased hepatic uptake of vitamin A and the leaking of esters into the bloodstream from saturated hepatic stellate cells. Prevention Hypervitaminosis A can be prevented by not ingesting more than the US Institute of Medicine's daily Tolerable Upper Intake Level for vitamin A. This level applies to synthetic and natural retinol ester forms of vitamin A; carotene forms from dietary sources are not toxic. Possible pregnancy, liver disease, high alcohol consumption, and smoking are indications for close monitoring and limitation of vitamin A administration. Treatment Stopping high vitamin A intake is the standard treatment, and most people fully recover. Phosphatidylcholine (in the form of PPC or DLPC), the substrate for lecithin retinol acyltransferase, which converts retinol into retinyl esters (the storage forms of vitamin A), has also been used as supplementation. Vitamin E may alleviate hypervitaminosis A. Liver transplantation may be a valid option if no improvement occurs. If liver damage has progressed into fibrosis, synthesizing capacity is compromised and supplementation can replenish PC; however, recovery is dependent on removing the causative agent: halting high vitamin A intake. History Vitamin A toxicity is known to be an ancient phenomenon; fossilized skeletal remains of early humans suggest bone abnormalities may have been caused by hypervitaminosis A, as observed in a fossilized leg bone of an individual of Homo erectus, which bears abnormalities similar to those observed in people suffering from an overdose of vitamin A in the present day. Vitamin A toxicity has long been known to the Inuit, who will not eat the livers of polar bears or bearded seals because they contain dangerous amounts of vitamin A. It has been known to Europeans since at least 1597, when Gerrit de Veer wrote in his diary that, while taking refuge during the winter in Nova Zemlya, he and his men became severely ill after eating polar bear liver. In 1913, Antarctic explorers Douglas Mawson and Xavier Mertz were both poisoned (and Mertz died) from eating the livers of their sled dogs during the Far Eastern Party. Another study suggests, however, that exhaustion and a change of diet are more likely to have caused the tragedy. Other animals Some Arctic animals demonstrate no signs of hypervitaminosis A despite having 10–20 times the level of vitamin A in their livers as other Arctic animals. These animals are top predators and include the polar bear, Arctic fox, bearded seal, and glaucous gull. This ability to efficiently store higher amounts of vitamin A may have contributed to their survival in the extreme environment of the Arctic. Treatment These treatments have been used to help treat or manage toxicity in animals. Although not considered part of standard treatment, they might be of some benefit to humans.
Vitamin E appears to be an effective treatment in rabbits and prevents side effects in chicks. Taurine significantly reduces toxic effects in rats; retinoids can be conjugated by taurine and other substances, and significant amounts of retinotaurine are excreted in the bile. This retinol conjugate is thought to be an excretory form, as it has little biological activity. Red yeast rice ("cholestin") significantly reduces toxic effects in rats. Vitamin K prevents hypoprothrombinemia in rats and can sometimes control the increase in plasma/cell ratios of vitamin A. See also Vitamin poisoning Far Eastern Party Retinoic acid syndrome Piblokto References External links Facts about Vitamin A and Carotenoids, from the National Institutes of Health's Office of Dietary Supplements. Effects of external causes Hypervitaminosis
Hypervitaminosis A
[ "Chemistry" ]
2,413
[ "Vitamin A", "Biomolecules" ]
2,941,387
https://en.wikipedia.org/wiki/Restriction%20fragment
A restriction fragment is a DNA fragment resulting from the cutting of a DNA strand by a restriction enzyme (restriction endonuclease), a process called restriction. Each restriction enzyme is highly specific, recognising a particular short DNA sequence, or restriction site, and cutting both DNA strands at specific points within this site. Most restriction sites are palindromic (the sequence of nucleotides is the same on both strands when read in the 5' to 3' direction of each strand) and are four to eight nucleotides long. Many cuts are made by one restriction enzyme because of the chance repetition of these sequences in a long DNA molecule, yielding a set of restriction fragments. A particular DNA molecule will always yield the same set of restriction fragments when exposed to the same restriction enzyme. Restriction fragments can be analyzed using techniques such as gel electrophoresis or used in recombinant DNA technology. Applications In recombinant DNA technology, specific restriction endonucleases are used that will isolate a particular gene and cleave the sugar-phosphate backbones at different points (retaining symmetry), so that the double-stranded restriction fragments have single-stranded ends. These short extensions, called sticky ends, can form hydrogen-bonded base pairs with complementary sticky ends on any other DNA cut with the same enzyme (such as a bacterial plasmid). In agarose gel electrophoresis, the restriction fragments yield a band pattern characteristic of the original DNA molecule and the restriction enzyme used; for example, the relatively small DNA molecules of viruses and plasmids can be identified simply by their restriction fragment patterns. If the nucleotide differences between two alleles occur within the restriction site of a particular restriction enzyme, digesting DNA segments from individuals carrying the different alleles with that enzyme will produce different fragments, which will in turn yield different band patterns in gel electrophoresis. References Molecular biology Restriction enzymes
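The fragment arithmetic described above is easy to make concrete. The sketch below uses EcoRI, which recognizes the palindromic site GAATTC and cuts after the first base (G^AATTC), leaving sticky ends; the input sequence is made up for illustration, and only one strand of a linear molecule is modeled.

# Compute the restriction fragments of a linear DNA strand for one enzyme.
SITE = "GAATTC"   # EcoRI recognition site (palindromic)
CUT_OFFSET = 1    # EcoRI cuts after the first base: G^AATTC

def restriction_fragments(dna, site=SITE, offset=CUT_OFFSET):
    cuts = []
    pos = dna.find(site)
    while pos != -1:
        cuts.append(pos + offset)
        pos = dna.find(site, pos + 1)
    bounds = [0] + cuts + [len(dna)]
    return [dna[a:b] for a, b in zip(bounds, bounds[1:])]

dna = "ATGAATTCGGCTTAGAATTCCA"
for frag in restriction_fragments(dna):
    print(len(frag), frag)  # 3 ATG / 12 AATTCGGCTTAG / 7 AATTCCA

The same molecule always yields the same fragment lengths, which is why the band pattern on a gel is characteristic of both the DNA and the enzyme used.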
Restriction fragment
[ "Chemistry", "Biology" ]
401
[ "Genetics techniques", "Restriction enzymes", "Biochemistry", "Molecular biology" ]
2,941,579
https://en.wikipedia.org/wiki/Sergei%20Pankejeff
Sergei Konstantinovitch Pankejeff (24 December 1886 – 7 May 1979) was a Russian aristocrat from Odesa, Russian Empire. Pankejeff is best known for being a patient of Sigmund Freud, who gave him the pseudonym of Wolf Man (German: der Wolfsmann) to protect his identity, after a dream Pankejeff had of a tree full of white wolves. Biography Early life and education Pankejeff was born on 24 December 1886 at his family's estate near Kakhovka on the river Dnieper. The Pankejeff family (Freud's German transliteration from the Russian; in English it would be transliterated as Pankeyev) was a wealthy family in St. Petersburg. His father was Konstantin Matviyovich Pankeyev and his mother was Anna Semenivna, née Shapovalova. Pankejeff's parents were married young and had a happy marriage, but his mother became sickly and was therefore somewhat absent from the lives of her two children. Pankejeff would later describe her as cold and lacking tenderness, though she would show him special affection when he was ill. His father Konstantin, while a cultured man and a keen hunter, was also an alcoholic who suffered from depressive episodes. He had been treated by Moshe Wulff (a disciple of Freud) and would later be diagnosed by Kraepelin with manic-depressive disorder. His mother (Pankejeff's grandmother) had fallen into a depressive state after the death of a daughter and was thought to have died by suicide, while a paternal uncle of Pankejeff's was diagnosed with paranoia by the neuropsychiatrist Korsakov and admitted to an asylum. Sergei and his sister Anna were brought up by two servants, Nanja and Grusha, and an English governess named Miss Oven. Sergei's education was later taken over by male tutors. Sergei attended a grammar school in Russia, but after the 1905 Russian Revolution he spent considerable time abroad studying. Psychological problems During his review of Freud's letters and other files, Jeffrey Moussaieff Masson uncovered notes for an unpublished paper by Freud's associate Ruth Mack Brunswick. Freud had asked her to review the Pankejeff case, and she discovered evidence that Pankejeff had been sexually abused by a family member during his childhood. In 1906, his older sister Anna committed suicide by taking quicksilver (mercury) while visiting the site of Mikhail Lermontov's fatal duel; she died after two weeks of agony. By 1907, Sergei began to show signs of serious depression. Sergei's father Konstantin also suffered from depression, often connected to specific political happenings of the day, and committed suicide in 1907 by consuming an excess of sleeping medication, a few months after Sergei had left for Munich to seek treatment for his own ailment. While in Munich, Pankejeff saw many doctors and stayed voluntarily at a number of elite psychiatric hospitals. In the summers, he always visited Russia. During a stay in Kraepelin's sanatorium near Neuwittelsbach, he met a nurse who worked there, Theresa-Maria Keller, with whom he fell in love and whom he wanted to marry. Upon learning about the relationship, Pankejeff's family was against it: not only was Keller from a lower class, but she was also older than Pankejeff and a divorced woman with a daughter. The couple married in 1914. Der Wolfsmann (The Wolf Man) In January 1910, Pankejeff's physician Leonid Drosnes brought him to Vienna for treatment with Freud.
Pankejeff and Freud met with each other many times between February 1910 and July 1914, and a few times thereafter, including a brief psychoanalysis in 1919. Pankejeff's "nervous problems" included his inability to have bowel movements without the assistance of an enema, as well as debilitating depression. Initially, according to Freud, Pankejeff resisted opening up to full analysis, until Freud gave him a year deadline for analysis, prompting Pankejeff to give up his resistances. Freud's first publication on the "Wolf Man" was "From the History of an Infantile Neurosis" (Aus der Geschichte einer infantilen Neurose), written at the end of 1914, but not published until 1918. Freud's treatment of Pankejeff centered on a dream the latter had as a very young child which he described to Freud: I dreamt that it was night and that I was lying in bed. (My bed stood with its foot towards the window; in front of the window there was a row of old walnut trees. I know it was winter when I had the dream, and night-time.) Suddenly the window opened of its own accord, and I was terrified to see that some white wolves were sitting on the big walnut tree in front of the window. There were six or seven of them. The wolves were quite white, and looked more like foxes or sheep-dogs, for they had big tails like foxes and they had their ears pricked like dogs when they pay attention to something. In great terror, evidently of being eaten up by the wolves, I screamed and woke up. My nurse hurried to my bed, to see what had happened to me. It took quite a long while before I was convinced that it had only been a dream; I had had such a clear and life-like picture of the window opening and the wolves sitting on the tree. At last I grew quieter, felt as though I had escaped from some danger, and went to sleep again.(Freud 1918) Freud's eventual analysis (along with Pankejeff's input) of the dream was that it was the result of Pankejeff having witnessed a "primal scene" — his parents having sex a tergo or more ferarum ("from behind" or "doggy style") — at a very young age. Later in the paper, Freud posited the possibility that Pankejeff instead had witnessed copulation between animals, which was displaced to his parents. Pankejeff's dream played a major role in Freud's theory of psychosexual development, and along with Irma's injection (Freud's own dream, which launched dream analysis), it was one of the most important dreams for the developments of Freud's theories. Additionally, Pankejeff became one of the main cases used by Freud to prove the validity of psychoanalysis. It was the third detailed case study, after "Notes Upon a Case of Obsessional Neurosis" in 1908 (also known by its animal nickname "Rat Man"), that did not involve Freud analyzing himself, and which brought together the main aspects of catharsis, the unconscious, sexuality, and dream analysis put forward by Freud in his Studies on Hysteria (1895), The Interpretation of Dreams (1899), and his Three Essays on the Theory of Sexuality (1905). Later life Pankejeff later published his own memoir under Freud's given pseudonym and remained in contact with Freudian disciples until his own death (undergoing analysis for six decades despite Freud's pronouncement of his being "cured"), making him one of the longest-running famous patients in the history of psychoanalysis. A few years after finishing psychoanalysis with Freud, Pankejeff developed a psychotic delirium. 
He was observed in a street staring at his reflection in a mirror, convinced that, after he had consulted and been treated by a dermatologist to correct a minor injury on his nose, the dermatologist had left him with what he perceived to be a hole in his nose. This obsession with the perceived flaw led to an obsessive compulsion to look at himself "in every shop window; he carried a pocket mirror ... his fate depended on what it revealed or was about to reveal." Ruth Mack Brunswick, a Freudian, explained the delusion as displaced castration anxiety. Having lost most of his family's wealth after the Russian Revolution, Pankejeff supported himself and his wife on his salary as an insurance clerk. The psychoanalytical movement also provided Pankejeff with financial support in Vienna; psychoanalysts like Kurt Eissler (a former student of Freud's) dissuaded Pankejeff from talking to any media. The reason was that Pankejeff was one of Freud's most famous "cured" patients, and revealing that he was still suffering from mental illness would have hurt the reputation of Freud and of psychoanalysis; Pankejeff was essentially bribed to keep quiet. In 1938, Pankejeff's wife committed suicide by inhaling gas. She had been depressed since the death of her daughter. As this coincided with the Anschluss and the wave of suicides among Jews trapped in Austria, research has also suggested that she was actually Jewish and that her suicide was prompted by her fear of the Nazis. Facing a major crisis, and unable to get help from Mack Brunswick, who had fled to Paris, Pankejeff approached Muriel Gardiner, who managed to get him a visa to travel there. He later followed her to London before returning to Vienna in 1938. Throughout the following decades, Pankejeff went through emotional crises that ultimately left him depressive; one of them was the death of his mother in 1953. Pankejeff received intermittent treatment for these episodes from various psychoanalysts, most frequently from the head of the Vienna Psychoanalytical Society, Alfred von Winterstein, and then from his successor, Wilhelm Solms-Rödelheim. Gardiner also supplied him with "wonder pills" (Dexamyl) to help alleviate his emotional turmoil. Pankejeff eventually broke his silence and agreed to talk to Karin Obholzer; their conversations, which took place between January 1974 and September 1976, were recounted in the book Conversations with the Wolf-Man Sixty Years Later, published in 1980, after Pankejeff's death and in accordance with his own wishes. In Pankejeff's own words, his treatment by Freud had been "catastrophic." In July 1977, Pankejeff suffered a heart attack and then contracted pneumonia, and he was admitted to the Steinhof psychiatric hospital in Vienna. Death Pankejeff died on 7 May 1979, at the age of 92. Criticism of Freud's interpretation Critics, beginning with Otto Rank in 1926, have questioned the accuracy and efficacy of Freud's psychoanalytic treatment of Pankejeff. Similarly, in the mid-20th century, psychiatrist Hervey Cleckley dismissed Freud's diagnosis as far-fetched and entirely speculative. Dorpat has suggested that Freud's behavior in the Pankejeff case was an example of gaslighting (attempting to undermine someone's perceptions of reality).
Mária Török and Nicolas Abraham reinterpreted the Wolf Man's case (in The Wolf Man's Magic Word: A Cryptonymy), presenting their notion of "the crypt" and what they call "cryptonyms". They provide a different analysis of the case than Freud, whose conclusions they criticise. According to the authors, Pankejeff's statements hide other statements, and the actual content of his words can be illuminated by looking into his multilingual background. In their reading, Pankejeff hid secrets concerning his older sister; as the Wolf Man both wanted to forget and preserve these issues, he encrypted his older sister, as an idealised "other", in the heart of himself, and spoke these secrets out loud in a cryptic manner, through words hiding behind words, rebuses, wordplays, etc. For example, in the Wolf Man's dream, where six or seven wolves were sitting in a tree outside his bedroom window, the expression "pack of six", a "sixter" (shiestorka), echoes siestorka ("sister"), which leads to the conclusion that his sister is placed at the centre of the trauma. The case forms a central part of the second plateau of Gilles Deleuze and Félix Guattari's A Thousand Plateaus, titled "One or Several Wolves?". In it, they repeat the accusation made in Anti-Oedipus that Freudian analysis is unduly reductive and that the unconscious is actually a "machinic assemblage". They argue that wolves are a case of the pack or multiplicity and that the dream was part of a schizoid experience. See also Notes References Whitney Davis, Drawing the Dream of the Wolves: Homosexuality, Interpretation and Freud's 'Wolf Man' (Indianapolis: Indiana University Press, 1995). Sigmund Freud, "From the History of an Infantile Neurosis" (1918), reprinted in Peter Gay, The Freud Reader (London: Vintage, 1995). Muriel Gardiner, The Wolf-Man and Sigmund Freud, London: Routledge, 1971. Karin Obholzer, The Wolf-Man Sixty Years Later, tr. M. Shaw, London: Routledge & P. Kegan, 1982, p. 36. Patrick J. Mahony, Cries of the Wolf Man, New York: International Universities Press, 1984. "The Wolf-Man" [Sergei Pankejeff], The Wolf-Man (Pankejeff's memoirs, along with essays by Freud and Ruth Mack Brunswick) (New York: Basic Books, 1971). James L. Rice, Freud's Russia: National Identity in the Evolution of Psychoanalysis (New Brunswick, NJ: Transaction Publishers, 1993), 94–98. Maria Torok and Nicolas Abraham, The Wolf Man's Magic Word: A Cryptonymy, 1986. External links Freud exhibit which contains images of Pankejeff 1886 births 1979 deaths Analysands of Ruth Mack Brunswick Analysands of Sigmund Freud Case studies by Sigmund Freud Dream People from Odesa Nobility from the Russian Empire Vasylivka, Odesa Raion Emigrants from the Russian Empire to Austria-Hungary
Sergei Pankejeff
[ "Biology" ]
2,974
[ "Dream", "Behavior", "Sleep" ]
2,941,610
https://en.wikipedia.org/wiki/Secondary%20deviance
From a sociological perspective, deviance is defined as the violation of, or drift from, accepted social norms. Secondary deviance is a stage in a theory of deviant identity formation. Edwin Lemert introduced the concept in 1951: primary deviance is engaging in the initial act of deviance, and secondary deviance is the process of integrating a deviant identity into one's conception of self, potentially affecting the individual in the long term. For example, if a gang engaged in primary deviant behavior such as acts of violence, dishonesty or drug addiction, and subsequently moved to legally deviant or criminal behavior, such as murder, this would be the stage of secondary deviance. Primary acts of deviance are common to everyone; however, they are rarely thought of as criminal acts. Secondary deviance is much more likely to be considered criminal in a social context. The act is likely to be labelled as deviant and criminal, and the individual may internalize that label and act accordingly. Lemert made another distinction between primary deviance and secondary deviance. Originally, there may not be a distinguished group of "deviant" people; instead, we all switch in and out of deviant behavior, and only a minority of the individuals committing rule-breaking acts actually attract the attention of others. At that moment, a person is engaging in secondary deviance, and they are said to start following a more deviant path, or a deviant career: a set of roles shaped by the reactions of others in different situations. One's self-identity is vulnerable to social judgement and criticism, and once more we see the continued interplay between mind, self and society. As Erving Goffman (1961, 1963) showed, when an individual is labelled with a "discrediting" social attribute, even one as ordinary as shyness, the label can often serve as a permanent mark on their character. Deviancy Process Lemert set out the process by which, in his account, an individual becomes a secondary deviant: Primary deviation; Societal penalties; Further primary deviation; Stronger penalties and rejections; Further deviation; Crisis reached in the tolerance quotient, expressed in formal action by the community stigmatizing the deviant; Strengthening of the deviant conduct as a reaction to the stigmatizing and penalties; and Ultimate acceptance of deviant social status and efforts at adjustment on the basis of the associated role. Secondary deviance in culture and society Japan In Japan, punitive sanctions tend to be more important. Prison conditions are harsh, and some interrogated offenders have their rights disregarded. Nevertheless, Japan has decreased its criminal recidivism rate. Criminal recidivism is the repetition of criminal behavior by an offender previously convicted and punished for an offence; it is also a measure of the effectiveness of rehabilitation programs or of the deterrent effect of punishment. In explaining recidivism in the U.S., the labeling or secondary deviance perspective has some merit. Individuals in each country hold points of view that are shaped by their own culture. As part of the individualism fostered in the United States, individuals are taught to seek self-importance and personal autonomy. One learns that he or she is not supposed to submit to others but should always try to rise above and beyond others.
The offender emerges from the weak relationship between the individual and a society that requires them to accept authority. The offender is usually prepared to test their social power and to respond negatively, in order to prove that they are still more important than society itself. In contrast to the conventional labeling perspective, on which labeling itself is said to promote secondary deviance, it is this broader social reaction that aggravates secondary deviance. In Japan, an individual appreciates the society in which he was born and raised; that tendency comes from what is learned culturally about integration with society. The social reaction towards offenders in Japan has slighter recidivist consequences than in the United States. See also Deviance (sociology) Drug addiction Edwin Lemert Erving Goffman Interactionism Labelling theory Primary deviance Recidivism References Sociological terminology Deviance (sociology)
Secondary deviance
[ "Biology" ]
856
[ "Deviance (sociology)", "Behavior", "Human behavior" ]
2,941,630
https://en.wikipedia.org/wiki/Primary%20deviance
Primary deviance is the initial stage in defining deviant behavior. Prominent sociologist Edwin Lemert conceptualized primary deviance as engaging in the initial act of deviance. This is very common throughout society, as everyone takes part in basic norm violations. Primary deviance does not result in a person internalizing a deviant identity, so one does not alter their self-concept to include this deviant identity. It is not until the act becomes labeled or tagged that secondary deviation may materialize. According to Lemert, primary deviance comprises the acts carried out by the individual that allow them to be tagged with the deviant label. Influences on primary deviant behavior Family and home life Parental support, and the influence that parents have on their children, is one of the largest contributors to the behavior of adolescents. This is the primary stage in which behaviors, morals and values are learned and adopted. The guidance from parents is intended to mold and shape the behaviors that will qualify children to function properly in society. Praise, love, affection, encouragement and many other aspects of positive reinforcement are some of the largest components of parental support. However, this is not all it takes to prevent deviant behaviors from forming and occurring. Parents must enforce "effective discipline, monitoring, and problem-solving techniques." Children who come from homes where parents do not reinforce positive behaviors and do not punish deviant behaviors appropriately are likely to engage in deviant behaviors; this type of bond is considered weak and causes the child to act out and become deviant. Peers Strong parental bonds are essential to the social group that the child will choose to associate with. When there is little to no control in the home, no positive reinforcement from parents, and the child does not have positive feelings towards schooling and education, they are more likely to associate with deviant peers, and when associating with deviant peers, they are more accepting of deviant behaviors than if they had chosen another social group. This is why it is vital that the parent-child bond be strong: it will ultimately influence the peers the child chooses and whether they choose to engage in primary deviant behaviors as juveniles. Sociological contributors Frank Tannenbaum Frank Tannenbaum theorized that primary deviant behaviors may be innocent or fun for those committing the acts, but can become a nuisance and be viewed as some form of delinquency by parents, educators and even law enforcement. Tannenbaum distinguished two different types of deviancy. The first is the initial act, which the child considers innocent but which adults label as deviant; this label is called "primary deviancy". The second comes after the individual has been initially labeled: they graduate to secondary deviance, in which both the adult and the child agree that they are a deviant. Tannenbaum stated that the "over-dramatization" of these deviant acts can cause one to be labeled and to accept the label of being a deviant. Having accepted this label, they will eventually graduate from being a primary deviant to a secondary deviant, thus committing greater crimes. Theoretical approaches Labeling theory The most prevalent theory as it relates to primary deviance was developed in the early 1960s by a group of sociologists and was titled "labeling theory". The labeling theory is a variant of symbolic interactionism.
Symbolic interactionism is "a theoretical approach in sociology developed by George Herbert Mead. It emphasizes the roles of symbols and language as core elements of human interaction." Labels, according to labeling theorists, are applied by those put in place to keep law and order, such as police officers and judges; these are the people who typically label those who have violated some law or other. The label "deviant" does not come from the person who has committed the act, but from someone more powerful than the person being labeled. This theory has been heavily criticized for not being able to explain what causes deviance in the first place. However, the labeling theory's main focus is to explain how labeling relates to, and can cause, secondary deviance. Anomie theory Robert Merton developed the anomie theory, which was dedicated specifically to the causes of deviance. The word anomie was derived from the "godfather of sociology", Émile Durkheim. Anomie is "the breakdown of social norms that results from society's urging people to be ambitious but failing to provide them with legitimate opportunities to succeed". Merton theorized that society places substantial emphasis on the importance of achieving success. However, this goal is not attainable for people of all social classes. Due to the absence of resources allowing people of lower social classes to achieve a great level of success, Merton theorized, people are forced to commit deviant acts; he labeled such deviant behavior "innovation". Social learning theory The social learning theory holds that deviant behavior is learned through social interactions with other people. Edwin Sutherland developed an explanation of how one learns deviant behavior, called differential association. Differential association Differential association theorizes that "If an individual associates with people who hold deviant ideas more than with people who embrace conventional ideas, the individual is likely to become deviant." The person presenting the deviant act is not always necessarily themselves the deviant. The emphasis of differential association is that if someone is presented with the opportunity, they will likely commit the act. Although someone may associate with both deviants and those who hold conventional ideas, if the deviant contacts outweigh the conventional contacts then deviancy is likely to occur. Differential association's key point is the association itself, and differential association is theorized to be "the cause of deviance". Example of primary deviance Charles Manson One person who was labeled as deviant was the infamous murderer Charles Manson. Manson was born to 16-year-old Kathleen Maddox on November 12, 1934, in Cincinnati, Ohio. Manson's father, Colonel Scott, left Manson's mother to raise him alone. When Charles was seven years old, he was sent to live with his aunt and uncle in McMechen, West Virginia, after his mother was sentenced to five years in prison for armed robbery. Living with his aunt and uncle, Manson was given a more stable life that could have allowed him to be a positive contributor to society. However, the absence of his mother, and the yearning he had for motherly love and affection, led Manson to indulge in primary deviant behavior at a young age, which ultimately developed into secondary deviance as he became older.
Following the counsel of another uncle, a "mountain man" who lived in the mountains of Kentucky, Manson labeled himself a rebel. Manson's first act of deviancy came at the age of nine, when he set his school on fire and was sent to reform school. Throughout his adolescence, Manson was sent to several reform schools in hopes of rehabilitating him. Between 1942 and 1947, after her release from prison, Manson's mother was unable to properly care for him and was unsuccessful in finding him a foster home. She turned him over to the courts, which placed him in an all-boys school called the Gibault School for Boys. Ten months later, Manson ran away from the Gibault School for Boys in hopes of rekindling the relationship he had longed for with his mother. After she rejected him, Manson turned to a life of deviancy. Manson thrived on high-consensus deviant acts such as burglary and theft. Manson was then sent to Father Flanagan's Boys' Home in 1949. After four days at Father Flanagan's Boys' Home, Manson ran away and pursued other deviant acts, such as auto theft, burglary, and armed robbery. Manson ran away 18 times from the National Training School for Boys, where he alleged he was molested and beaten. This behavior in Manson's early years caused the label of deviant to shadow him through his adult life, where he graduated to secondary deviance and eventually led the dangerous cult known as the Manson Family. References Criminology Deviance (sociology) Sociological theories
Primary deviance
[ "Biology" ]
1,663
[ "Deviance (sociology)", "Behavior", "Human behavior" ]
2,941,730
https://en.wikipedia.org/wiki/Powered%20speakers
Powered speakers, also known as self-powered speakers and active speakers, are loudspeakers that have built-in amplifiers. Powered speakers are used in a range of settings, including in sound reinforcement systems (used at live music concerts), both for the main speakers facing the audience and the monitor speakers facing the performers; by DJs performing at dance events and raves; in private homes as part of hi-fi or home cinema audio systems; and as computer speakers. They can be connected directly to a mixing console or other low-level audio signal source without the need for an external amplifier. Some active speakers designed for sound reinforcement system use have an onboard mixing console and microphone preamplifier, which enables microphones to be connected directly to the speaker. Active speakers have several advantages, the most obvious being their compactness and simplicity. Additionally, the amplifier(s) can be designed to closely match the optimal requirements of the speaker they will power, and the speaker designer is not required to include a passive crossover, decreasing production cost and possibly improving sound quality. Some also claim that the shorter distances between components can decrease external interference and increase fidelity, although this is highly dubious and the reciprocal argument can also be made. Disadvantages include heavier loudspeaker enclosures; reduced reliability due to the active electronic components within; and the need to supply both the audio signal and power to every unit separately, typically requiring two cables to be run to each speaker (as opposed to the single cable required with passive speakers and an external amplifier). Powered speakers are available with passive or active crossovers built into them. Since the early 2000s, powered speakers with active crossovers and other DSP have become common in sound reinforcement applications and in studio monitors. Home theater and add-on domestic/automotive subwoofers have used active powered speaker technology since the late 1980s. Differences The terms "powered" and "active" have been used interchangeably in loudspeaker design; however, a distinction may be made between the terms: In a passive loudspeaker system, the low-level audio signal is first amplified by an external power amplifier before being sent to the loudspeaker, where the signal is split by a passive crossover into the appropriate frequency ranges before being sent to the individual drivers. This design is common in home audio as well as professional concert audio. A powered loudspeaker works the same way as a passive speaker, but the power amplifier is built into the loudspeaker enclosure. This design is common in compact personal speakers such as those used to amplify portable digital music devices. In a fully active loudspeaker system, each driver has its own dedicated power amplifier. The low-level audio signal is first sent through an active crossover to split the audio signal into the appropriate frequency ranges before being sent to the power amplifiers and then on to the drivers. This design is commonly seen in studio monitors and professional concert audio. Hybrid active designs exist, such as three drivers powered by two internal amplifiers. In this case, an active two-way crossover splits the audio signal, usually into low frequencies and mid-high frequencies.
The low-frequency driver is driven by its own amplifier channel, while the mid- and high-frequency drivers share an amplifier channel, the output of which is split by a passive two-way crossover. Integrated active systems The term "active speakers" can also refer to an integrated "active system" in which passive loudspeakers are mated to an external system of multiple amplifiers fed by an active crossover. These active loudspeaker systems may be built for professional concert touring, such as the pioneering JM-3 system designed in 1971 by Harry McCune Sound Service, or they may be built for high-end home use, such as various systems from Naim Audio and Linn Products. History Some of the first powered loudspeakers were JBL monitor speakers. With the addition of the SE401 Stereo Energizer, introduced in 1964, any pair of monitor speakers could be converted to self-powered operation, with the second speaker powered by the first. The first studio monitor with an active crossover was the OY, introduced in 1967 by Klein + Hummel. It was a hybrid three-way design with two internal amplifier channels. An early example of a bi-amplified powered studio monitor is the Altec 9846B, introduced in 1971, which combined the passive 9846-8A speaker with the new 771B Bi-amplifier, with 60 watts for the woofer and 30 watts for the high-frequency compression driver. In the late 1970s, Paramount Pictures contracted with AB Systems to design a powered speaker system. In 1980, Meyer Sound Laboratories produced an integrated active 2-way system, the passive UPA-1, which incorporated lessons John Meyer learned on the McCune JM-3. It used active electronics mounted outside of the loudspeaker enclosure, including Meyer's integrated active crossover with feedback comparator circuits determining the level of limiting, often connected to third-party customer-specified amplifiers. In 1990, Meyer produced its first powered speaker: the HD-1, a 2-way studio monitor with all internal electronics. In the early '90s, after years of dealing with the disadvantages of passive systems, especially varying gain settings on third-party amplifiers, John Meyer decided to stop making passive speakers and devote his company to active designs. Meyer said he "hired an ad agency to research how people felt about powered speakers for sound reinforcement, and they came back after a survey and said that nobody wanted them." Sound reinforcement system operators said they did not want loudspeakers in which they could not see the amplifier meters to determine whether the loudspeakers were working properly during a concert. Nevertheless, Meyer kept to his decision and produced the MSL-4 in 1994, the first powered loudspeaker intended for concert touring. The UPA-1 was converted to a self-powered configuration in 1996, and the rest of Meyer's product line followed suit. Advantages and disadvantages Fidelity The main benefit of active versus passive speakers is the higher fidelity associated with active crossovers and multiple amplifiers, including less IMD, higher dynamic range and greater output power. The amplifiers within the loudspeaker enclosure may be ideally matched to the individual drivers, eliminating the need for each amplifier channel to operate over the entire audio bandpass. Driver characteristics such as power handling and impedance may be matched to amplifier capabilities.
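The band-splitting at the heart of an active design can be illustrated as DSP. The sketch below implements a simple two-way digital crossover in Python with NumPy and SciPy; the 2 kHz crossover point and fourth-order Butterworth alignment are illustrative choices rather than a recommendation (commercial designs often use Linkwitz-Riley alignments and add per-driver equalization and limiting).

import numpy as np
from scipy import signal

FS = 48000  # sample rate, Hz
FC = 2000   # crossover frequency, Hz

# Design the band-splitting filters as second-order sections for stability.
lowpass = signal.butter(4, FC, btype="lowpass", fs=FS, output="sos")
highpass = signal.butter(4, FC, btype="highpass", fs=FS, output="sos")

def crossover(x):
    # Split one line-level channel into (woofer feed, tweeter feed).
    return signal.sosfilt(lowpass, x), signal.sosfilt(highpass, x)

# Example: a 100 Hz + 8 kHz test signal lands mostly in the expected bands.
t = np.arange(FS) / FS
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 8000 * t)
low, high = crossover(x)
print(low.std(), high.std())  # energy of each band

Because the split happens at line level, each band can then be amplified by a channel sized and equalized for its own driver, which is the fidelity argument made above.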
More specifically, active speakers have very short speaker cables inside the enclosure, so very little voltage and control is lost in long speaker cables with higher resistance. An active speaker often incorporates equalization tailored to each driver's response in the enclosure. This yields a flatter, more neutral sound. Limiting circuits (high-ratio audio compression circuits) can be incorporated to increase the likelihood of the driver surviving high-SPL use. Such limiters may be carefully matched to driver characteristics, resulting in a more dependable loudspeaker requiring less service. Distortion detection may be designed into the electronics to help determine the onset of protective limiting, reducing output distortion and eliminating clipping. Cabling Passive speakers need only one speaker cable but active speakers need two cables: an audio signal cable and an AC power cable. For multiple-enclosure high-power concert systems, the AC cabling is often smaller in diameter than the equivalent speaker cable bundles, so less copper is used. Some powered speaker manufacturers are now incorporating UHF or more frequently Wi-Fi wireless receivers so the speaker requires only an AC power cable. Weight A powered speaker usually weighs more than an equivalent passive speaker because the internal amplifier circuitry usually outweighs a speaker-level passive crossover. A loudspeaker associated with an integrated active system is even lighter because it has no internal crossover. A lightweight loudspeaker can be more easily carried and it is less of a load in rigging (flying). However, active speakers using lightweight Class-D amplifiers have narrowed the difference. Trucking for a sound system involves transporting all of the various components including amplifier racks, speaker cabling and loudspeaker enclosures. Overall shipping weight for an active loudspeaker system may be less than for a passive system because heavy passive speaker cable bundles are replaced by lighter AC cables and small diameter signal cables. Truck space and weight is reduced by eliminating amplifier racks. Cost The expense of a large concert active speaker system is less than the expense of an equivalent passive system. The passive system, or integrated active system with external electronics, requires separate components such as crossovers, equalizers, limiters and amplifiers, all mounted in rolling racks. Cabling for passive concert systems is heavy, large-diameter speaker cable, more expensive than smaller diameter AC power cables and much smaller audio signal cables. For high-end home use, active speakers usually cost more than passive speakers because of the additional amplifier channels required. Ease of use In professional audio and some home cinema and hi-fi applications, the active speaker may be easier to use because it eliminates the complexity of properly setting crossover frequencies, equalizer curves and limiter thresholds. Cabling is not as simple, however, because active speakers require two cables instead of one (an AC power cable and a cable with the signal, typically an XLR cable). In home audio, some audio engineers argue that a passive speaker, in which an unpowered speaker is connected to an amplifier, is the easiest to install and operate. Stability against improper use The amplifiers are adapted to the single loudspeakers employed, which avoids damage to the amplifier or loudspeaker due to mismatched or overloaded components. 
Stability against improper use
Because the amplifiers are matched to the specific drivers they power, damage to the amplifier or loudspeaker from mismatched or overloaded components is avoided. With passive speakers, by contrast, tweeters can be destroyed by the strong distortion produced when an overloaded amplifier clips, which leads to overheating. This occurs particularly when the loudness button on a conventional amplifier is activated and the bass tone control is also turned up while the listening volume is high, a typical situation when hi-fi speakers are used at private parties.
Servo-driven speakers
By including a negative feedback loop in the amplifier-speaker system, distortion can be substantially reduced. The feedback sensor is usually an accelerometer if it is mounted at the speaker cone; alternatively, the back EMF generated by the driver's voice coil as it moves within the magnetic gap can be monitored. In either case, specialist amplifier designs are needed, so servo speakers are inherently powered speakers.
Bass amplifiers
Some bass amplifier manufacturers sell powered speakers designed to add to the stage power of a combo bass amp. The user plugs a patch cord or XLR cable from the combo amp into the powered speaker.
References
Loudspeakers Loudspeaker technology Audio engineering Consumer electronics
Powered speakers
[ "Engineering" ]
2,170
[ "Electrical engineering", "Audio engineering" ]
2,941,827
https://en.wikipedia.org/wiki/Marker%20beacon
A marker beacon is a particular type of VHF radio beacon used in aviation, usually in conjunction with an instrument landing system (ILS), to give pilots a means of determining position along an established route to a destination such as a runway. According to Article 1.107 of the International Telecommunication Union's (ITU) ITU Radio Regulations (RR), a marker beacon is defined as "a transmitter in the aeronautical radionavigation service which radiates vertically a distinctive pattern for providing position information to aircraft".
History
From the 1930s until the 1950s, markers were used extensively along airways to provide an indication of an aircraft's specific position along the route, but since the 1960s they have become increasingly limited to ILS approach installations. They are now very gradually being phased out of service, especially in more developed parts of the world, as GPS and other technologies have made marker beacons increasingly redundant.
Types
There are three types of marker beacons that may be installed as part of their most common application, an instrument landing system.
Outer marker
The outer marker, which normally identifies the final approach fix (FAF), is situated on the same course/track as the localizer and the runway centerline, four to seven nautical miles before the runway threshold. It is typically located a short distance inside the point where the glideslope intercepts the intermediate altitude, and it transmits a 400 Hz tone on a low-power (3 watt) 75 MHz carrier. Its antenna is highly directional and is pointed straight up; the valid signal area is a narrow ellipse directly above the antenna. When the aircraft passes over the outer marker antenna, its marker beacon receiver detects the signal, and the system gives the pilot a visual indication (a blinking blue outer marker light) and an aural one (a continuous series of Morse-code-like 'dashes').
Locator outer marker
In the United States, the outer marker has often been combined with a non-directional beacon (NDB) to make a locator outer marker (LOM). An LOM is a navigation aid used as part of an instrument landing system (ILS) instrument approach for aircraft. Aircraft can navigate directly to the location using the NDB and can also be alerted by the marker beacon when they fly over it. The LOM is becoming less important now that GPS navigation is well established in the aviation community. Some countries, such as Canada, have abandoned marker beacons completely, replacing the outer marker with an NDB and, more recently, with GPS fixes.
In the U.S., LOMs are identified by two-letter Morse code modulated at 1,020 Hz, using the first two letters of the parent ILS's identification. For example, at New York's JFK runway 31R the ILS identifier is I-RTH and the LOM ident is RT. If this facility were a locator middle marker (LMM), its identifier would be the last two letters, TH.
Middle marker
A middle marker works on the same principle as an outer marker. It is normally positioned about 3,500 feet (roughly 1 km) before the runway threshold. When the aircraft is above the middle marker, the receiver's amber middle marker light starts blinking, and a repeating pattern of audible Morse-code-like dot-dashes at a frequency of 1,300 Hz sounds in the headset. This alerts the pilots that they are descending through the CAT I decision altitude (typically 200 feet above ground level on the glideslope) and should have already initiated the missed approach if one of several required visual cues has not been spotted.
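Since the three markers share one 75 MHz carrier and differ only in modulation tone and keying, the receiver's job is essentially tone discrimination. The following hedged Python sketch shows one way the demodulated audio could be sorted into the three lamps using the Goertzel algorithm; the 8 kHz sample rate and block-based decision are assumptions, and certified avionics receivers use purpose-built (often analog) filter designs rather than anything like this.

```python
# Sketch of marker-tone discrimination: after the 75 MHz carrier is
# demodulated to audio, measure the energy at each standard modulation
# tone and light the lamp for the strongest. Parameters are assumed.
import math

FS = 8_000  # assumed audio sample rate after demodulation
TONES = {400.0: "blue (outer)", 1_300.0: "amber (middle)", 3_000.0: "white (inner)"}

def goertzel_power(block: list[float], tone_hz: float) -> float:
    """Signal power at a single frequency via the Goertzel recurrence."""
    n = len(block)
    k = round(n * tone_hz / FS)  # nearest DFT bin to the target tone
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in block:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def which_lamp(block: list[float]) -> str:
    """Return the lamp label for the dominant marker tone in this block."""
    tone = max(TONES, key=lambda f: goertzel_power(block, f))
    return TONES[tone]

# Example: a synthetic 1,300 Hz block should select the amber (middle) lamp.
block = [math.sin(2 * math.pi * 1_300 * t / FS) for t in range(400)]
print(which_lamp(block))  # -> "amber (middle)"
```

Detecting the dash/dot keying rhythm on top of the tone decision would distinguish, for example, an inner marker from a back course marker, both of which use 3,000 Hz.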
Inner marker
Similar to the outer and middle markers, an inner marker is located at the beginning (threshold) of the runway on some ILS approach systems (usually Category II and III) having decision heights of less than 200 feet (60 m) AGL. It triggers a flashing white light on the same marker beacon receiver used for the outer and middle markers, along with a series of audio-tone 'dots' at a frequency of 3,000 Hz in the headset.
On some older marker beacon receivers, instead of the "O", "M" and "I" indicators (outer, middle, inner), the indicators are labeled "A" (or FM/Z), "O" and "M" (airway or fan and Z marker, outer, middle). The airway marker was used to indicate reporting points along the centerline of the now obsolete "Red" airways; this was sometimes a "fan" marker, whose radiated pattern was elongated at right angles across the airway course so that an aircraft slightly off course would still receive it. A "Z" marker was sometimes located at low- or medium-frequency range sites to accurately denote station passage. As airway beacons used the same 3,000 Hz audio frequency as the inner marker, the "A" indicator on older receivers can be used to detect the inner marker.
Back course marker
A back course marker (BC) normally indicates the ILS back-course final approach fix, where approach descent is commenced. It is identified by pairs of Morse-code "dots" at 3,000 Hz (95 pairs per minute), which trigger the white light on a marker beacon indicator but with a different audio rhythm from an inner marker or en-route marker.
Fan marker
The term fan marker refers to the older type of beacon used mostly for en-route navigation. Fan-type marker beacons were sometimes part of a non-precision approach and are identified by a flashing white light and a repeating dot-dash-dot signal. Recent editions of the FAA's AIM publication no longer mention fan markers; as of August 2024, nineteen fan markers remained in the FAA database, seven of them listed as "DECOMMISSIONED".
See also
AN/MRN-3
Transponder Landing System (TLS)
Index of aviation articles
References
External links
2008 Federal Radionavigation Plan. This publication has a detailed description of ILS and other navigational systems.
Operational Notes on Visual-Aural Radio Range & Associated Marker Beacons, a 1953 publication.
International Telecommunication Union (ITU) Radio stations and systems ITU Radio navigation Aeronautical navigation systems Aircraft landing systems Beacons
Marker beacon
[ "Technology" ]
1,263
[ "Aircraft instruments", "Aircraft landing systems" ]
2,941,860
https://en.wikipedia.org/wiki/Inositol%20phosphate
Inositol phosphates are a group of mono- to hexaphosphorylated inositols. Each form of inositol phosphate is distinguished by the number and position of the phosphate groups on the inositol ring:
inositol monophosphate (IP)
inositol bisphosphate (IP2)
inositol trisphosphate (IP3)
inositol tetrakisphosphate (IP4)
inositol pentakisphosphate (IP5)
inositol hexaphosphate (IP6), also known as phytic acid or, as a salt, phytate
A series of phosphorylation and dephosphorylation reactions is carried out by at least 19 phosphoinositide kinase and 28 phosphoinositide phosphatase enzymes, allowing inter-conversion between the inositol phosphate compounds according to cellular demand. Inositol phosphates play a crucial role in various signal transduction pathways responsible for cell growth and differentiation, apoptosis, DNA repair, RNA export, regeneration of ATP and more.
Functions
Inositol trisphosphate
The inositol-phospholipid signaling pathway generates IP3 through the cleavage of phosphatidylinositol 4,5-bisphosphate (PIP2), found in the lipid bilayer of the plasma membrane, by phospholipase C in response to either receptor tyrosine kinase or Gq alpha subunit G protein-coupled receptor signaling. Soluble inositol trisphosphate (IP3) rapidly diffuses through the cytosol and binds to inositol trisphosphate receptors (InsP3Rs), calcium channels located in the endoplasmic reticulum. This releases calcium into the cytosol, serving as a rapid and potent signal for various cellular processes.
Further reading: Function of calcium in humans
Other
Inositol tetra-, penta-, and hexa-phosphates have been implicated in gene expression.
Inositol hexaphosphate
Inositol hexaphosphate (IP6) is the most abundant inositol phosphate isomer. IP6 is involved in various biological activities such as neurotransmission, the immune response, the regulation of kinase and phosphatase proteins, and the activation of calcium channels. IP6 is also involved in ATP regeneration in plants, as well as in insulin exocytosis in pancreatic β cells.
Inositol hexaphosphate also facilitates the formation of the six-helix bundle and assembly of the immature HIV-1 Gag lattice. IP6 makes ionic contacts with two rings of lysine residues at the centre of the Gag hexamer. Proteolytic cleavage then unmasks an alternative binding site, where IP6 interaction promotes the assembly of the mature capsid lattice. These studies identify IP6 as a naturally occurring small molecule that promotes both the assembly and the maturation of HIV-1.
References
External links
Organophosphates Phosphate esters Signal transduction Inositol
Inositol phosphate
[ "Chemistry", "Biology" ]
677
[ "Neurochemistry", "Inositol", "Biochemistry", "Signal transduction" ]